View All Posts
3. Guaranteeing Descent

Newton's Methods for Minimization. The basic idea can be derived using the first-order necessary optimality condition $\nabla f(x) = 0$. Alternatively, consider the second-order Taylor series approximation $f(x+p) \approx f(x) + \nabla f(x)p + \frac{1}{2}p^T\nabla^2 f(x)p$. Let $q(p) = f(x) + \nabla f(x)p + \frac{1}{2}p^T\nabla^2 f(x)p$ …
2023.11.27
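Minimizing the quadratic model $q(p)$ gives the Newton step $p = -(\nabla^2 f(x))^{-1}\nabla f(x)$. Below is a minimal sketch of the resulting iteration in Python; the helper name newton_minimize, the toy objective, and the tolerance are illustrative assumptions, not taken from the post.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method for minimization: at each iterate, minimize the
    local quadratic model q(p) = f(x) + grad(x)^T p + 0.5 p^T hess(x) p,
    whose minimizer solves the linear system hess(x) p = -grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:       # first-order condition holds
            break
        p = np.linalg.solve(hess(x), -g)  # Newton step
        x = x + p
    return x

# Toy objective (assumed for illustration): f(x, y) = (x - 1)^2 + 2 y^2
grad = lambda v: np.array([2 * (v[0] - 1), 4 * v[1]])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton_minimize(grad, hess, x0=[5.0, -3.0]))  # one step to [1. 0.]
```

Since this objective is exactly quadratic, the model $q(p)$ is exact and a single Newton step lands on the minimizer.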
2. Optimality Conditions, Convexity, Newton's Method for Equations

Optimality Conditions: Preliminaries. Local minima and maxima have one thing in common: $\nabla f(x) = 0$. First-order Necessary Condition: from now on, $f$ is differentiable and its first and second derivatives are continuous for every $x \in X$, where $X$ is the domain of $f$. If $x^*$ is a local minimum, then $\nabla f(x^*) = 0$. Not a sufficient condition, since it c…
2023.11.27
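To see why the first-order condition is necessary but not sufficient, a standard calculus counterexample (added here for illustration, not quoted from the post):

```latex
% f(x) = x^3 satisfies the first-order condition at x = 0 ...
f(x) = x^3:\quad f'(0) = 0, \text{ yet } x = 0 \text{ is an inflection point, not a local minimum.}
% ... while f(x) = x^2 also satisfies the second-order condition:
f(x) = x^2:\quad f'(0) = 0 \text{ and } f''(0) = 2 > 0, \text{ so } x = 0 \text{ is a local minimum.}
```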
1. Basics

A Generic Optimization Model:

$$\min f(x_1, x_2, \dots, x_n)$$

$$\text{subject to}$$

$$g_1(x_1, x_2, \dots, x_n) \le b_1 \\ g_2(x_1, x_2, \dots, x_n) \le b_2 \\ \vdots \\ g_m(x_1, x_2, \dots, x_n) \le b_m$$

In co…
2023.11.27
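As a concrete instance of this generic model, here is a minimal sketch using scipy.optimize.minimize; the objective, the single constraint $g_1$, and the bound $b_1$ below are invented for illustration.

```python
from scipy.optimize import minimize

# min f(x1, x2)  subject to  g1(x1, x2) <= b1: an instance of the
# generic model with n = 2 variables and m = 1 constraint.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
g1 = lambda x: x[0] + x[1]
b1 = 2.0

# SciPy expects inequality constraints as c(x) >= 0, so pass b1 - g1(x).
res = minimize(f, x0=[0.0, 0.0],
               constraints=[{"type": "ineq", "fun": lambda x: b1 - g1(x)}])
print(res.x)  # approx. [1.5, 0.5]: the optimum lies on the boundary x1 + x2 = 2
```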
Principle of Mathematical Analysis Solution
Last Update: 2023/11/4
2023.11.05
8. Learning from nature

Natural computation: algorithms derived from observations of natural phenomena. Simulations are used to learn more about these phenomena and to learn new ways to solve computational problems. Natural computation and machine learning. What is learning? The ability to improve over time, based on experience. Why? Solutions to problems are not always programmable. Examples: handwritten character recognition, adaptive control of p…
2023.10.24
7. Philosophy, Ethics, and Safety of AI

Strong vs Weak AI. The Strong AI hypothesis is the philosophical position that a computer program that causes a machine to behave exactly like a human being would also give the machine subjective conscious experience and a mind. On the other hand, Weak AI is the philosophical position that an AI that appears to behave exactly like a human being is only a simulation of human cognitive function, and i…
2023.10.24
6. Bayesian Networks

Definition. Bayesian networks are networks of random variables: each variable is associated with a node in the network. If we know of the existence of conditional independencies between variables, we can simplify the network by removing edges. This leads to the simplified network. 💡 Here, the absence of a connecting edge implies conditional independence. Causal Networks: while correlation (association) between variables is an imp…
2023.10.24
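A minimal sketch of how a missing edge becomes a factored joint distribution; the three-node chain A → B → C and all probability values below are invented for illustration.

```python
# Chain A -> B -> C: there is no edge between A and C, so C is
# conditionally independent of A given B, and the joint factors as
# P(A, B, C) = P(A) * P(B | A) * P(C | B).
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
p_c_given_b = {True: {True: 0.5, False: 0.5}, False: {True: 0.1, False: 0.9}}

def joint(a, b, c):
    # C is looked up by b only: the removed edge A - C never appears here.
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

total = sum(joint(a, b, c) for a in (True, False)
            for b in (True, False) for c in (True, False))
print(total)  # 1.0: a valid distribution over all 8 outcomes
```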
5. Markov Chains

Markov Chains appear all over computer science, mathematics, and AI. Just some of the applications: biology (birth-death processes, disease spreading); biology (DNA/RNA/protein sequence analysis); speech recognition; control theory, filtering. 💡 LLMs (Large Language Models) borrowed many of their ideas from Markov Chains, though these have since been superseded by deep learning algorithms. Representing a Markov Chain. Markov chains and Markov process…
2023.10.24
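A minimal sketch of representing a Markov chain as a row-stochastic transition matrix and iterating a state distribution toward stationarity; the two-state example and its probabilities are invented for illustration.

```python
import numpy as np

# Transition matrix P: P[i, j] = Pr(next state = j | current state = i).
# States: 0 = sunny, 1 = rainy (illustrative values only).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

pi = np.array([1.0, 0.0])  # initial distribution: certainly sunny
for _ in range(100):
    pi = pi @ P            # one step of the chain: pi_{t+1} = pi_t P
print(pi)                  # approaches the stationary distribution [5/6, 1/6]
# Check: the stationary pi satisfies pi = pi P.
```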