Computer Science/Optimization
13. Simplex method
Algorithmic Idea. The optimum is an extreme point (aka vertex). → Of course, we have to be sure that the problem is bounded. An extreme point is a basic feasible solution (BFS), and a BFS corresponds to a basis matrix. Consider a method that moves to an adjacent vertex. 💡 In other words: find an extreme point, check the optimality condition, and if it is not satisfied, move to an adjacent vertex. Since the problem is convex, a point that satisfies the optimality condition is guaranteed to be a global minimum. Ba..
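To make the BFS/basis-matrix correspondence concrete, here is a minimal numpy sketch (my illustration, not code from the post; the LP data is a made-up example). Choosing a basis B of columns of A gives the candidate vertex x_B = B^{-1}b, and the signs of the reduced costs are the optimality check that decides whether to move to an adjacent vertex:

```python
import numpy as np

# Hypothetical standard-form LP: min c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

basis = [2, 3]                      # indices of the basic variables
B = A[:, basis]                     # basis matrix
x_B = np.linalg.solve(B, b)         # a BFS if x_B >= 0

# Reduced costs of the nonbasic variables: c_j - c_B^T B^{-1} a_j.
y = np.linalg.solve(B.T, c[basis])  # simplex multipliers
nonbasis = [j for j in range(A.shape[1]) if j not in basis]
reduced = c[nonbasis] - y @ A[:, nonbasis]
print(x_B, reduced)  # a negative reduced cost -> move to an adjacent vertex
```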
2023.12.19
12. Geometry of Linear Programming
Introduction. Linear programs can be studied both algebraically and geometrically. The two approaches are equivalent, but one or the other may be more convenient for answering a particular question about a linear program. The algebraic point of view is based on writing the linear program in a particular way, called standard form. Then the coefficient matrix of the constraints of the linear program ..
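For reference, the standard form the preview refers to is the usual one from LP texts (a reconstruction, not quoted from the post):

$$
\min_{x}\; c^{\top}x \quad \text{subject to} \quad Ax = b,\; x \ge 0,
$$

where any linear program with inequality constraints can be rewritten this way by introducing slack variables.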
2023.12.16
11. Penalty and Barrier Methods
Introduction. In the last article, we discussed the use of feasible-point methods for solving constrained optimization problems. These methods are based on minimizing the Lagrangian function while attempting to attain and maintain feasibility. When inequality constraints are present, these methods solve a sequence of subproblems with a changing active set until a solution to the original constrain..
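By contrast, penalty methods (the post's topic) trade feasibility for unconstrained subproblems. A minimal quadratic-penalty sketch on a made-up toy problem (my illustration, not code from the post): replace the constrained problem with min f(x) + (mu/2) g(x)^2 and let mu grow.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1.
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: x[0] + x[1] - 1.0          # equality constraint g(x) = 0

x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Quadratic penalty: solve an unconstrained subproblem for fixed mu,
    # warm-starting from the previous solution.
    penalized = lambda x, mu=mu: f(x) + 0.5 * mu * g(x)**2
    x = minimize(penalized, x).x
print(x)  # approaches the true solution (0.5, 0.5) as mu grows
```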
2023.12.14
10. Feasible-Point Methods
Introduction. In this article, we examine methods that solve constrained optimization problems by attempting to remain feasible at every iteration. If all the constraints are linear, maintaining feasibility is straightforward. However, when non-linear constraints are present, more elaborate procedures are required. Although these methods strive to maintain feasibility at every iteration, they do not al..
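To see why linear constraints make feasibility easy, here is a small numpy sketch on a made-up problem (my illustration, not code from the post): with equality constraints Ax = b, projecting every step onto the null space of A keeps each iterate exactly feasible.

```python
import numpy as np

# Hypothetical linear equality constraint Ax = b.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
grad = lambda x: 2.0 * x                 # gradient of f(x) = ||x||^2

x = np.array([1.0, 0.0, 0.0])            # feasible starting point
# Orthogonal projector onto null(A): steps P @ d satisfy A(P @ d) = 0.
P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)
for _ in range(100):
    x = x - 0.1 * P @ grad(x)            # projected gradient step
print(x, A @ x)  # x -> (1/3, 1/3, 1/3), and A @ x == b throughout
```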
2023.12.13
9. Optimality Conditions for Constrained Optimization
Introduction. In constrained optimization problems, the constraints can be linear or non-linear, and equalities or inequalities. Generally, the objective function and constraints can be written as
$$
\min f(x) \quad \text{subject to} \quad g_i(x) = 0,\ i \in \mathcal{E}, \qquad h_i(x) \ge 0,\ i \in \mathcal{I}.
$$
Therefore, it is much harder than un..
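For the equality-constrained case, the first-order condition is stationarity of the Lagrangian, $\nabla f(x^*) = \sum_i \lambda_i \nabla g_i(x^*)$. A worked toy example (mine, not from the post):

$$
\min\; x_1 + x_2 \quad \text{subject to} \quad x_1^2 + x_2^2 = 2.
$$

$$
\nabla f = \lambda \nabla g \;\Rightarrow\; (1, 1) = \lambda\,(2x_1, 2x_2)
\;\Rightarrow\; x_1 = x_2 = \tfrac{1}{2\lambda},
$$

and substituting into the constraint gives $\lambda = \pm\tfrac{1}{2}$; the minimizer is $(x_1, x_2) = (-1, -1)$ with $\lambda = -\tfrac{1}{2}$, while $\lambda = +\tfrac{1}{2}$ gives the maximizer $(1, 1)$.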
2023.12.13
8. Termination Rules
Termination Rules. Ideally, we stop when $\nabla f(x_k) = 0$ and $\nabla^2 f(x_k)$ is positive semi-definite. But there are two reasons why this is not realistic. First, it is unlikely that the calculated value of the gradient would ever be exactly zero, because of rounding errors in computer calculations. Second, no algorithm is guaranteed to find such a point in a finite amount of time. As an alternative..
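The preview is cut off before the alternative, so the following is a hedged sketch of one common textbook combination of practical tests (relative gradient size plus stalled progress), not necessarily the post's exact rules:

```python
import numpy as np

def should_stop(f_k, grad_k, x_k, x_prev, f_prev, eps=1e-6):
    """Practical termination tests: a common textbook combination
    (an assumption; the post's own rules are truncated in the preview)."""
    # Gradient test: ||grad f|| small relative to the size of |f|.
    grad_small = np.linalg.norm(grad_k) <= eps * (1.0 + abs(f_k))
    # Progress tests: iterates and function values barely change.
    x_stalled = np.linalg.norm(x_k - x_prev) <= np.sqrt(eps) * (1.0 + np.linalg.norm(x_k))
    f_stalled = abs(f_k - f_prev) <= eps * (1.0 + abs(f_k))
    return grad_small or (x_stalled and f_stalled)
```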
2023.12.13
7. Derivative-Free Methods
Motivation. So far, all of the methods considered require at least first-order derivatives. But what if the function is not differentiable everywhere, or performing the derivative calculation is very time-consuming? This is the motivation for derivative-free methods: search methods that do not use any derivative information. Compass Search. Intuition: try a few points around the current solution. Better point found: move, a..
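A runnable sketch of compass search as the preview describes it (the step-halving rule on failure is my assumption for the truncated part):

```python
import numpy as np

def compass_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    """Probe +/- step along each coordinate; move on improvement,
    otherwise shrink the step until it falls below tol."""
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:            # better point found: move there
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:               # no better neighbor: shrink step
            step *= 0.5
    return x, fx

x, fx = compass_search(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2,
                       np.array([0.0, 0.0]))
print(x)  # approximately (1, -2)
```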
2023.12.13
6. Least-Squares Data Fitting
What is the least-squares method? The objective is to adjust the parameters of a model function to best fit a data set. A simple data set consists of $n$ points $(x_i, y_i)$, where $x_i$ is an independent variable and $y_i$ is a dependent variable whose value is found by observation. The model function has the form $f(x, \beta)$, where the $m$ adjustable para..
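When the model is linear in the parameters $\beta$, the fit reduces to a linear least-squares system. A minimal numpy sketch on made-up data (my illustration, not from the post):

```python
import numpy as np

# Hypothetical linear model f(x, beta) = beta_0 + beta_1 * x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])          # observed values

# Design matrix: one column per adjustable parameter beta_j.
X = np.column_stack([np.ones_like(x), x])
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # [intercept, slope] minimizing the sum of squared residuals
```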
2023.11.27