View All Posts
11. Penalty and Barrier Methods
In the last article, we discussed the use of feasible-point methods for solving constrained optimization problems. These methods are based on minimizing the Lagrangian function while attempting to attain and maintain feasibility. When inequality constraints are present, these methods solve a sequence of subproblems with a changing active set until a solution to the original constrain…
2023.12.14
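The excerpt above only previews the topic, so as a rough illustration of the penalty idea the title names, here is a minimal sketch of a quadratic-penalty method for an equality-constrained problem. The example problem, the penalty schedule, and the use of scipy.optimize.minimize are my own assumptions, not taken from the post.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: minimize f(x) = x0^2 + x1^2
# subject to g(x) = x0 + x1 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: x[0] + x[1] - 1.0

def quadratic_penalty(f, g, x0, rho=1.0, factor=10.0, iters=6):
    """Minimize f + (rho/2) * g^2 for an increasing sequence of rho
    (a sketch of the quadratic-penalty idea, not the post's method)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        pen = lambda x: f(x) + 0.5 * rho * g(x)**2
        x = minimize(pen, x).x      # unconstrained subproblem
        rho *= factor               # tighten the penalty
    return x

print(quadratic_penalty(f, g, x0=[0.0, 0.0]))  # approaches (0.5, 0.5)
```

As rho grows, the unconstrained minimizers of the penalized function approach the constrained solution (0.5, 0.5).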
10. Feasible-Point Methods
In this article, we examine methods that solve constrained optimization problems by attempting to remain feasible at every iteration. If all the constraints are linear, maintaining feasibility is straightforward. However, when non-linear constraints are present, more elaborate procedures are required. Although these methods strive to maintain feasibility at every iteration, they do not al…
2023.12.13
9. Optimality Conditions for Constrained Optimization
In a constrained optimization problem, the constraints can be
- linear or non-linear
- equalities or inequalities
Generally, the objective function and constraints can be represented as follows:
$$\min f(x) \quad \text{subject to} \quad g_i(x) = 0,\ i \in \mathcal{E}; \qquad h_i(x) \ge 0,\ i \in \mathcal{I}$$
Therefore, it is much harder than un…
2023.12.13
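To make the general formulation above concrete, here is one small instance (my own illustration, not taken from the post), with one equality and one inequality constraint:

$$\min_{x \in \mathbb{R}^2} f(x) = x_1^2 + x_2^2 \quad \text{subject to} \quad g_1(x) = x_1 + x_2 - 1 = 0, \qquad h_1(x) = x_1 \ge 0$$

Here $\mathcal{E} = \{1\}$ and $\mathcal{I} = \{1\}$; the minimizer is $x^* = (\tfrac12, \tfrac12)$, where the equality constraint holds and the inequality is inactive, since $h_1(x^*) = \tfrac12 > 0$.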
8. Termination Rules
Ideally, $\nabla f(x_k) = 0$ and $\nabla^2 f(x_k)$ is positive semi-definite. But there are two reasons why this is not realistic:
1. It is unlikely that the calculated value of the gradient would ever be exactly zero, because of rounding errors in computer calculations.
2. No algorithm is guaranteed to find such a point in a finite amount of time.
As an alternative…
2023.12.13
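The excerpt breaks off before stating the practical alternative, so the relative test below is a common textbook choice and my own assumption, not necessarily the rule the post goes on to use.

```python
import numpy as np

def should_terminate(grad, f_val, tol=1e-6):
    """Relative termination test (an assumption, not necessarily the
    post's rule): stop when ||grad f(x_k)|| <= tol * (1 + |f(x_k)|),
    which degrades gracefully to an absolute test when |f| is small."""
    return np.linalg.norm(grad) <= tol * (1.0 + abs(f_val))

# Example: f(x) = x^T x near its minimizer, where grad f(x) = 2x.
x = np.array([1e-7, -2e-7])
print(should_terminate(grad=2 * x, f_val=x @ x))  # True
```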
7. Derivative-Free Methods
So far, all the methods considered require at least first-order derivatives. But what if
- the function is not differentiable everywhere?
- it is very time-consuming to perform derivative calculations?
That is why derivative-free methods arose: search methods without the use of any derivative information.
Compass Search. Intuition: try a few points around the current solution. Better point found: move, a…
2023.12.13
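Here is a minimal sketch of the compass-search loop the excerpt describes: probe the coordinate directions around the current point, move when a better point is found, and otherwise shrink the step. The shrink factor, stopping rule, and test function are my own assumptions.

```python
import numpy as np

def compass_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Compass search: probe +/- each coordinate direction.
    Better point found -> move there; otherwise shrink the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:                      # steps too small: stop
            break
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = f(trial)
                if f_trial < fx:            # better point found: move
                    x, fx, improved = trial, f_trial, True
                    break
            if improved:
                break
        if not improved:                    # no better neighbor: shrink
            step *= shrink
    return x

# Example on a smooth test function (my choice, not from the post):
print(compass_search(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [0.0, 0.0]))
```

Note that no derivatives are evaluated anywhere; only function values are compared.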
6. Least-Squares Data Fitting
What is the least-squares method? The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of $n$ points $(x_i, y_i)$, where $x_i$ is an independent variable and $y_i$ is a dependent variable whose value is found by observation. The model function has the form $f(x, \beta)$, where $m$ adjustable para…
2023.11.27
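As one concrete instance of fitting $f(x, \beta)$ to $n$ points $(x_i, y_i)$, here is a linear least-squares sketch; the model (a line, so $m = 2$ parameters) and the synthetic data are my own choices, not the post's example.

```python
import numpy as np

# Synthetic data set of n points (x_i, y_i), invented for illustration.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 3.0 * x + 1.0 + 0.05 * rng.standard_normal(x.size)

# Model f(x, beta) = beta_0 + beta_1 * x  (m = 2 adjustable parameters).
# Least squares: choose beta minimizing sum_i (y_i - f(x_i, beta))^2.
A = np.column_stack([np.ones_like(x), x])        # design matrix
beta, *_ = np.linalg.lstsq(A, y, rcond=None)     # solves min ||A beta - y||^2
print(beta)  # approximately [1.0, 3.0]
```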
5. Methods for Unconstrained Optimization
A principal advantage of Newton's method is that it converges rapidly when the current estimate of the variables is close to the solution. However, it also has disadvantages, and overcoming these disadvantages has led to many other techniques. In particular, Newton's method can
- fail to converge
- converge to a point that is not a stationary point
However, this kind of problem can be overco…
2023.11.27
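For reference, here is the pure Newton iteration the excerpt discusses, with no safeguards, so it shows the fast local behavior but would also exhibit the failure modes listed above when started far from a solution. The test function and starting point are my own.

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Pure Newton iteration x_{k+1} = x_k - H(x_k)^{-1} grad f(x_k).
    Converges rapidly near a solution, but no globalization is applied,
    so it may diverge or stall from poor starting points."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)  # Newton step
    return x

# Example (my choice): f(x) = x0^4 + x1^2, minimizer at the origin.
grad = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 2.0]])
print(newton(grad, hess, [1.0, 1.0]))
```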
4. Guaranteeing Convergence
The auxiliary techniques that are used to guarantee convergence attempt to rein in the optimization method when it is in danger of getting out of control, and they also try to avoid intervening when the optimization method is performing effectively. The term globalization strategy is used to distinguish the method used for the new estimate of the solution from the method f…
2023.11.27
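A classic globalization strategy of the kind the excerpt describes is a backtracking line search. The sketch below (Armijo sufficient-decrease condition; parameter choices are mine) intervenes only when the full step fails the decrease test, matching the "avoid intervening when the method is performing effectively" idea.

```python
import numpy as np

def backtracking(f, grad, x, p, alpha=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking: accept the full step when it already gives
    sufficient decrease (no intervention), otherwise shrink it until
    f(x + alpha*p) <= f(x) + c * alpha * grad(x)^T p."""
    fx, slope = f(x), grad(x) @ p   # slope < 0 for a descent direction
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= beta
    return alpha

# Example (my choice): steepest-descent step on f(x) = x^T x.
f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([2.0, -1.0])
p = -grad(x)                        # descent direction
print(backtracking(f, grad, x, p))  # step length satisfying Armijo
```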