→ Of course, we have to be sure that it is bounded.
Extreme point = a basic feasible solution (BFS)
A BFS corresponds to a basis matrix
Consider a method that moves to an adjacent vertex
💡
In other words: find an extreme point, check the optimality condition, and if it is not satisfied, move to an adjacent vertex. Because the problem is convex, any point that satisfies the optimality condition is guaranteed to be a global minimum.
Basic variable: $\ge 0$ (zero is possible because of degeneracy)
Non-basic variable: $= 0$
Basis matrix update:
Entering variable: non-basic → basic
Leaving variable: basic → non-basic
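A pivot only swaps one index between these two sets. A minimal sketch of that bookkeeping (the index lists below are made up for illustration):

```python
# Hypothetical index bookkeeping for one pivot: the entering index moves from the
# non-basic set into the basic set, and the leaving index moves the other way.
basic = [0, 3, 4]      # indices of the current basic variables (assumed)
nonbasic = [1, 2]      # indices of the current non-basic variables (assumed)

entering = 1           # non-basic index chosen to enter the basis
leaving = 3            # basic index chosen to leave the basis

basic[basic.index(leaving)] = entering        # entering: non-basic -> basic
nonbasic[nonbasic.index(entering)] = leaving  # leaving: basic -> non-basic

print(basic, nonbasic)  # [0, 1, 4] [3, 2]
```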
Algorithm
$$\min\ c^T x$$
subject to
$$Ax = b,\quad x_i \ge 0\ \ \forall i$$
Normally, the number of columns is larger than the number of rows.
W.L.O.G. $A \in L(\mathbb{R}^n, \mathbb{R}^m)$ with $n \ge m$.
Assume that we already know an initial starting point.
💡
We can obtain an initial point for this algorithm from Phase-1, a similar but slightly different linear program. It will be discussed later.
We only consider the columns in the basis.
W.L.O.G.
$$A = [B \mid N],\quad x = \begin{bmatrix} x_B \\ x_N \end{bmatrix}$$
Find the solution for
$$B x_B = b$$
or equivalently
$$x_B = B^{-1} b$$
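As a quick illustration, here is a small numpy sketch (with a made-up $B$ and $b$) that recovers $x_B$ by solving the linear system instead of forming $B^{-1}$ explicitly:

```python
import numpy as np

# Made-up basis matrix B (square, invertible) and right-hand side b.
B = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])

# x_B = B^{-1} b, obtained by solving B x_B = b rather than inverting B.
x_B = np.linalg.solve(B, b)
print(x_B)   # [1. 2.]
```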
What can we do when the given point is not an optimal point?
Revisit our objective function
$$\min\ c^T x = c_B^T x_B + c_N^T x_N$$
subject to
$$Ax = b,\quad x_i \ge 0\ \ \forall i$$
Our given point satisfies $x_B = B^{-1} b - B^{-1} N x_N$.
Therefore our objective function is
$$\min\ c^T x = c_B^T B^{-1} b + (c_N^T - c_B^T B^{-1} N) x_N$$
We call $c_N^T - c_B^T B^{-1} N$ the reduced cost, and $c_B^T B^{-1} b$ is the current objective function value.
Note that if a non-basic variable increases its value from zero, its reduced cost indicates the per-unit change in the objective function.
💡
Entering variable: the variable with the most negative reduced cost. However, this choice is not guaranteed to be the best one, since the other constraints limit how far the variable can actually be increased. Other, more sophisticated ways of choosing the entering variable are possible, but they may require additional computation.
💡
Additionally, for $B$ to be invertible it must at least be a square matrix. That is, the number of basic variables must be chosen equal to the number of constraints.
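As a rough sketch (with hypothetical $B$, $N$, $c_B$, $c_N$), the reduced cost computation and the most-negative-reduced-cost choice of entering variable look like this:

```python
import numpy as np

# Hypothetical data: a basis B, the non-basic columns N, and the cost split c_B, c_N.
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
N = np.array([[1.0, 0.0],
              [1.0, 1.0]])
c_B = np.array([1.0, 2.0])
c_N = np.array([3.0, -1.0])

# Reduced costs: c_N^T - c_B^T B^{-1} N (computed with a solve, not an explicit inverse).
y = np.linalg.solve(B.T, c_B)      # y = B^{-T} c_B, i.e. y^T = c_B^T B^{-1}
reduced = c_N - N.T @ y
print(reduced)                     # [ 2. -1.]

if np.all(reduced >= 0):
    print("current basis is optimal")
else:
    entering = int(np.argmin(reduced))           # most negative reduced cost enters
    print("entering non-basic index:", entering) # 1
```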
The steps of the simplex algorithm are given below.
The optimality test: compute the reduced cost
$$c_N^T - c_B^T B^{-1} N$$
If all of its coefficients are non-negative, the current basis is optimal.
If not, select a variable with a negative reduced cost and make it the entering variable. For simplicity, say the entering variable is the $i$'th component of $x_N$.
Select the leaving variable: increase the value of $x_{N_i}$ in
$$x_B = B^{-1} b - B^{-1} N_i x_{N_i}$$
until one of the basic variables is about to become negative (i.e., just before a non-negativity constraint is violated), where $N_i$ is the $i$'th column of $N$. That basic variable becomes the leaving variable.
Update: update the basis matrix $B$ and the vector of basic variables $x_B$.
Note that the column related to $x_2$ is $(1, 1, 0)^T$.
💡
$s_1$ is the leaving variable, since its non-negativity constraint is the first to be violated as we increase the value of $x_2$.
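Putting the three steps together, a minimal sketch of a single revised-simplex iteration might look like the following; the function and the small test problem at the end are made up for illustration:

```python
import numpy as np

def simplex_iteration(A, b, c, basic):
    """One iteration of the (revised) simplex method for
    min c^T x  s.t.  Ax = b, x >= 0,
    where `basic` lists the indices of the current basic variables.
    Returns the updated basic index list and a status string.
    """
    m, n = A.shape
    nonbasic = [j for j in range(n) if j not in basic]
    B = A[:, basic]
    N = A[:, nonbasic]

    x_B = np.linalg.solve(B, b)              # current BFS: x_B = B^{-1} b

    # 1. Optimality test: reduced costs c_N^T - c_B^T B^{-1} N.
    y = np.linalg.solve(B.T, c[basic])       # y^T = c_B^T B^{-1}
    reduced = c[nonbasic] - N.T @ y
    if np.all(reduced >= -1e-10):
        return basic, "optimal"
    entering = nonbasic[int(np.argmin(reduced))]   # most negative reduced cost

    # 2. Ratio test: how far can the entering variable increase?
    d = np.linalg.solve(B, A[:, entering])   # d = B^{-1} N_i
    if np.all(d <= 1e-10):
        return basic, "unbounded"            # no positive entry: unbounded problem
    ratios = np.full(m, np.inf)
    pos = d > 1e-10
    ratios[pos] = x_B[pos] / d[pos]
    leave_pos = int(np.argmin(ratios))       # first basic variable to hit zero

    # 3. Update: swap the entering and leaving indices.
    basic = list(basic)
    basic[leave_pos] = entering
    return basic, "improved"

# Made-up test problem: min -x1 - x2  s.t.  x1 + x2 + s1 = 4, x1 + s2 = 2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -1.0, 0.0, 0.0])
print(simplex_iteration(A, b, c, [2, 3]))    # start from the slack basis
```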
Example 2
$$A = \begin{bmatrix} 2 & 1 & 6 \\ 1 & 0 & 2 \end{bmatrix},\quad x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix},\quad b = \begin{bmatrix} 4 \\ 1 \end{bmatrix}$$
In addition,
$$x_B = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},\quad x_N = \begin{bmatrix} x_3 \end{bmatrix}$$
Suppose the reduced cost of $x_3$ is negative, so we can make $x_3$ the entering variable. But we have to decide how much we can increase $x_3$ without violating any constraint. Ideally we would increase $x_3$ as much as possible, but this is not allowed, because the basic variables also depend on $x_3$ through the constraints.
$$\begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 6 \\ 2 \end{bmatrix} x_3 = \begin{bmatrix} 4 \\ 1 \end{bmatrix}$$
$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix}^{-1}\begin{bmatrix} 4 \\ 1 \end{bmatrix} - \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix}^{-1}\begin{bmatrix} 6 \\ 2 \end{bmatrix} x_3$$
$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 2 \\ 2 \end{bmatrix} x_3$$
For example, since $x_1 = 1 - 2x_3$, if we take $x_3$ greater than $1/2$, then $x_1$ becomes negative, which violates the non-negativity constraint.
$$x_1 = 1 - 2x_3,\quad x_2 = 2 - 2x_3$$
Therefore, we can increase $x_3$ up to $1/2$. In this scenario, $x_1$ has to become a non-basic variable. We call it the leaving variable.
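These numbers can be checked with a few lines of numpy (a sketch using the $B$, $N_i$, and $b$ given above):

```python
import numpy as np

# Data from Example 2: B is the basis for (x1, x2), N_i is the column of x3.
B = np.array([[2.0, 1.0],
              [1.0, 0.0]])
N_i = np.array([6.0, 2.0])
b = np.array([4.0, 1.0])

x_B = np.linalg.solve(B, b)      # [1., 2.]
d = np.linalg.solve(B, N_i)      # B^{-1} N_i = [2., 2.]

# Ratio test: largest step before some basic variable becomes negative.
ratios = x_B / d                 # [0.5, 1.0]
step = ratios.min()              # 0.5 -> x3 can increase up to 1/2
leaving = ratios.argmin()        # index 0 -> x1 is the leaving variable
print(step, leaving)
```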
💡
Note that the simplex algorithm terminates when all entries of the reduced cost vector are non-negative.
Simplex Method in Tableau Form
A Couple More Details
Generating an initial point
In general, the origin is not necessarily a feasible BFS. Therefore, we have to find an initial starting point.
Let's say our optimization problem is as follows:
$$\min\ c^T x$$
subject to
$$Ax = b,\quad x_i \ge 0\ \ \forall i$$
We can define a different linear program to obtain an initial point for the original optimization problem:
$$\min\ \mathbf{1}^T y$$
subject to
$$Ax + Iy = b,\quad x \ge 0,\ y \ge 0$$
This problem is also a linear program, so we can apply the simplex method to it. Moreover, we already know an extreme point of this problem:
$$x = 0,\quad y = b$$
It is called Phase-1.
But what if we cannot drive $y$ to $0$? In that case, the original optimization problem has no feasible solution.
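As a concrete sketch, the Phase-1 problem can be assembled and handed to an off-the-shelf LP solver; the example below uses scipy.optimize.linprog with a made-up $A$ and $b$ (and assumes $b \ge 0$, so that $x = 0,\ y = b$ is a valid starting extreme point):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical original constraints Ax = b, x >= 0 (with b >= 0 assumed,
# so that x = 0, y = b is a starting extreme point of the Phase-1 problem).
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([4.0, 6.0])
m, n = A.shape

# Phase-1:  min 1^T y   s.t.   A x + I y = b,   x >= 0,  y >= 0.
c_phase1 = np.concatenate([np.zeros(n), np.ones(m)])
A_eq = np.hstack([A, np.eye(m)])

res = linprog(c_phase1, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")

if res.fun > 1e-9:
    print("original problem is infeasible (cannot drive y to 0)")
else:
    x0 = res.x[:n]          # a feasible point of the original problem
    print("feasible starting point:", x0)
```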
Unbounded problem
When running the simplex method, there is the possibility that the problem is unbounded.
For example,
$$x_B = B^{-1} b - B^{-1} N_i x_{N_i}$$
When any entry of $B^{-1} N_i$ is positive, the corresponding basic variable decreases as the entering variable $x_{N_i}$ increases.
However, if there is no positive entry in $B^{-1} N_i$, we can increase $x_{N_i}$ as much as we want, which means the feasible region is unbounded in that direction. Therefore, the simplex method can detect when the original optimization problem is unbounded.
Example
Consider this optimization problem
$$\min\ -x_1 - 2x_2$$
subject to
$$-x_1 + x_2 + x_3 = 2,\quad -2x_1 + x_2 + x_4 = 1,\quad x_1, x_2, x_3, x_4 \ge 0$$
Take
$$x_B = (x_1, x_2)^T,\quad x_N = (x_3, x_4)^T$$
At this iteration, $x_B = (1, 3)^T$ and the reduced costs for the non-basic variables are $(5, -3)^T$.
Since there is a negative value $-3$, the current point is not optimal. Therefore, we take $x_4$ as the entering variable.
We compute
$$B^{-1} N_2 = \begin{bmatrix} 1 & -1 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \end{bmatrix}$$
That means we can increase x4 without limit, so the objective function can be decreased without limit. Therefore, there is no finite solution to this linear program.
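This can be checked numerically (a small numpy sketch using the basis of the example above):

```python
import numpy as np

# Basis for (x1, x2) in the example above, and the column of x4 (N_2).
B = np.array([[-1.0, 1.0],
              [-2.0, 1.0]])
b = np.array([2.0, 1.0])
N_2 = np.array([0.0, 1.0])

x_B = np.linalg.solve(B, b)     # [1., 3.]
d = np.linalg.solve(B, N_2)     # B^{-1} N_2 = [-1., -1.]

# No positive entry in d: no basic variable decreases as x4 increases,
# so x4 can grow without bound and the problem is unbounded.
print(x_B, d, np.all(d <= 0))
```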
Degeneracy
The version of the simplex method that we have described can fail, cycling endlessly without any improvement in the objective and without finding a solution. This can only happen on degenerate problems, problems where a basic variable is equal to zero in some basis.
Therefore, we take $x_2$ as the entering variable. Now we have to determine the leaving variable:
$$x_B = B^{-1} b - B^{-1} N_1 x_2 = \begin{bmatrix} 2 \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix} x_2$$
Therefore, $s_4$ is the leaving variable and its value is set to $0$.
In this case, there is no improvement in the objective function. Moreover, the algorithm can keep switching bases forever even though the objective function never improves.
💡
Fundamentally, this happens when one of the basic variables has the value 0.
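Numerically, the degenerate step shows up as a ratio test whose minimum is zero. A tiny sketch using the numbers above ($x_B = (2, 0)^T$, $B^{-1} N_1 = (0, 1)^T$):

```python
import numpy as np

# Numbers from the degenerate step above.
x_B = np.array([2.0, 0.0])   # the second basic variable (s4) is already zero
d = np.array([0.0, 1.0])     # B^{-1} N_1 for the entering variable x2

# Ratio test: the allowed increase of x2.
ratios = [x_B[i] / d[i] if d[i] > 0 else np.inf for i in range(len(d))]
step = min(ratios)
print(step)  # 0.0 -> the basis changes, but the objective value does not improve
```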
Optimality and the Lagrangian Function
We have discussed optimality conditions using the Lagrangian function before. That approach is more general, so it is worth applying it here even though our optimization problem is a linear program.
We can write our optimization problem as
$$\min\ c^T x$$
subject to
$$Ax = b \quad (\lambda)$$
$$Ix \ge 0 \quad (\pi)$$
Define
$$L(x, \lambda, \pi) = c^T x - \lambda^T (Ax - b) - \pi^T (Ix - 0)$$
Apply the KKT conditions.
Stationarity condition
$$\nabla_x L(x, \lambda, \pi) = 0 \;\Rightarrow\; c - A^T \lambda - \pi = 0$$
Primal feasibility
$$Ax^* = b,\quad Ix^* \ge 0$$
Dual feasibility
$$\pi \ge 0$$
Complementary slackness
$$\pi^T I x = 0$$
How can we connect the KKT conditions to the termination condition of the simplex method?
Recall that the simplex method terminates when all entries of the reduced cost vector are non-negative:
$$c_N^T - c_B^T B^{-1} N$$
Take
$$\pi_N = c_N^T - c_B^T B^{-1} N,\quad \pi_B = 0$$
Then the complementary slackness condition holds (since $x_N = 0$ and $\pi_B = 0$), and at termination the dual feasibility condition holds as well, because the reduced costs are non-negative. Taking $\lambda^T = c_B^T B^{-1}$ also satisfies the stationarity condition.
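These conditions can be checked numerically at an optimal basis. The sketch below uses a small made-up problem, sets $\lambda^T = c_B^T B^{-1}$ and $\pi_N$ to the reduced costs, and verifies stationarity, complementary slackness, and dual feasibility:

```python
import numpy as np

# Made-up LP: min -x1 - 2x2  s.t.  x1 + x2 + s1 = 4,  x1 + s2 = 2,  x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

basic = [1, 3]                          # optimal basis for this data: (x2, s2)
nonbasic = [0, 2]

B = A[:, basic]
N = A[:, nonbasic]

x = np.zeros(4)
x[basic] = np.linalg.solve(B, b)        # x_B = B^{-1} b, x_N = 0

lam = np.linalg.solve(B.T, c[basic])    # lambda^T = c_B^T B^{-1}
pi = np.zeros(4)
pi[nonbasic] = c[nonbasic] - N.T @ lam  # pi_N = reduced costs, pi_B = 0

print(np.allclose(c - A.T @ lam - pi, 0))  # stationarity
print(np.isclose(pi @ x, 0))               # complementary slackness
print(np.all(pi >= 0))                     # dual feasibility (holds at optimality)
```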