→ Of course, we have to make sure that the problem is bounded.
Extreme point = a basic feasible solution (BFS)
A BFS corresponds to a basis matrix
Consider a method that moves to an adjacent vertex
Basic variable: $\ge 0$ (0 is possible because of degeneracy)
Non-basic variable: $= 0$
Basis matrix update:
Entering variable: non-basic → basic
Leaving variable: basic → non-basic
Algorithm
$$\min_{x} \; c^T x \quad \text{subject to} \quad Ax = b, \quad x \ge 0$$
Normally, the number of columns is larger than the number of rows.
W.L.O.G., $A \in \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$ with $n \ge m$.
Assume that we already know an initial starting point (a BFS).
We only consider the columns in the basis.
W.L.O.G., partition $A = [B \;\; N]$ and $x = (x_B, x_N)$, where $B$ is the basis matrix and $N$ collects the non-basic columns.
Find the solution of $B x_B = b$, or equivalently $x_B = B^{-1} b$ (since $x_N = 0$).
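As a concrete illustration, here is a minimal numpy sketch of this step; the matrix $A$, vector $b$, and the chosen basis indices are hypothetical placeholders, not data from these notes.

```python
import numpy as np

# Hypothetical data for illustration: A is 2x4, b has length 2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])

basis = [2, 3]               # indices of the basic variables
nonbasis = [0, 1]            # indices of the non-basic variables

B = A[:, basis]              # basis matrix
N = A[:, nonbasis]           # non-basic columns

# Basic feasible solution: x_N = 0 and B x_B = b.
x_B = np.linalg.solve(B, b)  # avoids forming B^{-1} explicitly
print(x_B)                   # [4. 5.]
```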
What can we do when the given point is not an optimal point?
Revisit our objective function:
$$\min_{x_B, x_N} \; c_B^T x_B + c_N^T x_N \quad \text{subject to} \quad B x_B + N x_N = b, \quad x_B, x_N \ge 0$$
At our given point, $x_B = B^{-1} b - B^{-1} N x_N$.
Therefore our objective function is
$$c_B^T x_B + c_N^T x_N = c_B^T B^{-1} b + \left(c_N^T - c_B^T B^{-1} N\right) x_N.$$
We call $c_N^T - c_B^T B^{-1} N$ the reduced cost, and $c_B^T B^{-1} b$ is the current objective function value.
Note that if a non-basic variable increases its value from zero, its reduced cost indicates the per-unit change in the objective function.
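To make the formula concrete, here is a small numpy sketch that computes the reduced cost and the current objective value for the same hypothetical data as above; the cost vector $c$ is also a made-up placeholder.

```python
import numpy as np

# Continuing the hypothetical example above.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])

basis, nonbasis = [2, 3], [0, 1]
B, N = A[:, basis], A[:, nonbasis]
c_B, c_N = c[basis], c[nonbasis]

# Reduced cost: c_N^T - c_B^T B^{-1} N (solve with B^T instead of inverting B).
lam = np.linalg.solve(B.T, c_B)                 # lambda^T = c_B^T B^{-1}
reduced_cost = c_N - N.T @ lam
objective_value = c_B @ np.linalg.solve(B, b)   # c_B^T B^{-1} b
print(reduced_cost, objective_value)            # [-3. -2.]  0.0
```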
Basically, the steps of the simplex algorithm are given below:
The optimality test: compute the reduced cost $c_N^T - c_B^T B^{-1} N$.
If all of its coefficients are non-negative, the current basis is optimal.
If not, select a variable with a negative reduced cost and make it the entering variable. For simplicity, let's say the entering variable is the $i$'th component of $x_N$.
Select the leaving variable: increase the value of the entering variable $x_{N_i}$, which moves the basic variables along $x_B = B^{-1}b - (B^{-1}N_i)\,x_{N_i}$, as far as possible without violating any non-negativity constraint, where $N_i$ is the $i$'th column of $N$; the basic variable that first reaches zero becomes the leaving variable.
Update: update the basis matrix $B$ and the vector of basic variables $x_B$.
Repeat steps 1 to 3 until the current basis is optimal (a compact sketch of these steps in code follows below).
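The following is a self-contained Python sketch of these steps (a revised simplex iteration with a most-negative-reduced-cost entering rule). It assumes a known feasible starting basis and uses a small hypothetical LP for the demo; it is illustrative, not a production implementation.

```python
import numpy as np

def simplex(c, A, b, basis, max_iter=100):
    """Minimal revised-simplex sketch for min c^T x s.t. Ax = b, x >= 0,
    starting from a known feasible basis (no Phase-1, no anti-cycling rule)."""
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        nonbasis = [j for j in range(n) if j not in basis]
        B, N = A[:, basis], A[:, nonbasis]
        x_B = np.linalg.solve(B, b)                # current BFS (x_N = 0)
        lam = np.linalg.solve(B.T, c[basis])       # lambda^T = c_B^T B^{-1}
        reduced = c[nonbasis] - N.T @ lam          # reduced costs
        if np.all(reduced >= -1e-9):               # 1. optimality test
            x = np.zeros(n); x[basis] = x_B
            return x, c @ x
        i = nonbasis[int(np.argmin(reduced))]      # 2. entering variable
        d = np.linalg.solve(B, A[:, i])            # B^{-1} N_i
        if np.all(d <= 1e-9):
            raise ValueError("problem is unbounded")
        ratios = np.full(m, np.inf)                # 3. ratio test
        ratios[d > 1e-9] = x_B[d > 1e-9] / d[d > 1e-9]
        leave = int(np.argmin(ratios))             # leaving variable's position
        basis[leave] = i                           # update the basis
    raise RuntimeError("iteration limit reached")

# Hypothetical example: min -3x1 - 2x2 s.t. x1 + x2 <= 4, 2x1 + x2 <= 5 (slacks x3, x4).
A = np.array([[1.0, 1.0, 1.0, 0.0], [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0]); c = np.array([-3.0, -2.0, 0.0, 0.0])
print(simplex(c, A, b, basis=[2, 3]))   # optimum at x = (1, 3, 0, 0), value -9
```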
Example 1
subject to
which is equivalent to
subject to
where
Take
Therefore,
and its reduced cost is
Therefore, we take $x_2$ as the entering variable.
Note that the column related to $x_2$ is $(1, 1, 0)^T$.
Example 2
In addition,
Suppose the reduced cost of $x_3$ is negative, so we can make $x_3$ an entering variable. But we have to decide how much we can increase $x_3$ without violating the constraints. Ideally we would like to increase $x_3$ as much as possible, but this is not always possible because other constraints also depend on $x_3$.
For example, since $x_1 = 1 - 2x_3$, if we take $x_3$ greater than $1/2$, then $x_1$ becomes negative, which violates the non-negativity constraint.
Therefore, we can increase $x_3$ up to $1/2$. In this scenario, $x_1$ has to become a non-basic variable. We call it the leaving variable.
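Here is a small sketch of this ratio test, using the relation $x_1 = 1 - 2x_3$ from above plus one hypothetical extra basic variable that does not restrict $x_3$:

```python
import numpy as np

# Ratio test sketch: x_B = B^{-1} b - (B^{-1} N_i) * x_Ni must stay >= 0.
x_B = np.array([1.0, 4.0])   # current values of the basic variables (x_1 = 1, ...)
d = np.array([2.0, -1.0])    # B^{-1} N_i: x_1 = 1 - 2 x_3, second entry hypothetical

ratios = np.full(len(d), np.inf)
ratios[d > 0] = x_B[d > 0] / d[d > 0]
step = ratios.min()          # 0.5: how far x_3 can be increased
leaving = ratios.argmin()    # index 0: x_1 leaves the basis
print(step, leaving)
```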
Simplex Method in Tableau Form
A Couple More Details
Generating an initial point
In general, the origin is not necessarily a feasible BFS. Therefore, we have to find an initial starting point.
Let's say our optimization problem is as follows:
$$\min_{x} \; c^T x \quad \text{subject to} \quad Ax = b, \quad x \ge 0$$
We can define a different linear program to get an initial point for the original optimization problem:
$$\min_{x, y} \; \sum_{i=1}^{m} y_i \quad \text{subject to} \quad Ax + y = b, \quad x \ge 0, \; y \ge 0$$
(assuming $b \ge 0$; otherwise multiply the corresponding rows by $-1$).
This problem is also a linear program, so we can apply the simplex method. Moreover, we already know an extreme point of this problem: $x = 0$, $y = b$.
It is called Phase-1.
But what if we cannot reach $y = 0$? In this case, there is no feasible solution to the original optimization problem.
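Below is a minimal sketch of how the Phase-1 problem could be assembled with numpy, assuming $b \ge 0$; the data are hypothetical, and the resulting problem would be fed to a simplex routine such as the one sketched earlier.

```python
import numpy as np

def phase1_problem(A, b):
    """Build the Phase-1 LP: min sum(y) s.t. Ax + y = b, x >= 0, y >= 0.
    Assumes b >= 0 (multiply rows by -1 beforehand if needed)."""
    m, n = A.shape
    A1 = np.hstack([A, np.eye(m)])                 # columns of y form an identity block
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    basis = list(range(n, n + m))                  # known BFS: x = 0, y = b
    return c1, A1, b, basis

# Hypothetical data.
A = np.array([[1.0, 1.0, 1.0, 0.0], [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c1, A1, b1, basis = phase1_problem(A, b)
# If the optimal Phase-1 value is 0, the final basis gives a BFS of the original LP;
# if it is positive, the original problem is infeasible.
```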
Unbounded problem
When running the simplex method, there is the possibility that the problem is unbounded.
For example, when an entry of $B^{-1}N_i$ is positive, the corresponding basic variable decreases as the entering variable $x_{N_i}$ increases.
However, if there is no positive entry in $B^{-1}N_i$, we can increase $x_{N_i}$ as much as we want. That means the feasible region is unbounded along this direction, and since the reduced cost of $x_{N_i}$ is negative, the objective decreases without bound. Therefore, the simplex method can detect when the original optimization problem is unbounded.
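In code, this check is a one-liner; the vector below is a hypothetical $B^{-1}N_i$ with no positive entry:

```python
import numpy as np

# Unboundedness test for the entering column: if B^{-1} N_i has no positive
# entry, no basic variable ever reaches zero, so x_Ni can grow without limit.
d = np.array([-1.0, 0.0, -2.0])   # hypothetical B^{-1} N_i
if not np.any(d > 1e-9):
    print("unbounded: the objective decreases without limit along this direction")
```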
Example
Consider this optimization problem
subject to
Take
At this iteration, $x_B = (1, 3)^T$ and the reduced costs for the non-basic variables are $(5, -3)^T$.
Since there is a negative value, $-3$, the current point is not optimal. Therefore, we take $x_4$ as the entering variable.
Since $B^{-1}N_4$ has no positive entry, we can increase $x_4$ without limit, so the objective function can be decreased without limit. Therefore, there is no finite solution to this linear program.
Degeneracy
The version of the simplex method that we have described can fail, cycling endlessly without any improvement in the objective and without finding a solution. This can only happen on degenerate problems, problems where a basic variable is equal to zero in some basis.
Example
Think about this example,
subject to
It is equivalent to
subject to
where
Suppose we take $x_1, s_4$ as the basis. Then
Calculate the reduced cost
Therefore, we take $x_2$ as the entering variable. Now we have to select the leaving variable.
Therefore, $s_4$ is the leaving variable and its value is set to 0.
In this case, there is no improvement in the objective function. Moreover, the algorithm can cycle forever even though there is no improvement in the objective function.
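A tiny sketch of such a degenerate pivot: if a basic variable is already zero and its entry of $B^{-1}N_i$ is positive, the ratio test returns a step length of zero, so the basis changes but the objective does not improve (the numbers below are hypothetical):

```python
import numpy as np

x_B = np.array([2.0, 0.0])   # hypothetical: the second basic variable is already 0
d = np.array([1.0, 3.0])     # hypothetical B^{-1} N_i
ratios = np.full(2, np.inf)
ratios[d > 0] = x_B[d > 0] / d[d > 0]
print(ratios.min())          # 0.0 -> a degenerate pivot, no objective improvement
```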
Optimality and the Lagrangian Function
We have discussed optimality conditions using the Lagrangian function before. Actually, this approach is more general, so it is worth revisiting even though our optimization problem is a linear program.
We can write our optimization problem as
$$\min_{x} \; c^T x \quad \text{subject to} \quad Ax = b, \quad x \ge 0$$
Define the Lagrangian function
$$\mathcal{L}(x, \lambda, s) = c^T x - \lambda^T (Ax - b) - s^T x,$$
where $\lambda$ and $s$ are the Lagrange multipliers for the equality constraints and the non-negativity constraints, respectively.
Use the KKT conditions:
Stationarity: $c - A^T \lambda - s = 0$
Primal feasibility: $Ax = b$, $x \ge 0$
Dual feasibility: $s \ge 0$
Complementary slackness: $x_i s_i = 0$ for all $i$
How can we connect the KKT conditions with the termination condition of the simplex method?
Recall that the simplex method terminates when all entries of the reduced cost vector are non-negative.
Take $\lambda = B^{-T} c_B$ and $s = c - A^T \lambda$, so that $s_B = 0$ and $s_N^T = c_N^T - c_B^T B^{-1} N$ is exactly the reduced cost.
Then it satisfies the complementary slackness condition (because $x_N = 0$ and $s_B = 0$) and the dual feasibility condition (because the reduced cost is non-negative at termination).
Since stationarity holds by the construction of $s$ and primal feasibility holds at any BFS,
therefore all of the KKT conditions are satisfied.
If we take $x$ to be the current BFS together with the $\lambda$ and $s$ defined above, the KKT conditions and the termination condition for the simplex method hold simultaneously.
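As a numerical sanity check, here is a numpy sketch that verifies the KKT conditions at the optimal basis of the hypothetical LP used in the earlier simplex sketch:

```python
import numpy as np

# Hypothetical LP from the earlier sketch: min -3x1 - 2x2 with slacks x3, x4.
A = np.array([[1.0, 1.0, 1.0, 0.0], [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0]); c = np.array([-3.0, -2.0, 0.0, 0.0])
basis = [1, 0]                                  # optimal basis (x2 and x1 are basic)
x = np.zeros(4); x[basis] = np.linalg.solve(A[:, basis], b)

lam = np.linalg.solve(A[:, basis].T, c[basis])  # lambda = B^{-T} c_B
s = c - A.T @ lam                               # s_B = 0, s_N = reduced cost

print(np.allclose(c - A.T @ lam - s, 0))        # stationarity
print(np.allclose(A @ x, b), np.all(x >= 0))    # primal feasibility
print(np.all(s >= -1e-9))                       # dual feasibility (reduced cost >= 0)
print(np.allclose(x * s, 0))                    # complementary slackness
```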