Scipy.optimize.linprog Unable To Find A Feasible Starting Point Despite A Feasible Answer Clearly Exists
Solution 1:
This answer doesn't explain why it works; I hope someone more familiar with the linprog code, or with linear programming in general, can give a more thorough answer.

I get a solution if I use the option bland=True (see show_options for documentation; scroll to the bottom for the linprog options):
In [130]: linprog(c, A_ub, b_ub, A_eq, b_eq, options=dict(bland=True))
Out[130]:
  status: 0
   slack: array([ 3610.,  6490., 11840.,     0.,     0., 14000., 10100.,
                      0., 10000.,  5000., 15450.,     0., 13000.,     0.,
                  10000.,  3000., 11000.,     0., 12220.,     0., 10000.])
 success: True
     fun: -2683.6935269049131
       x: array([ 1.22573363e+00,  2.00000000e+00,  1.22404780e+00,  3.71739130e+00,  8.25688073e-02,  2.00000000e+03,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  5.00000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  2.00000000e+03,
                  6.39000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  1.84000000e+03,
                  5.00000000e+03,  0.00000000e+00,  1.00000000e+04,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  1.00000000e+02,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00, -1.11022302e-12,  0.00000000e+00,  5.45000000e+03,  0.00000000e+00,  3.00000000e+03,
                  0.00000000e+00,  3.00000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  1.00000000e+03])
 message: 'Optimization terminated successfully.'
     nit: 50
One component is slightly negative (-1.11e-12). Presumably this is within the default tolerance. That can be cleaned up by lowering the tolerance (but note the change in x[19]):
In [131]: linprog(c, A_ub, b_ub, A_eq, b_eq, options=dict(bland=True, tol=1e-15))
Out[131]:
  status: 0
   slack: array([ 3610.,  6490., 11840.,     0.,     0., 14000., 10100.,
                      0., 10000.,  5000., 15450.,     0., 13000.,     0.,
                  10000.,  3000., 11000.,     0., 12220.,     0., 10000.])
 success: True
     fun: -2683.693526904935
       x: array([ 1.22573363e+00,  2.00000000e+00,  0.00000000e+00,  3.71739130e+00,  8.25688073e-02,  2.00000000e+03,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  5.00000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  1.63900000e+04,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  2.00000000e+03,
                  6.39000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  1.84000000e+03,
                  5.00000000e+03,  0.00000000e+00,  1.00000000e+04,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  1.00000000e+02,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  5.45000000e+03,  0.00000000e+00,  3.00000000e+03,
                  0.00000000e+00,  3.00000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  1.00000000e+03])
 message: 'Optimization terminated successfully.'
     nit: 51
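For reference, here is a minimal, self-contained sketch of that calling pattern on a toy problem (the data below is made up and is not the c, A_ub, b_ub, A_eq, b_eq from the question). The bland and tol entries are options of the simplex implementation, which was the default solver at the time; newer SciPy releases have removed it, so method='simplex' on an older SciPy is assumed here:

import numpy as np
from scipy.optimize import linprog

# Toy problem: maximize x0 + 2*x1  subject to  x0 + x1 <= 4,  x0 - x1 = 1,  x >= 0.
# linprog minimizes, so the objective is negated.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([4.0])
A_eq = np.array([[1.0, -1.0]])
b_eq = np.array([1.0])

# Bland's anti-cycling pivot rule plus an explicit tolerance, as in the calls above.
res = linprog(c, A_ub, b_ub, A_eq, b_eq, method='simplex',
              options=dict(bland=True, tol=1e-15))
print(res.status, res.fun, res.x)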
Solution 2:
It seems like a tolerance issue.

I was able to "fix" it by taking a local copy of the original linprog code and changing the tolerance (the tol parameter) from 10e-12 to 10e-8 in the "private" method _linprog_simplex. This parameter is passed on to the method _pivot_col, which reads:
ma = np.ma.masked_where(T[-1, :-1] >= -tol, T[-1, :-1], copy=False)
if ma.count() == 0:
    return False, np.nan
if bland:
    return True, np.where(ma.mask == False)[0][0]
return True, np.ma.where(ma == ma.min())[0][0]
This is why Bland's rule passes the test while the default pivot rule fails. I then tried to find out whether there is any default tolerance in the implementation of numpy.ma.masked_where. It is not obvious from that code what tolerance is used, but other numpy functions, such as numpy.ma.masked_values, use an absolute tolerance of 1e-8 by default.
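To make the role of tol in that masking step concrete, here is a small standalone sketch with made-up numbers (not taken from the actual tableau): a reduced cost of -1e-10, which is just numerical noise, still counts as an improving column under tol=1e-12, but is masked away under tol=1e-8, so that phase terminates instead of pivoting on noise.

import numpy as np

# Toy objective row (reduced costs); the -1e-10 entry is numerical noise.
row = np.array([0.0, -1e-10, 2.0, 5.0])

for tol in (1e-12, 1e-8):
    # Same masking step as in _pivot_col: entries >= -tol are masked,
    # i.e. treated as "not an improving column".
    ma = np.ma.masked_where(row >= -tol, row, copy=False)
    if ma.count() == 0:
        print("tol=%g: no pivot column, this phase terminates" % tol)
    else:
        print("tol=%g: pivot column %d" % (tol, np.ma.where(ma == ma.min())[0][0]))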
I hope this helps.
Here is the result I am getting by changing the tolerance in _linprog_simplex:
True
True
True
  status: 0
   slack: array([ 3610.,  6490., 11840.,     0.,     0., 14000., 10100.,
                      0., 10000.,  5000., 15450.,     0., 13000.,     0.,
                  10000.,  3000., 11000.,     0., 12220.,     0., 10000.])
 success: True
     fun: -2683.6935269049141
       x: array([ 1.22573363e+00,  2.00000000e+00,  1.22404780e+00,  3.71739130e+00,  8.25688073e-02,  2.00000000e+03,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  5.00000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  2.00000000e+03,
                  6.39000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  1.84000000e+03,
                  5.00000000e+03,  0.00000000e+00,  1.00000000e+04,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  1.00000000e+02,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  5.45000000e+03,  0.00000000e+00,  3.00000000e+03,
                  0.00000000e+00,  3.00000000e+03,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,
                  0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  1.00000000e+03])
 message: 'Optimization terminated successfully.'
     nit: 26
PS: I also had to change the line

from .optimize import OptimizeResult, _check_unknown_options

to

from scipy.optimize import OptimizeResult

and remove the call to _check_unknown_options in line 533 of the original code.