Find an optimal solution to a linear program in O(n log n)

Raiden Barr

Answered question 2022-10-23

Given $c \in \mathbb{R}_+^n$, $a \in \mathbb{R}_+^n$ and $\gamma \in \mathbb{R}_+$, design an algorithm which, in $O(n \log n)$ operations, computes the optimal solution $x^*$ to the following linear program:
$$\max\ c^T x \quad \text{s.t.}\quad a^T x \le \gamma, \qquad 0 \le x_i \le 1,\ i \in [n]$$
You may assume that a set of n real numbers can be sorted in time O(nlogn) and that each arithmetic operation takes constant time.
I tried multiple things without much success.
1. Start with a feasible solution, for example $x = 0$, and add $(1/2)^k$ to each coordinate at iteration $k$ until the constraints are no longer satisfied. I'm not sure this approach runs in $O(n \log n)$.
2. Compute the dual of the LP; by strong duality, the optimal objective value of the dual equals the optimal objective value of the primal. However, I'm not sure how solving the dual would be any easier.
3. Lastly, I tried to see whether the simplex method combined with an adequate sorting would work, but the simplex method offers no $O(n \log n)$ guarantee.

Answer & Explanation

Immanuel Brennan

Beginner · 2022-10-24 · Added 9 answers

Step 1
Think of it this way. Each unit of $x_i$ has a benefit $c_i$ and a cost $a_i$, and you have a total budget of $\gamma$.
Step 2
You should buy as much as possible of the $x_i$'s with the highest values of $c_i / a_i$.
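A minimal Python sketch of this greedy (fractional-knapsack) idea; the function name `lp_optimal` and the example numbers below are my own, not from the answer:

```python
def lp_optimal(c, a, gamma):
    """Greedy sketch: buy coordinates in decreasing order of the
    benefit/cost ratio c_i / a_i until the budget gamma runs out.
    Sorting dominates, so the total cost is O(n log n)."""
    order = sorted(range(len(c)), key=lambda i: c[i] / a[i], reverse=True)
    x = [0.0] * len(c)
    budget = gamma
    for i in order:
        if budget <= 0:
            break
        x[i] = min(1.0, budget / a[i])  # as much of x_i as the budget allows
        budget -= x[i] * a[i]
    return x
```

For instance, with $c = (3, 1, 4)$, $a = (2, 1, 5)$ and $\gamma = 4$, the ratios are $1.5, 1.0, 0.8$, so this sketch buys the first two coordinates fully and spends the remaining budget on a fraction of the third.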
ndevunidt

Beginner · 2022-10-25 · Added 5 answers

Step 1
If you assume not merely that $c^T x^* < c^T y^*$, but that $y^*$ is also the optimal solution (which must exist, because the objective function is continuous and the set of feasible points is compact), you can get your contradiction by showing that the value of the objective can be improved by modifying $y^*$. Taking your $y^*_{j_0} > 0$, for instance, with $j_0 \in \{i_{k+2}, \dots, i_n\}$, we must have $x^*_{j_0} = 0$, and therefore $y^*_{i_j} < x^*_{i_j} \le 1$ for some $j = 1, 2, \dots, k+1$ (because $\sum_{i=1}^n a_i y^*_i \le \min\left(\gamma, \sum_{i=1}^n a_i\right) = \sum_{i=1}^n a_i x^*_i$). So now, if we decrease $y^*_{j_0}$ by $\delta / a_{j_0}$ and increase $y^*_{i_j}$ by $\delta / a_{i_j}$, where $\delta = \min\left(a_{j_0} y^*_{j_0},\ a_{i_j}(1 - y^*_{i_j})\right)$, then all the constraints will still be satisfied, and the objective function will increase by $\left(\frac{c_{i_j}}{a_{i_j}} - \frac{c_{j_0}}{a_{j_0}}\right)\delta$, which is positive from the way the ratios $\frac{c_{i_j}}{a_{i_j}}$ are ordered, and this contradicts the supposed optimality of $y^*$.
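As a numeric sanity check of this exchange step, on a toy instance of my own (not from the answer):

```python
# Toy instance: ratios c_i/a_i are 1.5, 1.0, 0.8, so greedy order is 0, 1, 2.
c, a, gamma = [3.0, 1.0, 4.0], [2.0, 1.0, 5.0], 4.0
# A deliberately suboptimal feasible y with mass on the worst-ratio index 2:
y = [1.0, 0.0, 0.4]            # a^T y = 2 + 0 + 2 = 4 <= gamma
j0, ij = 2, 1                  # y_{j0} > 0 while the better index i_j has y < 1
delta = min(a[j0] * y[j0], a[ij] * (1 - y[ij]))
y2 = y[:]
y2[j0] -= delta / a[j0]        # decrease the worse-ratio coordinate
y2[ij] += delta / a[ij]        # increase the better-ratio coordinate
obj = lambda v: sum(ci * vi for ci, vi in zip(c, v))
# a^T y2 equals a^T y, and the objective strictly improves by
# (c[ij]/a[ij] - c[j0]/a[j0]) * delta, exactly as in the argument above.
```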
Step 2
There's still a good deal of i-dotting and t-crossing necessary to complete the proof, and it's probably easier to prove it by showing that the dual:
$$\text{minimise}\quad \lambda_0 \gamma + \sum_{i=1}^n \lambda_i \quad \text{subject to}\quad \lambda_0 a^T + \lambda^T \ge c^T,\qquad \lambda_i \ge 0 \ \text{for } i = 0, 1, \dots, n$$
has a feasible solution:
$$\lambda_0 = \frac{c_{i_{k+1}}}{a_{i_{k+1}}},\qquad \lambda_{i_j} = a_{i_j}\left(\frac{c_{i_j}}{a_{i_j}} - \frac{c_{i_{k+1}}}{a_{i_{k+1}}}\right) \ \text{for } j = 1, 2, \dots, k,\qquad \lambda_{i_j} = 0 \ \text{for } j = k+1, k+2, \dots, n$$
with $\lambda_0 \gamma + \sum_{i=1}^n \lambda_i = c^T x^*$, which implies that both $\lambda$ and $x^*$ are optimal for their respective programs.
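A quick numeric check of this dual certificate on a made-up instance (the numbers are mine, not from the answer):

```python
# Toy instance: greedy primal solution and the dual certificate above,
# checked to have equal objective values.
c, a, gamma = [3.0, 1.0, 4.0], [2.0, 1.0, 5.0], 4.0
order = sorted(range(3), key=lambda i: c[i] / a[i], reverse=True)  # [0, 1, 2]
# Greedy primal: here k = 2 coordinates are fully bought, order[k] fractional.
spent = a[order[0]] + a[order[1]]
x = [0.0] * 3
x[order[0]] = x[order[1]] = 1.0
x[order[2]] = (gamma - spent) / a[order[2]]
# Dual certificate: lambda_0 is the ratio of the fractional index i_{k+1}.
k1 = order[2]
lam0 = c[k1] / a[k1]
lam = [0.0] * 3
for i in order[:2]:                       # the fully bought indices i_1..i_k
    lam[i] = a[i] * (c[i] / a[i] - lam0)  # = c_i - lambda_0 * a_i >= 0
primal = sum(ci * xi for ci, xi in zip(c, x))
dual = lam0 * gamma + sum(lam)
# primal and dual agree (both 4.8 here), certifying optimality of x.
```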
