From what I learned, the Lagrangian relaxation of an integer program is used to find a bound. Is the solution to the relaxed problem considered to be a good approximate solution of the original problem?

Keenan Conway 2022-09-19 Answered
I have a knapsack problem
$$\max_{x \in \{0,1\}^n} \sum_{i=1}^n v_i x_i \quad \text{s.t.} \quad \sum_{i=1}^n w_i x_i \le c.$$
The Lagrangian relaxation is as follows
$$\min_{\lambda \ge 0} \; \max_{x \in \{0,1\}^n} \sum_{i=1}^n v_i x_i - \lambda \Big( \sum_{i=1}^n w_i x_i - c \Big).$$
Suppose I solved the relaxed problem and got an optimal $x_{\text{lag}}$ s.t. $f(x^*) < f(x_{\text{lag}})$, where $x^*$ is the optimal solution of the original problem and $f$ is the objective function. Even though $x_{\text{lag}}$ gives a strict bound, is it considered to be a good approximate solution?
Is it true that the relaxation can be solved in polynomial time, since the dual problem is convex in $\lambda$ and the maximization part with fixed $\lambda$ is just activating the $x_i$ associated with the largest terms $(v_i - \lambda w_i)$?

Answers (1)

Adelaide Barr
Answered 2022-09-20 Author has 9 answers
I would not describe the solution to the Lagrangian relaxation as an "approximate solution" to the original problem, since it need not be feasible in the original problem.

In general, the objective value of the LR may or may not be a tight bound for the objective value of the original problem. There is a well-known result that if all extreme points of the convex hull of the set of solutions to the constraints you did not relax are integer valued, the Lagrangian bound (from the optimal solution to the LR problem) will be identical to the value of the LP relaxation of the original problem. In the case of the knapsack problem, where you relaxed the only constraint, that convex hull is just $[0,1]^n$, which certainly satisfies the condition. So I would expect the LR bound for the knapsack to match the LP bound.
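For intuition, the LP bound itself is cheap to compute: the fractional relaxation of a 0-1 knapsack is solved exactly by filling items greedily in decreasing value-to-weight order. A minimal sketch, assuming strictly positive weights (the function name and instance below are illustrative, not from the question):

```python
def lp_relaxation_bound(values, weights, cap):
    """LP (fractional) relaxation bound for a 0-1 knapsack.

    Greedy by value/weight ratio is optimal for the fractional problem:
    take whole items while they fit, then a fraction of the next one.
    Assumes all weights are strictly positive.
    """
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total, remaining = 0.0, cap
    for v, w in items:
        take = min(1.0, remaining / w)  # fraction of item to take
        total += take * v
        remaining -= take * w
        if remaining <= 0:
            break
    return total
```

On values (6, 5, 4), weights (3, 2, 1) and capacity 4, this returns 11.0, while the best integer solution is 10, so the LP bound is not tight for that instance.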

When the original problem contains only binary variables (as in your case) and all primal constraints are relaxed in the Lagrangian relaxation (which need not always be true, but is in your case), evaluating the Lagrangian objective for a single choice of $\lambda$ is $O(mn)$, where $m$ is the number of constraints (1 for the knapsack problem) and $n$ is the number of variables. Note: you do not just set a single $x_i = 1$; you set $x_i = 1$ for all $i$ such that the coefficient of $x_i$ in the LR is positive.
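That evaluation step can be sketched directly (names and the sample instance are illustrative): for a fixed λ the inner maximization separates over the variables, so we activate exactly those with positive coefficient v_i − λw_i:

```python
def lagrangian_value(values, weights, lam, cap):
    """Inner maximization of the Lagrangian relaxation at a fixed multiplier.

    The objective sum(v_i x_i) - lam*(sum(w_i x_i) - cap) separates over i,
    so each x_i is set to 1 iff its coefficient v_i - lam*w_i is positive.
    O(n) for the single knapsack constraint (O(mn) with m relaxed constraints).
    """
    x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
    obj = sum(v * xi for v, xi in zip(values, x)) \
        - lam * (sum(w * xi for w, xi in zip(weights, x)) - cap)
    return obj, x
```

For example, with values (6, 5, 4), weights (3, 2, 1), capacity 4 and λ = 1, all three coefficients are positive, so x = (1, 1, 1) and the bound is 15 − 1·(6 − 4) = 13.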

I'm not aware of any result that the number of such evaluations (the number of different values of $\lambda$ to be considered) is in general polynomial in problem size, although I suppose it could be.
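For the single-constraint case one can in fact say more: the dual function is convex and piecewise linear in λ, and its slope can only change where some coefficient v_i − λw_i crosses zero, i.e. at λ = v_i/w_i. Evaluating the dual at those n ratios plus λ = 0 therefore suffices, giving a polynomial-time procedure. A sketch under that assumption, with positive weights and an illustrative instance:

```python
def lagrangian_dual_bound(values, weights, cap):
    """Minimize the Lagrangian dual of a 0-1 knapsack over lam >= 0.

    The dual function is convex piecewise-linear; its breakpoints are the
    ratios lam = v_i / w_i, so evaluating it at those candidates and at
    lam = 0 finds the exact minimum. Assumes strictly positive weights.
    """
    candidates = {0.0} | {v / w for v, w in zip(values, weights)}
    best = float("inf")
    for lam in candidates:
        # Inner maximization: activate x_i iff v_i - lam*w_i > 0.
        x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
        obj = sum(v * xi for v, xi in zip(values, x)) \
            - lam * (sum(w * xi for w, xi in zip(weights, x)) - cap)
        best = min(best, obj)
    return best
```

On values (6, 5, 4), weights (3, 2, 1) and capacity 4, the minimum is attained at λ = 2 and equals 11, matching the LP-relaxation bound as expected (the integer optimum is 10).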

