
Rebecca Villa 2022-07-01 Answered
(Problem 1)
$$\begin{aligned}
\text{maximize}\quad & |2x_1 - 3x_2| \\
\text{subject to}\quad & 4x_1 + x_2 \le 4 \\
& 2x_1 - x_2 \le 0.5 \\
& x_1,\ x_2 \ge 0
\end{aligned}$$
(Problem 2)
$$\begin{aligned}
\text{minimize}\quad & |2x_1 - 3x_2| \\
\text{subject to}\quad & 4x_1 + x_2 \le 4 \\
& 2x_1 - x_2 \le 0.5 \\
& x_1,\ x_2 \ge 0
\end{aligned}$$
According to the question, one of these can be rewritten as a linear program (LP) and the other cannot; the reader is asked to determine which one can be rewritten as an LP, and why the other one cannot.

I've never converted an optimization problem whose objective function contains an absolute value into an LP, so I don't see how to determine which one can be rewritten. I do know the simplex method and can solve an LP with it.
Answers (1)

Aryanna Caldwell
Answered 2022-07-02
For Problem 2: replace the objective function by a new variable $t \ge 0$, and add the constraints
$$\begin{cases} 2x_1 - 3x_2 \le t \\ -2x_1 + 3x_2 \le t \end{cases}$$
These constraints are equivalent to $-t \le 2x_1 - 3x_2 \le t$, and so the absolute value is taken into account.

Since you are minimizing $t$, you are indeed minimizing $|2x_1 - 3x_2|$.

It is not difficult to see that this does not work if you are maximizing: the two constraints only enforce $t \ge |2x_1 - 3x_2|$, so maximizing $t$ would let it grow without bound instead of tracking the absolute value. This is why Problem 1 cannot be rewritten as an LP this way.
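To make the reformulation concrete, here is a minimal sketch that feeds the rewritten Problem 2 to SciPy's linprog (the constraint data follows the problem as stated above; the solver is just one convenient choice):

```python
# Rewritten Problem 2 as an LP in the variables z = (x1, x2, t):
# minimize t subject to the original constraints plus the two
# epigraph constraints encoding |2*x1 - 3*x2| <= t.
from scipy.optimize import linprog

c = [0, 0, 1]                 # objective: minimize t

A_ub = [
    [ 4,  1,  0],             #  4*x1 +   x2      <= 4
    [ 2, -1,  0],             #  2*x1 -   x2      <= 0.5
    [ 2, -3, -1],             #  2*x1 - 3*x2 - t  <= 0
    [-2,  3, -1],             # -2*x1 + 3*x2 - t  <= 0
]
b_ub = [4, 0.5, 0, 0]

# x1, x2, t >= 0 (these are also linprog's default bounds)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
x1, x2, t = res.x
print(f"x1 = {x1:.4f}, x2 = {x2:.4f}, |2*x1 - 3*x2| = {t:.4f}")
```

For these particular constraints the optimum is $0$: the line $2x_1 = 3x_2$ meets the feasible region (at the origin, for instance), so the absolute value can be driven all the way down.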



You might be interested in

asked 2022-05-02
Maximize
$$x_2 - x_1 + y_1 - y_2$$
given that $x_1^2 + y_1^2 = 1$ and $x_2^2 + y_2^2 = 1$.
I was thinking about using Lagrange multipliers, but I only know how that works for a 3-variable function, not 4. Could someone please suggest a way to solve this? Maybe with Lagrange multipliers or some more elementary method?
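If the objective really is $x_2 - x_1 + y_1 - y_2$ (my reconstruction of the garbled formula above), a quick numeric check supports a maximum of $2\sqrt{2}$: parametrize each circle by an angle and scan a grid.

```python
# Parametrize (x1, y1) = (cos a, sin a) and (x2, y2) = (cos b, sin b),
# then evaluate x2 - x1 + y1 - y2 on a dense grid of angle pairs.
import numpy as np

a = np.linspace(0, 2 * np.pi, 1001)
b = np.linspace(0, 2 * np.pi, 1001)
A, B = np.meshgrid(a, b)
obj = np.cos(B) - np.cos(A) + np.sin(A) - np.sin(B)
print(obj.max(), 2 * np.sqrt(2))   # both approximately 2.8284
```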
asked 2022-07-14
I would like to maximize the function:
$$\frac{1}{2}\sum_{i=1}^{N}\left|x_i - \frac{1}{N}\right|$$
under the constraints $\sum_{i=1}^{N} x_i = 1$ and $x_i \ge 0$ for all $i \in (1, \dots, N)$.
I have done some tests for small values of N and I have the feeling that the solution is $1 - \frac{1}{N}$, but I can't figure out how to solve it analytically.
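Here is a hedged brute-force check of that conjecture, assuming the constraint lost in the question above is $x_i \ge 0$ (which is what makes the simplex vertices feasible):

```python
# Compare random points of the probability simplex against its vertices;
# the conjectured maximum 1 - 1/N should be attained at a vertex such as
# x = (1, 0, ..., 0).
import numpy as np

def objective(x):
    n = len(x)
    return 0.5 * np.sum(np.abs(x - 1.0 / n))

rng = np.random.default_rng(0)
for N in range(2, 7):
    samples = rng.dirichlet(np.ones(N), size=20000)  # random feasible points
    vertices = np.eye(N)                             # simplex corners
    best = max(objective(x) for x in np.vstack([samples, vertices]))
    print(N, round(best, 6), 1 - 1.0 / N)            # best matches 1 - 1/N
```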
asked 2022-07-28
Consider the primes arranged in the usual order 2, 3, 5, 7... It is conjectured that, beginning with 3, every other prime can be composed of the addition and subtraction of all smaller primes (and 1), each taken once. For example:
3 = 1 + 2
7 = 1 - 2 + 3 + 5
13 = 1 + 2 - 3 - 5 + 7 + 11 = -1 + 2 + 3 + 5 - 7 + 11
A. Show that this also holds for 19.
B. Show that this doesn't hold for the first "in-between" prime, 5.
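A small search sketch for both parts: enumerate every sign pattern on 1 and the primes below the target, each used exactly once. For 19 it prints several representations; for 5 it prints an empty list, since any signed sum of $1, 2, 3$ has the same parity as $1 + 2 + 3 = 6$ (even), while 5 is odd.

```python
# Enumerate all sign assignments over the given parts and keep those
# whose signed sum equals the target.
from itertools import product

def signed_sums(parts, target):
    hits = []
    for signs in product([1, -1], repeat=len(parts)):
        if sum(s * p for s, p in zip(signs, parts)) == target:
            hits.append(" ".join(f"{s * p:+d}" for s, p in zip(signs, parts)))
    return hits

print(signed_sums([1, 2, 3, 5, 7, 11, 13, 17], 19))  # part A: non-empty
print(signed_sums([1, 2, 3], 5))                     # part B: [] (parity obstruction)
```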
asked 2022-07-05
I was just wondering what steps one would take to maximize a utility function of the form $U(X, Y) = \min\{X, Y\} + X$ subject to the income constraint $I = p_x X + p_y Y$, where $p_x$ is the price of X and $p_y$ is the price of Y. I understand that normally this would be solved using the Lagrangian method, but obviously you cannot differentiate this particular function.
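One standard workaround for the non-differentiable kink is case analysis: on $X \le Y$ the utility is $2X$, on $X \ge Y$ it is $X + Y$, and both pieces are linear, so on the budget line the optimum sits at a corner or at the kink $X = Y$. A minimal sketch (the prices and income are made up):

```python
# Evaluate U = min{X, Y} + X at the only three candidate bundles on the
# budget line p_x * X + p_y * Y = I: the two corners and the kink X = Y.
def maximize_utility(I, px, py):
    candidates = [
        (I / px, 0.0),                    # all income on X
        (0.0, I / py),                    # all income on Y
        (I / (px + py), I / (px + py)),   # kink: X = Y
    ]
    utility = lambda X, Y: min(X, Y) + X
    return max(candidates, key=lambda c: utility(*c))

print(maximize_utility(I=10, px=1, py=2))   # -> (10.0, 0.0) for these prices
```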
asked 2022-04-06
Given a matrix-valued function $A : \mathbb{R}^d \to \mathbb{R}^{n \times n}$ and a (tall) matrix $B \in \mathbb{R}^{n \times m}$, where $n > m$, let the matrix-valued function $C : \mathbb{R}^d \to \mathbb{R}^{m \times m}$ be defined by
$$C(x) := B^T A(x) B$$
I would like to maximize the determinant of $C(x)$ without directly considering $C(x)$. If I maximize the determinant of $A(x)$, does that maximize the determinant of $C(x)$ as well?

If it were $m = n$, then we would have
$$\det(C) = \det(B)^2 \det(A)$$
Hence, maximizing $\det(A)$ would as a consequence maximize $\det(C)$. Does something similar hold in the case $n > m$? Or is it possible to define some lower bound for $\det(C)$? And if so, how can it be proved?

If it doesn't hold in the previous general case, it may help that in my specific case $C$ and $A$ are two positive definite matrices and $B$ is a sparse matrix of zeros and ones; $A$ is also block diagonal. The only approach I came up with was using Cauchy-Binet for the determinant of a product of rectangular matrices, but I got stuck with a summation involving the principal minors of the Cholesky factorization of $A$ times minors of $B$.
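For the general question above, a counterexample does exist: take diagonal $A$'s whose large entries sit in rows that $B$ ignores, e.g. $n = 2$, $m = 1$, $B = e_2$, $A_1 = \mathrm{diag}(4, 0.5)$, $A_2 = I$, so $\det(A_1) = 2 > 1 = \det(A_2)$ but $\det(C_1) = 0.5 < 1 = \det(C_2)$. A hypothetical random probe (all sizes made up, not a proof) that searches for such failures:

```python
# Sample sparse 0/1 matrices B and random SPD matrices A1, A2, then test
# whether det(A1) > det(A2) can coexist with det(B^T A1 B) < det(B^T A2 B).
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)        # symmetric positive definite

for trial in range(10000):
    B = (rng.random((n, m)) < 0.3).astype(float)   # sparse zeros and ones
    A1, A2 = random_spd(n), random_spd(n)
    if np.linalg.det(A1) > np.linalg.det(A2):
        d1 = np.linalg.det(B.T @ A1 @ B)
        d2 = np.linalg.det(B.T @ A2 @ B)
        if d1 < d2 - 1e-9:
            print("counterexample at trial", trial, d1, d2)
            break
```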
asked 2022-07-03
I am reading a paper that mentions a hump-function maximization, and I am trying to prove the point of maximization:
$$m = (1 - x)^{1 - \sigma} x^{\sigma}, \quad \text{where } x, \sigma \in [0, 1]$$
It is said that $m$ is a hump-shaped function of $x$ maximized at $x = \sigma$, where $\sigma$ is a parameter that is assumed to be fixed here.

My attempt:
First derivative with respect to $x$: $-(1 - x)^{-\sigma} x^{\sigma} + (1 - x)^{1 - \sigma} x^{\sigma - 1} = 0$
$\Rightarrow -1 + (1 - x) x^{-1} = 0$
Second derivative with respect to $x$: $-(1 - x)^{-\sigma - 1} x^{\sigma} + (1 - x)^{-\sigma} x^{\sigma - 1} - (1 - x)^{-\sigma} x^{\sigma - 1} + (1 - x)^{1 - \sigma} x^{\sigma - 2}$
I couldn't get to the given result based on the above. Any clarification would be appreciated!
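My hedged guess at the gap: the power-rule coefficients $(1 - \sigma)$ and $\sigma$ seem to have dropped out of the derivative above (without them the critical point comes out at $x = 1/2$ rather than $x = \sigma$). A SymPy check of the full derivative:

```python
# Differentiate m = (1 - x)**(1 - sigma) * x**sigma and solve m'(x) = 0.
import sympy as sp

x, sigma = sp.symbols("x sigma", positive=True)
m = (1 - x) ** (1 - sigma) * x ** sigma

dm = sp.diff(m, x)
# dm factors as (1 - x)**(-sigma) * x**(sigma - 1) * (sigma - x),
# so the unique interior critical point is x = sigma.
print(sp.solve(sp.Eq(dm, 0), x))   # expect [sigma]
```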
asked 2022-07-12
I am an economist with some math background but not strong enough to solve this. I'm trying to solve:
$$\max_x \; f(x, y(x)) = a\, y(x) + b\, g(x)$$
where
$$y(x) = \mathbf{1}(h(x) > 0).$$
$h(x)$ is a linear function of $x$, and $a, b \in \mathbb{R}$ are constants. $x \in X$, where $X$ is a compact and continuous subset of $[0, 1]$. $g(x)$ is a concave and differentiable function.

Now, my problem is that the optimal choice of $x$ depends on the value of $y(x)$, which depends on $x$.

I would like to know how to find, if possible, a closed-form solution. If that is impossible, what kind of algorithm should I use, and where can I find the essential readings to learn it?
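Since $y(x) = \mathbf{1}(h(x) > 0)$ is piecewise constant, one common approach is to split the feasible set into the region where $y = 1$ and the region where $y = 0$, solve the smooth concave problem on each piece, and compare. A minimal sketch, where $g$, $h$, and the interval are all hypothetical stand-ins:

```python
# Split [0, 1] at the root of the linear h, maximize a*y + b*g(x) on each
# piece (y is constant there), and keep the better of the two solutions.
from scipy.optimize import minimize_scalar

a, b = 1.0, 2.0
g = lambda x: -(x - 0.6) ** 2      # concave, differentiable (hypothetical)
h = lambda x: x - 0.5              # linear (hypothetical): y = 1 iff x > 0.5

def solve_on(lo, hi, y):
    # maximize a*y + b*g(x) on [lo, hi] by minimizing its negation
    res = minimize_scalar(lambda x: -(a * y + b * g(x)),
                          bounds=(lo, hi), method="bounded")
    return res.x, -res.fun

best = max([solve_on(0.0, 0.5, y=0), solve_on(0.5, 1.0, y=1)],
           key=lambda s: s[1])
print(best)   # (x*, value) across both regions; here x* = 0.6, value = 1.0
```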

EDIT: Thanks Robert for his great answer. Now I would just like to ask this follow-up question:

Can this method of solving for the maximum be adopted in a dynamic programming version of this problem? Say I want to maximize $V(x_t) = \max_{x_t} f(x_t, y_t(x_t)) + \delta V(x_{t+1})$, but the constant $a$ now becomes a function of $x_{t-1}$? Here $\delta$ is a discount factor.
