Carina Valenzuela 2022-05-08 Answered
I want to solve the following optimization problem
$$\text{maximize } f(X) = -\log\det(X+Y) - a^T (X+Y)^{-1} a \quad \text{subject to } X \succeq W,$$
where the design variable $X$ is symmetric positive semidefinite, $W, Y$ are fixed symmetric positive semidefinite matrices, and $a$ is a given vector. The inequality $X \succeq W$ means that $X - W$ is symmetric positive semidefinite.

I was wondering if there's hope to find an analytical solution to either the constrained or unconstrained problem. And if there is none, could I use convex optimization techniques to solve this numerically?
Answers (1)

Raiden Williamson
Answered 2022-05-09 Author has 14 answers
Make the substitution $Z = (X+Y)^{-1} \iff X = Z^{-1} - Y$; then the problem becomes convex:
$$\text{maximize } \log\det(Z) - a^T Z a \quad \text{subject to } 0 \preceq Z \preceq (Y+W)^{-1}.$$
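The change of variables can be sanity-checked numerically. Below is a minimal NumPy sketch (the random test instance and all names are my own, not from the answer) verifying on a random feasible point that the two objectives agree and that $X \succeq W$ maps to $Z \preceq (Y+W)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_psd(k):
    """Random symmetric positive semidefinite matrix."""
    M = rng.standard_normal((k, k))
    return M @ M.T

Y = random_psd(n) + np.eye(n)    # positive definite, so X + Y is invertible
W = random_psd(n)
a = rng.standard_normal(n)
X = W + random_psd(n)            # a feasible point: X - W is PSD

def f(X):
    """Original objective: -log det(X+Y) - a^T (X+Y)^{-1} a."""
    _, logdet = np.linalg.slogdet(X + Y)
    return -logdet - a @ np.linalg.solve(X + Y, a)

def g(Z):
    """Transformed objective: log det(Z) - a^T Z a."""
    _, logdet = np.linalg.slogdet(Z)
    return logdet - a @ Z @ a

Z = np.linalg.inv(X + Y)
print(np.isclose(f(X), g(Z)))    # True: objectives agree at corresponding points

# X ⪰ W maps to Z ⪯ (Y+W)^{-1}: the gap matrix should be PSD.
gap = np.linalg.inv(Y + W) - Z
print(bool(np.all(np.linalg.eigvalsh(gap) >= -1e-9)))   # True
```

Since $\log\det(Z)$ is concave and $-a^T Z a$ is linear in $Z$, the transformed problem is a concave maximization over linear matrix inequality constraints, so any SDP-capable modeling tool (CVXPY, for instance) should be able to solve it numerically.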

Relevant Questions

asked 2022-04-06
Given a matrix-valued function $A : \mathbb{R}^d \to \mathbb{R}^{n \times n}$ and a (tall) matrix $B \in \mathbb{R}^{n \times m}$, where $n > m$, let the matrix-valued function $C : \mathbb{R}^d \to \mathbb{R}^{m \times m}$ be defined by
$$C(x) := B^T A(x) B.$$
I would like to maximize the determinant of $C(x)$ without directly considering $C(x)$. If I maximize the determinant of $A(x)$, does that maximize the determinant of $C(x)$ as well?

If it were $m = n$, then we would have
$$\det(C) = \det(B)^2 \det(A).$$
Hence maximizing $\det(A)$ would, as a consequence, maximize $\det(C)$. Does something similar hold in the case $n > m$? Or is it possible to derive some lower bound for $\det(C)$? And if so, how would one prove it?

If it doesn't hold in the general case, perhaps it helps that in my specific case $C$ and $A$ are both positive definite and $B$ is a sparse matrix of zeros and ones; $A$ is also block diagonal. The only approach I came up with is applying Cauchy-Binet to the determinant of a product of rectangular matrices, but I got stuck with a summation involving principal minors of the Cholesky factorization of $A$ times minors of $B$.
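A quick numerical counterexample (my own construction, not from the question) shows that maximizing $\det(A)$ need not maximize $\det(C)$ when $n > m$, even with $A$ positive definite and diagonal and $B$ a selection matrix of zeros and ones:

```python
import numpy as np

# n = 2, m = 1: B selects the first coordinate, so C = A[0, 0] as a 1x1 matrix.
B = np.array([[1.0], [0.0]])

def detC(A):
    return np.linalg.det(B.T @ A @ B)

A1 = np.diag([1.0, 1.0])   # det(A1) = 1,  det(C) = 1
A2 = np.diag([0.5, 4.0])   # det(A2) = 2, larger than det(A1) ...
print(np.linalg.det(A2) > np.linalg.det(A1))   # True
print(detC(A2) < detC(A1))                     # True: ... yet det(C) shrank
```

The issue is that $\det(A)$ can grow through directions that $B$ discards, so no monotone relation holds in general; at best one can hope for bounds, e.g. via Cauchy-Binet as attempted above.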
asked 2022-05-07
Some have proposed that, for a natural centrality measure, the most central a node can get is the center node of the star network. I've heard this called "star maximization." That is, for a measure $M(\cdot)$ and a star network $g^*$ with center $c^*$,
$$(c^*, g^*) \in \arg\max_{(i, g) \in N \times G(N)} M(i, g),$$
where $N$ is the set of nodes and $G(N)$ is the set of all unweighted network structures.

I'd like to learn about some centrality measures that don't satisfy this property, but "star maximization" isn't a heavily used term, so I'm having trouble searching for such measures. What are some centrality measures that violate it?
asked 2022-05-02
I maximized the function
$$f(x, y) = (x + 2)(y + 1)$$
subject to
$$4x + 6y = 130, \quad x, y \geq 0,$$
and I found $(x, y) = (16, 11)$. But the question I'm doing asks me to analyze the "sufficient conditions of second order."

So I thought I should use some theorem that tells me $(16, 11)$ is the point where $f$ attains its maximum. I don't know whether Weierstrass' theorem applies here, and I believe the Hessian matrix doesn't help me either.
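One concrete way to check the second-order sufficient condition with a single linear constraint is to eliminate the constraint and examine the curvature of the reduced one-variable objective. A small sketch (variable names are mine):

```python
def g(x):
    """Objective along the constraint: substitute y = (130 - 4x)/6 into f."""
    y = (130 - 4 * x) / 6
    return (x + 2) * (y + 1)

x0, h = 16.0, 1e-3
print(round(g(x0), 6))    # 216.0, the candidate optimal value at (16, 11)

# Central-difference second derivative. g is quadratic in x, so this is
# essentially exact; a negative value means g is concave along the
# constraint, which is the second-order sufficient condition for a maximum.
g2 = (g(x0 + h) - 2 * g(x0) + g(x0 - h)) / h ** 2
print(g2 < 0)             # True (analytically g'' = -4/3)
```

Here the reduced objective is a downward-opening quadratic, so $(16, 11)$ is not just a stationary point but the global maximum on the constraint line (the bounds $x, y \geq 0$ only shrink the feasible segment).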
asked 2022-05-09
Show that $f(x_1^*, \ldots, x_n^*) = \max\{f(x_1, \ldots, x_n) : (x_1, \ldots, x_n) \in \Omega\}$ if and only if $-f(x_1^*, \ldots, x_n^*) = \min\{-f(x_1, \ldots, x_n) : (x_1, \ldots, x_n) \in \Omega\}$.
I am not exactly sure how to approach this problem; it is very general, so I can't assume anything about the shape of $f$. It seems obvious that negating $f$ turns the max problem into a min problem. Thoughts?
asked 2022-05-15
I need to prove the maximizer of the following summation, using Lagrange multipliers:
$$\max_{x} \sum_{m} a_m \log(x_m)$$
$$0 \leq x_m \leq 1$$
$$\sum_{m} x_m = 1$$
The solution is the closed form $x_m^* = \dfrac{a_m}{\sum_m a_m}$.

I formulated the Lagrange equation but I am confused about the signs and the multipliers.

$$L(x, \lambda, \mu) = \sum_m a_m \log(x_m) + \sum_m \lambda_m (1 - x_m) + \mu \left( \sum_m x_m - 1 \right).$$
Is this formulation correct? What is wrong?

Note: there is only one $\mu$ because there is one equality constraint.
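For what it's worth, the closed form follows from stationarity alone once the bound constraints are seen to be inactive, and it can be checked numerically. A short sketch (the coefficients $a$ are my own example, not from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.0, 2.0, 3.0])    # example coefficients

def obj(x):
    return float(np.sum(a * np.log(x)))

# At the optimum 0 < x_m < 1, so the bound multipliers vanish and
# stationarity gives a_m / x_m = mu (up to the sign convention on mu),
# i.e. x_m = a_m / mu; the constraint sum x_m = 1 forces mu = sum a_m.
x_star = a / a.sum()

# Sanity check: the closed form beats random feasible points on the simplex.
best_random = max(obj(rng.dirichlet(np.ones(3))) for _ in range(5000))
print(obj(x_star) >= best_random)   # True
```

Since the objective is strictly concave on the simplex, this stationary point is the unique global maximizer, which is why the random comparison never wins.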
asked 2022-05-14
I have a non-linear maximization problem and I want to convert it into a minimization problem. Can I do so by multiplying the objective by a negative sign, or is that wrong? And if it is wrong, what should I do?
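Yes: multiplying the objective by $-1$ is the standard conversion, and it is exact; the minimizers of $-f$ are precisely the maximizers of $f$, and the optimal values differ only in sign. A minimal numerical illustration (toy objective of my own choosing):

```python
import numpy as np

x = np.linspace(-3, 3, 601)
f = np.sin(x) + 0.5 * x                  # arbitrary nonlinear objective
# Maximizing f is the same as minimizing -f: same optimizer,
# and the optimal values differ only in sign.
print(np.argmax(f) == np.argmin(-f))     # True
print(np.max(f) == -np.min(-f))          # True
```

Note that constraints are unaffected by the sign flip; only the objective is negated.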
asked 2022-05-10
Consider that I have two items. Let $P_i(x_i)$ denote the profit I get from item $i$ by taking $x_i$ units of it (assume that $x_i$ can be a real number). Let $W_i(x_i)$ denote the weight consumed by item $i$ when I take $x_i$ units of it. Assume that both profit and weight functions are monotonically increasing in the quantity picked. The problem I am interested in is
$$\max_{x_1, x_2} \; P_1(x_1) + P_2(x_2) \quad \text{s.t.} \quad W_1(x_1) + W_2(x_2) \leq W,$$
where $W$ is the total weight allowed. Thus I need to find $x_i$ (the quantity of item $i$) such that the total profit is maximized while keeping the total weight under a given quantity. Let $x_1^*$ and $x_2^*$ be the solution to this problem. Is it true that
$$\frac{P_1(x_1^*)}{W_1(x_1^*)} = \frac{P_2(x_2^*)}{W_2(x_2^*)}?$$
Note that the ratio is essentially the "profit per unit weight" of an item. Suppose it were different at the optimum and the first item had the larger ratio, so I can get more profit per unit weight out of the first item. Then increasing its quantity should increase profit. To balance the weights, I decrease the second item's quantity, decreasing its profit as well. But since the gain from item 1 is larger, this should more than compensate for the loss from item 2 and make the total profit even higher. What is flawed in this logic?
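The flaw is that, at a smooth interior optimum, the KKT conditions equate the marginal ratios $P_i'(x_i^*)/W_i'(x_i^*)$, not the average ratios $P_i(x_i^*)/W_i(x_i^*)$: reallocating weight trades profit at the margin, not on average. A toy instance (functions chosen by me, not from the question) makes this concrete:

```python
import numpy as np

# P1(x) = log(1+x), P2(x) = 2*log(1+x), W1(x) = W2(x) = x, budget W = 4.
W = 4.0
x1 = np.linspace(0.0, W, 40001)
x2 = W - x1   # profits are increasing, so the weight constraint binds

total = np.log(1 + x1) + 2 * np.log(1 + x2)
k = np.argmax(total)
x1s, x2s = x1[k], x2[k]
print(round(x1s, 3), round(x2s, 3))   # 1.0 3.0, matching the KKT solution

# Average profit-per-unit-weight ratios are NOT equal at the optimum ...
r1 = np.log(1 + x1s) / x1s
r2 = 2 * np.log(1 + x2s) / x2s
print(round(r1, 3), round(r2, 3))     # 0.693 0.924

# ... but the marginal ratios P_i'/W_i' are (this is what KKT equates):
print(round(1 / (1 + x1s), 3), round(2 / (1 + x2s), 3))   # 0.5 0.5
```

The verbal argument above implicitly compares average returns, but a small transfer of weight from item 2 to item 1 changes profit by the difference of the *derivatives*, which is zero at the optimum even though the average ratios differ.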