
Lexi Chandler

2022-04-07

I am trying to solve the following question:
$$\text{Maximize } f(x_1, x_2, \dots, x_n) = 2\sum_{i=1}^{n} x_i^T A x_i + \sum_{i=1}^{n-1}\sum_{j>i}^{n}\left(x_i^T A x_j + x_j^T A x_i\right)$$
subject to
$$x_i^T x_j = \begin{cases} 1, & i = j \\ -\dfrac{1}{n}, & i \neq j \end{cases}$$
where the $x_i$ are column vectors ($m \times 1$ matrices) with real entries and $A$ is an $m \times m$ real symmetric matrix, with $n < m$.

From another source, I know that the answer is
$$f_{\max} = \frac{n+1}{n}\sum_{i=1}^{n}\lambda_i,$$
where the $\lambda_i$ are the eigenvalues of $A$ sorted in non-increasing order (counting multiplicity). But I am unable to prove it. I would appreciate any help (preferably via an established matrix inequality or the method of Lagrange multipliers).
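
As a sanity check before attempting a proof, the claimed value can be tested numerically. The NumPy sketch below is only illustrative: the sizes m = 6, n = 3, the random symmetric A, and the Cholesky-based sampling of feasible X are arbitrary choices, not part of the question.

```python
# Numerical sanity check of the claimed maximum (not a proof).
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3                                   # any n < m works here

# Random symmetric A; eigenvalues sorted in non-increasing order.
A = rng.standard_normal((m, m))
A = (A + A.T) / 2
lam = np.sort(np.linalg.eigvalsh(A))[::-1]
claimed_max = (n + 1) / n * lam[:n].sum()

# Gram matrix forced by the constraint: 1 on the diagonal, -1/n off it.
G = (1 + 1 / n) * np.eye(n) - np.ones((n, n)) / n
L = np.linalg.cholesky(G)                     # G = L L^T

def f(X):
    """Original objective 2*sum_i x_i^T A x_i + sum_{i<j} (x_i^T A x_j + x_j^T A x_i)."""
    val = 2 * sum(X[:, i] @ A @ X[:, i] for i in range(n))
    val += sum(X[:, i] @ A @ X[:, j] + X[:, j] @ A @ X[:, i]
               for i in range(n) for j in range(i + 1, n))
    return val

for _ in range(2000):
    Q, _ = np.linalg.qr(rng.standard_normal((m, n)))   # m x n, orthonormal columns
    X = Q @ L.T                                        # then X^T X = L Q^T Q L^T = G
    assert f(X) <= claimed_max + 1e-9
print("claimed maximum never exceeded:", claimed_max)
```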

Answer & Explanation

Kharroubip9ej0

Beginner · 2022-04-08 · Added 10 answers

Why Lagrange multipliers? Your maximization problem can be solved in a rather straightforward manner using some standard tricks in matrix theory. Let $e = \frac{1}{\sqrt{n}}(1, \dots, 1)^T \in \mathbb{R}^n$ and $X = (x_1, \dots, x_n) \in M_{m,n}(\mathbb{R})$ (hence $X$ is "tall" and $X^T$ is "wide"). The problem can be formulated as maximizing
$$f(X) = \operatorname{tr}(X^T A X) + n\, e^T X^T A X e = \operatorname{tr}\big((I + n e e^T) X^T A X\big)$$
subject to the constraint $X^T X = \frac{n+1}{n} I - e e^T$.
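
This reformulation holds for arbitrary $X$ (it does not use the constraint) and can be checked directly; the snippet below is a minimal NumPy sketch with arbitrary illustrative dimensions and a random symmetric $A$.

```python
# Check tr((I + n e e^T) X^T A X) against the original sum-form objective.
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 3
A = rng.standard_normal((m, m))
A = (A + A.T) / 2
X = rng.standard_normal((m, n))               # no constraint needed for this identity
e = np.ones((n, 1)) / np.sqrt(n)              # unit vector (1, ..., 1)/sqrt(n)

original = 2 * sum(X[:, i] @ A @ X[:, i] for i in range(n)) + sum(
    X[:, i] @ A @ X[:, j] + X[:, j] @ A @ X[:, i]
    for i in range(n) for j in range(i + 1, n))
trace_form = np.trace((np.eye(n) + n * e @ e.T) @ X.T @ A @ X)
assert np.isclose(original, trace_form)
```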
The eigenvalues of $X^T X$ are $\frac{n+1}{n}$ (with multiplicity $n-1$) and $\frac{1}{n}$ (with $e$ as an eigenvector). Pick any orthogonal matrix $V$ whose last column is $e$. Then every $X$ that satisfies $X^T X = \frac{n+1}{n} I - e e^T$ can be written as $X = U \Sigma V^T$, where $U$ is some $m \times m$ orthogonal matrix, $\Sigma$ is the $m \times n$ (tall) diagonal matrix with diagonal $\big(\sqrt{\tfrac{n+1}{n}}, \dots, \sqrt{\tfrac{n+1}{n}}, \sqrt{\tfrac{1}{n}}\big)$, and $V$ is the $n \times n$ orthogonal matrix chosen above. Now let $e_n = (0, \dots, 0, 1)^T \in \mathbb{R}^n$. Then
$$\Sigma V^T (I + n e e^T) V \Sigma^T = \Sigma \big[I + n (V^T e)(e^T V)\big] \Sigma^T = \Sigma (I + n\, e_n e_n^T) \Sigma^T = \operatorname{diag}\big(\underbrace{\tfrac{n+1}{n}, \dots, \tfrac{n+1}{n}}_{n\text{ entries}},\ \underbrace{0, \dots, 0}_{m-n\text{ entries}}\big) = D \quad \text{(say)}.$$
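
As a quick check of this displayed identity before moving on, one can build a concrete $V$ by completing $e$ to an orthonormal basis (the QR construction below is just one convenient choice) and evaluate both sides numerically.

```python
# Verify Sigma V^T (I + n e e^T) V Sigma^T = diag((n+1)/n, ..., (n+1)/n, 0, ..., 0).
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 3
e = np.ones((n, 1)) / np.sqrt(n)

# Orthogonal V with last column e: complete e to an orthonormal basis via QR.
Q, _ = np.linalg.qr(np.column_stack([e, rng.standard_normal((n, n - 1))]))
Q[:, 0] *= np.sign(Q[:, 0] @ e.ravel())       # fix the sign so this column is +e
V = np.roll(Q, -1, axis=1)                    # move e to the last column

# Tall diagonal Sigma with singular values sqrt((n+1)/n) (n-1 times) and sqrt(1/n).
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag([np.sqrt((n + 1) / n)] * (n - 1) + [np.sqrt(1 / n)])

D_left = Sigma @ V.T @ (np.eye(n) + n * e @ e.T) @ V @ Sigma.T
D_right = np.diag([(n + 1) / n] * n + [0.0] * (m - n))
assert np.allclose(D_left, D_right)
```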
Hence
$$f(X) = \operatorname{tr}\big((I + n e e^T) X^T A X\big) = \operatorname{tr}\big((I + n e e^T)\, V \Sigma^T U^T A U \Sigma V^T\big) = \operatorname{tr}\big(\Sigma V^T (I + n e e^T) V \Sigma^T\, U^T A U\big) = \operatorname{tr}\big(D\, U^T A U\big),$$
and by the trace inequality for symmetric matrices (Ky Fan's maximum principle / von Neumann's trace inequality), $\operatorname{tr}(D\, U^T A U)$ is maximized over orthogonal $U$ when $U^T A U$ is diagonal with its entries in non-increasing order, i.e. when the columns of $U$ are eigenvectors of $A$ ordered by decreasing eigenvalue. Since the nonzero diagonal entries of $D$ are $n$ copies of $\frac{n+1}{n}$, the maximum is $\frac{n+1}{n}\sum_{i=1}^{n}\lambda_i$, as claimed.
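
To make the final step concrete, here is a NumPy sketch (same arbitrary illustrative sizes as above) that takes the columns of $U$ to be eigenvectors of $A$ ordered by decreasing eigenvalue, forms $X = U\Sigma V^T$, and checks that $X$ is feasible and that $f(X)$ equals $\frac{n+1}{n}\sum_{i=1}^{n}\lambda_i$.

```python
# Build an explicit maximizer X = U Sigma V^T and verify the constraint and the value.
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 3
A = rng.standard_normal((m, m))
A = (A + A.T) / 2
lam, U = np.linalg.eigh(A)
order = np.argsort(lam)[::-1]
lam, U = lam[order], U[:, order]              # eigenvalues descending, U orthogonal

e = np.ones((n, 1)) / np.sqrt(n)
Q, _ = np.linalg.qr(np.column_stack([e, rng.standard_normal((n, n - 1))]))
Q[:, 0] *= np.sign(Q[:, 0] @ e.ravel())
V = np.roll(Q, -1, axis=1)                    # orthogonal with last column e

Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag([np.sqrt((n + 1) / n)] * (n - 1) + [np.sqrt(1 / n)])
X = U @ Sigma @ V.T

assert np.allclose(X.T @ X, (n + 1) / n * np.eye(n) - e @ e.T)       # feasible
f_val = np.trace((np.eye(n) + n * e @ e.T) @ X.T @ A @ X)
assert np.isclose(f_val, (n + 1) / n * lam[:n].sum())                # attains the bound
```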
Autumn Pham

Beginner · 2022-04-09 · Added 3 answers

This is not an answer but a reformulation of the question. As robjohn stated, the question becomes: maximize $f : M_{m \times n}(\mathbb{R}) \to \mathbb{R}$ given by

$$f(X) = \operatorname{tr}(X^T A X) + u^T A u, \qquad u = X\mathbf{1} = \sum_{i=1}^{n} x_i.$$

This can be combined into
$$\operatorname{tr}(X^T A X) + \operatorname{tr}(u^T A u) = \operatorname{tr}\big([X\ \ u]^T A\, [X\ \ u]\big) = \operatorname{tr}\big([I\ \ \mathbf{1}]^T X^T A X\, [I\ \ \mathbf{1}]\big),$$
where $[X\ \ u]$ is $X$ with $u$ appended as an extra column, $I$ is the $n \times n$ identity matrix, and $\mathbf{1}$ is the all-ones column vector, so that $[X\ \ u] = X\,[I\ \ \mathbf{1}]$.
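
Both equalities can be confirmed numerically; in the sketch below $u$ is taken to be $X\mathbf{1}$, i.e. the sum of the columns of $X$, and the dimensions and random matrices are arbitrary illustrative choices.

```python
# Check tr(X^T A X) + u^T A u = tr([X u]^T A [X u]) = tr([I 1]^T X^T A X [I 1]) with u = X 1.
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 3
A = rng.standard_normal((m, m))
A = (A + A.T) / 2
X = rng.standard_normal((m, n))

ones = np.ones((n, 1))
u = X @ ones                                  # u = x_1 + ... + x_n
lhs = np.trace(X.T @ A @ X) + (u.T @ A @ u).item()
Xu = np.hstack([X, u])                        # the m x (n+1) matrix [X u]
I1 = np.hstack([np.eye(n), ones])             # the n x (n+1) matrix [I 1]
assert np.isclose(lhs, np.trace(Xu.T @ A @ Xu))
assert np.isclose(lhs, np.trace(I1.T @ X.T @ A @ X @ I1))
```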
