Actually I don't know how to distinguish maximization and minimization of a nonlinear function with the Newton-Raphson method

Leah Pope 2022-06-28 Answered
Actually I don't know how to distinguish maximization and minimization of a nonlinear function with the Newton-Raphson method.

What I know is that, to find the optimum points, we have to compute this iteration:

$$x_{i+1} = x_i - [H_f(x_i)]^{-1} \nabla f(x_i)$$
Then, what is actually the difference between maximization and minimization using this method?

Answers (2)

Trey Ross
Answered 2022-06-29 Author has 30 answers
Newton-Raphson is based on a local quadratic approximation. The iterate moves to the optimum of the quadratic approximation. Whether you minimize or maximize does not depend on the iteration calculation (you cannot modify it to turn minimization into maximization or vice versa) but on the shape of the approximation. The approximation is convex when $H_f(x_i)$ is positive semidefinite (psd), and concave when $H_f(x_i)$ is negative semidefinite (nsd). When $H_f(x_i)$ is psd, you expect to move to a local minimum, whereas when it is nsd, you expect to move to a local maximum.

The easiest way to think about this is for functions $\mathbb{R} \to \mathbb{R}$, so let's take $f(x) = x^3$. At $x = 1$ the local quadratic approximation is $g(x) = 1 + 3(x - 1) + 3(x - 1)^2$, which is convex. So if you perform an iteration of Newton-Raphson, you move to the minimum of $g$ and you hope to find a minimum of $f$.

On the other hand, if you start at $x = -1$, the local quadratic approximation is $g(x) = -1 + 3(x + 1) - 3(x + 1)^2$, which is concave. So if you perform an iteration of Newton-Raphson, you move to the maximum of $g$ and you hope to find a maximum of $f$.

If the definiteness of $H_f$ does not agree with your goal (e.g., $H_f$ is nsd but you want to minimize), then a quadratic approximation is not useful. It might be better to switch to other methods such as gradient descent.
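To see this in action, here is a minimal sketch in Python (standard library only; the helper name `newton_step` and the starting points are just for illustration). The update formula is the same in both cases; only the sign of $f''$ at the current iterate decides whether the step lands on the minimum or the maximum of the local quadratic model.

```python
# Same Newton-Raphson update for f(x) = x**3, started once where the
# local quadratic model is convex and once where it is concave.

def newton_step(df, d2f, x):
    """One Newton-Raphson step x - f'(x)/f''(x) for a scalar function."""
    return x - df(x) / d2f(x)

df = lambda x: 3 * x**2   # f'(x) for f(x) = x**3
d2f = lambda x: 6 * x     # f''(x), the 1-D "Hessian"

for x0 in (1.0, -1.0):
    x = x0
    print(f"start at x0 = {x0}")
    for _ in range(5):
        shape = ("convex -> step targets a model minimum" if d2f(x) > 0
                 else "concave -> step targets a model maximum")
        print(f"  x = {x:+.4f}  ({shape})")
        x = newton_step(df, d2f, x)
```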


Yahir Crane
Answered 2022-06-30 Author has 9 answers
Suppose we want to find the $\hat{x} \in \mathbb{R}^k$ that maximizes the (twice continuously) differentiable function $f : \mathbb{R}^k \to \mathbb{R}$.

Well,
$$f(x + h) \approx a + b^\top h + \tfrac{1}{2} h^\top C h,$$
where $a = f(x)$, $b = \nabla f(x)$ and $C = D^2 f(x)$.

Note that $C$ will be symmetric. This implies
$$\nabla f(x + h) \approx b + C h.$$
Thus the first-order condition for a maximum is
$$0 = b + C \hat{h},$$
which implies that
$$\hat{h} = -C^{-1} b.$$
In other words, the vector that maximizes the second-order Taylor approximation to $f$ at $x$ is
$$x + \hat{h} = x - C^{-1} b = x - \left( D^2 f(x) \right)^{-1} \nabla f(x),$$
which I am sure you can relate to your initial formula above.
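A small numerical illustration of this derivation, as a sketch assuming NumPy (the quadratic example below is made up for demonstration): for a quadratic objective the step $\hat{h} = -C^{-1} b$ lands exactly on the stationary point, which here is a maximum because $C$ is negative definite.

```python
import numpy as np

# Concave quadratic f(x) = -0.5 x'Ax + c'x with A positive definite,
# so C = D^2 f(x) = -A is negative definite and the stationary point is a maximum.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
c = np.array([1.0, -2.0])

grad = lambda x: -A @ x + c   # b = grad f(x)
hess = lambda x: -A           # C = D^2 f(x), constant for a quadratic

x = np.zeros(2)
b, C = grad(x), hess(x)
h_hat = -np.linalg.solve(C, b)             # h_hat = -C^{-1} b (solve, don't invert)

print("x + h_hat =", x + h_hat)            # the maximizer A^{-1} c
print("gradient there:", grad(x + h_hat))  # ~ 0, so it is a stationary point
```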



You might be interested in

asked 2022-06-25
I am very new to optimization and I have to solve this equation

max U(k)=tlog(1+ yhpg/(pg+s))+mpge^(-ky)) st k>0

Can anybody give me an idea where I should start?
asked 2022-06-21
I'm having trouble understanding how to check the second order conditions for my unconstrained maximization problem.

This is the entire problem: Alicia wants to maximize her grade, which is a function of the time spent studying ($T$) and the number of cups of coffee ($C$) she drinks. Her grade out of 100 is given by the following function:
$$G(T, C) = 50 + 10T + 16C - (T^2 + 2TC + 2C^2)$$
In the first order conditions, I find the partial derivatives and set them equal to zero. I get the following two equations:

$10 - 2T - 2C = 0$ and $16 - 2T - 4C = 0$. The first equation was the partial derivative with respect to $T$ and the second equation was the partial derivative with respect to $C$. Solving these two equations, I find that $C = 3$ and $T = 2$.

Now, I need to check the second order conditions. I know that the second partial derivative with respect to both T and C should be negative. This checks out. I get -2 from the first equation (with respect to T) and I get -4 from the second equation (with respect to C). The last thing I need to do with the second order condition is multiply these two together (which yields 8) and then subtract the following:
$$\left( \frac{\partial^2 G}{\partial T \, \partial C} \right)^2$$
Please forgive me if this formula isn't displaying correctly. I tried using the LaTeX equation editor, but I'm not sure if it worked. Anyway, I need to know how to derive this. What is it asking for? I know that this part should be $-2$ squared, which is 4. Then, $8 - 4 = 4$, which is positive and tells me that the second order conditions are met.

But where is the -2 coming from? I know within both of the equations, there are a few -2's. But, I'm not sure exactly where this -2 comes from.
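One way to see where that $-2$ comes from is to compute the Hessian of $G$ symbolically: the off-diagonal entry of the Hessian is the cross-partial $\frac{\partial^2 G}{\partial T \, \partial C} = -2$, and that is the term being squared. A short sketch, assuming SymPy:

```python
import sympy as sp

T, C = sp.symbols('T C')
G = 50 + 10*T + 16*C - (T**2 + 2*T*C + 2*C**2)

H = sp.hessian(G, (T, C))
print(H)        # Matrix([[-2, -2], [-2, -4]]): the off-diagonal -2 is d2G/dTdC
print(H.det())  # (-2)*(-4) - (-2)**2 = 8 - 4 = 4 > 0, and G_TT = -2 < 0
```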
asked 2022-06-26
Edit: Removed solved in title, because I realize I need someone to check my work.
Ok, so the problem is a lot more straightforward than I originally approached it (which was a false statement -- so it was excluded).
Let $R, S, x \in \mathbb{N}$ with $x \leq R \cdot S$ and $0 < R \leq S$. Next, define $B$ as a multiplicative factor of $x - c$ with $c \geq 0$ and $B \leq S$, such that $\frac{x - c}{B} = A \leq R$ and $A \in \mathbb{N}$. What value of $B$ maximizes $A$?
asked 2022-05-09
Show that $f(x_1^*, \ldots, x_n^*) = \max\{f(x_1, \ldots, x_n) : (x_1, \ldots, x_n) \in \Omega\}$ if and only if $-f(x_1^*, \ldots, x_n^*) = \min\{-f(x_1, \ldots, x_n) : (x_1, \ldots, x_n) \in \Omega\}$
I am not exactly sure how to approach this problem -- it is very general, so I can't assume anything about the shape of f. It seems obvious that flipping the max problem with a negative turns it into a min problem. Thoughts?
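One way to make that precise, using only the fact that negation reverses inequalities:

$$x^* \text{ maximizes } f \text{ over } \Omega \iff f(x^*) \geq f(x) \ \forall x \in \Omega \iff -f(x^*) \leq -f(x) \ \forall x \in \Omega \iff x^* \text{ minimizes } -f \text{ over } \Omega.$$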
asked 2022-06-23
I'd like to compute
$$A^* = \underset{A \in \mathbb{R}^{d \times k},\ A^\top A = I}{\operatorname{arg\,max}} \ \det(A^\top \Lambda A)$$
where $k \leq d$, $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_d)$ and $\lambda_1 \geq \cdots \geq \lambda_d \geq 0$.

I strongly suspect that $A^* = [\epsilon_1 \ \cdots \ \epsilon_k]$, but after various attempts to prove it I gave up.
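A quick numerical sanity check of that suspicion, as a sketch assuming NumPy (and assuming the $\epsilon_i$ denote the standard basis vectors): compare $\det(A^\top \Lambda A)$ at $A = [\epsilon_1 \ \cdots \ \epsilon_k]$ with its value at random matrices satisfying $A^\top A = I$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 3
lam = np.sort(rng.random(d))[::-1]   # lambda_1 >= ... >= lambda_d >= 0
Lam = np.diag(lam)

A_guess = np.eye(d)[:, :k]           # columns e_1, ..., e_k

best_random = -np.inf
for _ in range(2000):
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))   # random A with A'A = I
    best_random = max(best_random, np.linalg.det(Q.T @ Lam @ Q))

print("det at [e_1 ... e_k]:", np.linalg.det(A_guess.T @ Lam @ A_guess))
print("best det, random A  :", best_random)   # stays at or below the value above
```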
asked 2022-05-08
Which are the most famous problems having an objective of maximizing a nonlinear convex function (or minimizing a concave function)? As far as I know, such an objective with respect to linear constraints is NP-hard.
asked 2022-05-02
I maximized the function
$$f(x, y) = (x + 2)(y + 1)$$
subject to
$$4x + 6y = 130, \qquad x, y \geq 0,$$
and I found $(x, y) = (16, 11)$. But the question I'm doing asks me to analyse the "sufficient conditions of second order".

So, I thought I should use some theorem that tells me $(16, 11)$ is the point where $f$ attains its maximum. I don't know if I could use Weierstrass' Theorem here, and I believe the Hessian matrix doesn't help me either.
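One way to check a second-order sufficient condition here, as a sketch assuming SymPy: eliminate $y$ using $4x + 6y = 130$ and verify that the restricted one-variable function is concave at $x = 16$.

```python
import sympy as sp

x = sp.symbols('x')
y = (130 - 4*x) / 6                  # eliminate y using 4x + 6y = 130
g = (x + 2) * (y + 1)                # f restricted to the constraint line

print(sp.solve(sp.Eq(sp.diff(g, x), 0), x))   # [16]
print(sp.diff(g, x, 2))                       # -4/3 < 0, so x = 16 (y = 11) is a maximum
```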
