Semaj Christian

Answered question

2022-06-24

I am trying to solve functional maximization problems. They are typically of the following form (where the support of θ is [0,1]):
$$\int_0^1 \left[\, v(\theta, x(\theta)) + u(\theta, x(\theta)) - u_1(\theta, x(\theta))\,\frac{1-F(\theta)}{f(\theta)} \,\right] f(\theta)\, d\theta$$
One way that was proposed to me is point-wise maximization: fix a θ and then solve
$$\arg\max_{x(\theta)} \;\; v(\theta, x(\theta)) + u(\theta, x(\theta)) - u_1(\theta, x(\theta))\,\frac{1-F(\theta)}{f(\theta)}$$
Solving this problem gives me a number x for each θ, so I recover a function x(θ) that should maximize the original objective.
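To make the point-wise recipe concrete, here is a minimal numerical sketch in Python. The specific forms of v, u, and the distribution F below are assumptions chosen purely for illustration (they are not from the question); the structure is the point: loop over a grid of θ and maximize the bracketed integrand over x alone at each θ.

```python
# Minimal sketch of pointwise maximization on a theta grid.
# Assumed (illustrative) primitives:
#   v(theta, x) = theta*x - x**2/2
#   u(theta, x) = theta*x        so u_1 = du/dtheta = x
#   theta ~ Uniform[0,1]         so (1 - F(theta))/f(theta) = 1 - theta
import numpy as np
from scipy.optimize import minimize_scalar

def v(theta, x):
    return theta * x - 0.5 * x**2

def u(theta, x):
    return theta * x

def u1(theta, x):           # partial derivative of u with respect to theta
    return x

def hazard_term(theta):     # (1 - F(theta)) / f(theta) for Uniform[0, 1]
    return 1.0 - theta

def pointwise_objective(x, theta):
    # The bracketed integrand, with theta held fixed.
    return v(theta, x) + u(theta, x) - u1(theta, x) * hazard_term(theta)

# For each theta on a grid, maximize the integrand over x alone.
thetas = np.linspace(0.0, 1.0, 101)
x_star = np.array([
    minimize_scalar(lambda x: -pointwise_objective(x, th),
                    bounds=(0.0, 10.0), method="bounded").x
    for th in thetas
])

# x_star[i] is the maximizer for thetas[i]; together they trace out x(theta).
# For these assumed primitives the first-order condition gives
# x(theta) = 3*theta - 1, clipped at the lower bound 0, which the numerical
# solution should reproduce.
```

The array `x_star` is the discretized version of the function x(θ) that the point-wise recipe recovers; with a finer grid or an analytic first-order condition you would obtain the continuous schedule.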

I have two questions related to this:
1) Does such point-wise maximization always work?
2) What happens if, rather than doing point-wise maximization, I take the derivative of the objective function with respect to x(θ) and set the first-order condition equal to 0? Is this a legitimate way of solving the problem? Can someone show exactly what such a derivative would look like and how to compute it?

Answer & Explanation

Brendon Fernandez

Beginner · 2022-06-25 · Added 14 answers

Regarding your first question, pointwise maximization works as long as the limits of integration are not a function of your control. The reason is that if the bracketed integrand is maximized at each θ separately, then its integral is at least as large as the integral under any other admissible choice of x(·), so the recovered x(θ) maximizes the objective (provided the resulting function is admissible, e.g. measurable).
Regarding your second question, no, you cannot, precisely because you are solving a functional maximization problem: you do not know in advance whether x(θ) is continuous, differentiable, etc. For instance, imagine x(θ) is an indicator function, equal to 1 on some set of types and 0 elsewhere. Then taking the derivative of the objective function with respect to x(θ) does not make any sense.
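For what it is worth, here is a sketch of what the "derivative" route looks like if one is willing to assume that a smooth, interior maximizer exists (this is my own hedged addition, not something guaranteed by the problem). Because the integrand depends on x(θ) but not on x'(θ), the first variation of the objective along a perturbation x(θ) + εh(θ) is

$$\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0} \int_0^1 \left[\, v(\theta, x+\varepsilon h) + u(\theta, x+\varepsilon h) - u_1(\theta, x+\varepsilon h)\,\frac{1-F(\theta)}{f(\theta)} \,\right] f(\theta)\, d\theta
= \int_0^1 \left[\, v_2(\theta, x(\theta)) + u_2(\theta, x(\theta)) - u_{12}(\theta, x(\theta))\,\frac{1-F(\theta)}{f(\theta)} \,\right] h(\theta)\, f(\theta)\, d\theta,$$

where subscripts denote partial derivatives with respect to the indicated argument. Requiring this to vanish for every admissible perturbation h gives, for almost every θ with f(θ) > 0,

$$v_2(\theta, x(\theta)) + u_2(\theta, x(\theta)) - u_{12}(\theta, x(\theta))\,\frac{1-F(\theta)}{f(\theta)} = 0,$$

which is exactly the first-order condition of the pointwise problem. So under those smoothness and interiority assumptions the two routes coincide; without them, only the pointwise argument goes through.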
