I am trying to solve functional maximization problems. They are typically of the following form (where the support of $\theta$ is $[0,1]$):

$$\max_{x(\cdot)} \int_0^1 f(\theta, x(\theta))\, d\theta$$
Now, one way that was proposed to me is point-wise maximization. That is, you fix a $\theta$ and then solve:

$$\max_{x} f(\theta, x)$$
Solving this problem gives me a number $x^*(\theta)$ for each $\theta$, and so I recover a function $x^*(\cdot)$ that should maximize the original objective functional.
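To make the procedure concrete, here is a minimal numerical sketch under an assumed integrand $f(\theta, x) = \theta x - x^2$ (hypothetical, chosen only because it is concave in $x$ with the known pointwise maximizer $x^*(\theta) = \theta/2$). For each $\theta$ on a grid, it maximizes $f(\theta, \cdot)$ over a grid of $x$ values and compares the result to the analytic solution:

```python
import numpy as np

# Hypothetical integrand, concave in x; pointwise maximizer is x*(theta) = theta/2.
def f(theta, x):
    return theta * x - x**2

thetas = np.linspace(0.0, 1.0, 101)   # grid over the support of theta
xs = np.linspace(0.0, 1.0, 2001)      # grid of candidate x values

# Point-wise maximization: for each fixed theta, pick the x maximizing f(theta, x).
F = f(thetas[:, None], xs[None, :])   # shape (101, 2001) via broadcasting
x_star = xs[F.argmax(axis=1)]         # best x for each theta

# Compare the recovered function with the analytic pointwise solution theta/2.
err = np.max(np.abs(x_star - thetas / 2))
print(err)
```

The maximum deviation `err` is bounded by the $x$-grid spacing, so the recovered function matches $x^*(\theta) = \theta/2$ up to discretization error.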
I have two questions related to this:
1) Does such point-wise maximization always work?
2) What happens if, rather than doing point-wise maximization, I take the derivative of the objective functional with respect to x(θ) and set the first-order condition to 0? Is this a legitimate way of solving the problem? Can someone show exactly what such a derivative would look like and how to compute it?
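For what it is worth, here is a sketch of what that derivative looks like, assuming the objective has the integral form $J[x] = \int_0^1 f(\theta, x(\theta))\,d\theta$ above (with $f$ differentiable in its second argument). The standard object is the Gateaux derivative of $J$ in a perturbation direction $h$:

$$\left.\frac{d}{d\varepsilon} J[x + \varepsilon h]\right|_{\varepsilon = 0}
= \int_0^1 f_x\big(\theta, x(\theta)\big)\, h(\theta)\, d\theta,$$

where $f_x$ denotes the partial derivative of $f$ with respect to its second argument. Requiring this to vanish for every perturbation $h$ forces

$$f_x\big(\theta, x(\theta)\big) = 0 \quad \text{for (almost) every } \theta \in [0,1],$$

which is exactly the first-order condition of the point-wise problem $\max_x f(\theta, x)$ at each fixed $\theta$. So for objectives of this separable integral form, the two approaches lead to the same condition; the functional-derivative route would differ, e.g., if the integrand also involved $x'(\theta)$ or coupled different values of $\theta$.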