Gaaljh

2022-06-25

I have a multivariable function for which I have defined the cost function and its gradient with respect to the variable vector. Let the cost function be $f(\vec{x})$ and the variable vector $\vec{x}$. I have been using the nonlinear conjugate gradient descent method for minimization. The algorithm is as follows:
$$
k = 0 \\
x_0 = 0 \\
g_0 = \nabla_{\vec{x}} f(x_0) \\
\Delta x_0 = -g_0
$$
$$
x_{k+1} \leftarrow x_k + t\,\Delta x_k \\
g_{k+1} \leftarrow \nabla f(x_{k+1}) \\
\Delta x_{k+1} \leftarrow -g_{k+1} + \gamma\,\Delta x_k
$$
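For concreteness, the minimization loop above can be sketched in Python. This is a minimal sketch, not my exact code: it assumes the Fletcher–Reeves choice of $\gamma$ and a simple Armijo backtracking line search for the step size $t$ (neither of which is fixed by the pseudocode above).

```python
import numpy as np

def cg_minimize(f, grad, x0, iters=200, tol=1e-8):
    """Nonlinear conjugate gradient (Fletcher-Reeves) - minimal sketch."""
    x = x0.astype(float)
    g = grad(x)
    d = -g  # initial direction: steepest descent
    for _ in range(iters):
        if g.dot(d) >= 0:
            d = -g  # restart if d is not a descent direction
        # Armijo backtracking line search for the step size t.
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        gamma = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves gamma
        d = -g_new + gamma * d  # conjugate direction update
        g = g_new
    return x
```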
I know that I need to update the formulas so that the solution "ascends" instead of descending. How can I update the algorithm? I also wonder how the Wolfe conditions change for maximization problems. Thank you and have a nice day.

Myla Pierce

Expert

Convert your maximization problem to a minimization problem by setting $f_{\text{new}}(x) = -f(x)$ and run the standard conjugate gradient method for minimization.

Since you specifically asked how the formulas and Wolfe conditions update, you can plug in $-f(x)$ for the function values and $-\nabla f(x)$ for the gradients and see for yourself.
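In practice the negation trick is how off-the-shelf minimizers are used for maximization. A sketch with SciPy's conjugate gradient solver, on an illustrative concave function (the function and starting point are my own example, not from your question):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Example concave objective with its maximum at (1, 2).
    return -(x[0] - 1.0)**2 - (x[1] - 2.0)**2

def grad_f(x):
    return np.array([-2.0 * (x[0] - 1.0), -2.0 * (x[1] - 2.0)])

# Negate both the function and its gradient, then minimize.
neg_f = lambda x: -f(x)
neg_grad = lambda x: -grad_f(x)

result = minimize(neg_f, x0=np.zeros(2), jac=neg_grad, method="CG")
x_max = result.x     # maximizer of the original f
f_max = -result.fun  # maximum value of the original f
```

Note that both the objective and the gradient must be negated; negating only one of them gives the solver inconsistent information and it will fail to converge.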
