Gaaljh

Answered

2022-06-25

I have a multivariable cost function $f(\vec{x})$ of a variable vector $\vec{x}$, and I have defined both the function and its gradient with respect to $\vec{x}$. I have been using the nonlinear conjugate gradient descent method for minimization. The algorithm is as follows:

$$
\begin{aligned}
k &= 0\\
x_0 &= 0\\
g_0 &= \nabla_{\vec{x}} f(x_0)\\
\Delta x_0 &= -g_0
\end{aligned}
$$

$$
\begin{aligned}
x_{k+1} &\leftarrow x_k + t\,\Delta x_k\\
g_{k+1} &\leftarrow \nabla f(x_{k+1})\\
\Delta x_{k+1} &\leftarrow -g_{k+1} + \gamma\,\Delta x_k
\end{aligned}
$$

I know that I need to update the formulas so the solution "ascends" instead of descends. How can I update the algorithm? I also wonder how the Wolfe conditions change for maximization problems. Thank you and have a nice day.
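For reference, the descent loop above can be sketched in Python. This is a minimal sketch: the backtracking Armijo line search stands in for a full Wolfe-condition search of the step size $t$, and the Fletcher-Reeves formula for $\gamma$ is one of several common choices.

```python
import numpy as np

def conjugate_gradient_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Nonlinear conjugate gradient (Fletcher-Reeves) for minimizing f.

    f: callable returning a scalar; grad: callable returning the gradient.
    The step size t is chosen by backtracking on the Armijo
    (sufficient-decrease) condition, a stand-in for a full Wolfe search.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g  # safeguard: restart with steepest descent
        # Backtracking line search on the Armijo condition.
        t, c1, rho = 1.0, 1e-4, 0.5
        while f(x + t * d) > f(x) + c1 * t * g.dot(d) and t > 1e-12:
            t *= rho
        x_new = x + t * d
        g_new = grad(x_new)
        # Fletcher-Reeves coefficient gamma.
        gamma = g_new.dot(g_new) / g.dot(g)
        d = -g_new + gamma * d
        x, g = x_new, g_new
    return x
```

For example, minimizing $f(x) = (x_1 - 1)^2 + (x_2 + 2)^2$ from the origin converges to $(1, -2)$.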


Answer & Explanation

Myla Pierce

Expert

2022-06-26 · Added 20 answers

Convert your maximization problem to a minimization problem by setting ${f}_{new}(x)=-f(x)$ and run the default conjugate gradient method for minimization.
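A minimal sketch of this negation trick. The quadratic objective and the plain gradient-descent inner loop here are illustrative stand-ins; any minimizer, including your conjugate gradient code, can be run unchanged on $f_{new}$ and $-\nabla f$.

```python
import numpy as np

# Illustrative concave objective to maximize: f(x) = 10 - ||x - a||^2,
# which has its maximum at x = a.
a = np.array([1.0, -2.0])
f = lambda x: 10.0 - np.sum((x - a) ** 2)
grad_f = lambda x: -2.0 * (x - a)

# Negate to obtain a minimization problem; any descent method applies unchanged.
f_new = lambda x: -f(x)
grad_new = lambda x: -grad_f(x)

# Plain gradient descent on f_new stands in for the conjugate gradient loop.
x = np.zeros(2)
for _ in range(200):
    x = x - 0.1 * grad_new(x)
# x now approximates the maximizer a of the original f.
```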

Since you specifically asked how the formulas and Wolfe conditions update, you can plug in $-f(x)$ for the function values and $-\mathrm{\nabla}f(x)$ for the gradients and see for yourself.
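Carrying out that substitution in the standard (descent) Wolfe conditions, with $\Delta x_k$ now an ascent direction (so $\nabla f(x_k)^T \Delta x_k > 0$), the inequalities flip into a sufficient-increase form:

$$
\begin{aligned}
f(x_k + t\,\Delta x_k) &\ge f(x_k) + c_1\, t\, \nabla f(x_k)^T \Delta x_k,\\
\nabla f(x_k + t\,\Delta x_k)^T \Delta x_k &\le c_2\, \nabla f(x_k)^T \Delta x_k,
\end{aligned}
\qquad 0 < c_1 < c_2 < 1.
$$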
