# I am reading a paper and they mention a hump function maximization: I am trying to prove the point of maximization

I am reading a paper and they mention a hump-function maximization, and I am trying to prove the point of maximization:
$m = (1-x)^{1-\sigma} x^{\sigma}$ where $x, \sigma \in [0,1]$.
It is said that $m$ is a hump-shaped function of $x$ maximized at $x = \sigma$, where $\sigma$ is a parameter that is assumed to be fixed here.
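As a quick numerical sanity check (not from the paper; $\sigma = 0.3$ is an assumed example value), evaluating $m$ on a fine grid over $(0,1)$ shows the hump peaking at $x = \sigma$:

```python
def m(x, sigma):
    """The hump function m(x) = (1 - x)**(1 - sigma) * x**sigma."""
    return (1 - x) ** (1 - sigma) * x ** sigma

sigma = 0.3  # assumed example value of the fixed parameter
# Evaluate m on a fine grid over the open interval (0, 1) and locate the argmax.
xs = [i / 10000 for i in range(1, 10000)]
x_star = max(xs, key=lambda x: m(x, sigma))
print(x_star)  # close to sigma = 0.3
```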

My attempt:
First derivative with respect to $x$, set to zero: $(1-x)^{-\sigma} x^{\sigma} + (1-x)^{1-\sigma} x^{\sigma-1} = 0$
$1 + (1-x) x^{-1} = 0$
Second derivative with respect to $x$: $-(1-x)^{-\sigma-1} x^{\sigma} + (1-x)^{-\sigma} x^{\sigma-1} - (1-x)^{-\sigma} x^{\sigma-1} + (1-x)^{1-\sigma} x^{\sigma-2}$
I couldn't get to the given result based on the above. Any clarification would be appreciated!

Gornil2
If $x$ maximizes $m$ then it will also maximize
$\log(m) = (1-\sigma)\log(1-x) + \sigma\log(x)$
because $\log$ is strictly monotonically increasing.
So let us try to find the maximum by setting $\frac{\partial \log(m)}{\partial x} = 0$ and solving for $x$:
$$\begin{aligned}
0 &\overset{!}{=} \frac{\partial \log(m)}{\partial x} = (1-\sigma)\frac{1}{x-1} + \sigma\frac{1}{x} \\
\iff 0 &= (1-\sigma)x + \sigma(x-1) = x - \sigma \\
\iff x &= \sigma
\end{aligned}$$
Now it should be easy to reason that $m$ is maximal at $x = \sigma$: the second derivative $\frac{\partial^2 \log(m)}{\partial x^2} = -\frac{1-\sigma}{(x-1)^2} - \frac{\sigma}{x^2}$ is negative on $(0,1)$, so $\log(m)$ is concave and the critical point $x = \sigma$ is its unique maximum.
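A minimal numerical check of this argument (with an assumed example value $\sigma = 0.7$): the first derivative of $\log(m)$ vanishes at $x = \sigma$, and the second derivative is negative there, confirming a maximum.

```python
sigma = 0.7  # assumed example value of the fixed parameter

def dlogm(x):
    # d/dx log(m) = (1 - sigma)/(x - 1) + sigma/x
    return (1 - sigma) / (x - 1) + sigma / x

def d2logm(x):
    # d^2/dx^2 log(m) = -(1 - sigma)/(x - 1)^2 - sigma/x^2, negative on (0, 1)
    return -(1 - sigma) / (x - 1) ** 2 - sigma / x ** 2

print(dlogm(sigma))   # 0.0: critical point at x = sigma
print(d2logm(sigma))  # negative: log(m) is concave, so x = sigma is the maximum
```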