An upper bound for the global error in Euler's method to solve a first order ODE numerically is given by the equation

$\frac{Mh}{2L}({e}^{L({t}_{i+1}-a)}-1),$

where ${t}_{i+1}$ is the $(i+1)$-th abscissa and $a$ is the first abscissa, $M$ is a bound for the second derivative of the ODE unknown $y$, and $L$ is the Lipschitz constant of $f$ with respect to $y$. This is derived in most numerical analysis books.

My question is the following: having $L$ small is good for convergence and stability (right?), in the sense that a small $L$ implies little change. If $L=0$ then the function $f(t,y)$ (on the right-hand side of the ODE) is constant in $y$. If $L<1$ the function is a contraction. So why is $L$ in the denominator of the bound? My intuition (which is not working well here) says that the error bound should shrink as $L$ diminishes.

What is wrong with my intuition here?
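For concreteness, here is a small numerical check of the bound (not from any textbook; the test problem $y' = -y$, $y(0) = 1$ is my own hypothetical choice, for which $L = 1$ and $|y''(t)| = e^{-t} \le 1 = M$ on $[0, 2]$):

```python
import math

# Check the global-error bound M*h/(2L) * (exp(L*(t_i - a)) - 1)
# for Euler's method on the test problem y' = -y, y(a) = 1
# (a hypothetical example chosen for illustration).
# Here f(t, y) = -y, so |f(t, y1) - f(t, y2)| = |y1 - y2| gives L = 1,
# and y''(t) = e^{-t} is bounded by M = 1 on [a, b].

a, b = 0.0, 2.0
h = 0.01
L, M = 1.0, 1.0

y = 1.0                          # initial value y(a)
max_err, max_bound = 0.0, 0.0
n = round((b - a) / h)
for i in range(n):
    y = y + h * (-y)             # Euler step: y_{i+1} = y_i + h*f(t_i, y_i)
    t = a + (i + 1) * h
    err = abs(y - math.exp(-t))  # actual global error at t_{i+1}
    bound = M * h / (2 * L) * (math.exp(L * (t - a)) - 1)
    assert err <= bound          # the bound holds at every grid point
    max_err = max(max_err, err)
    max_bound = max(max_bound, bound)

print(f"max error = {max_err:.6f}, bound at t = b is {max_bound:.6f}")
```

The actual error stays well below the bound on this problem; the bound is a worst case over all ODEs sharing the same $L$ and $M$.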