
An upper bound for the global error of Euler's method for numerically solving a first-order ODE is given by
$\frac{Mh}{2L}\left({e}^{L\left({t}_{i+1}-a\right)}-1\right),$
where ${t}_{i+1}$ is the $(i+1)$-th abscissa and $a$ is the first abscissa, $M$ is a bound for the second derivative of the ODE's unknown $y$, and $L$ is the Lipschitz constant of $f$. This is derived in most numerical analysis books.
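To make the bound concrete, here is a small sketch (my own example, not from the question) checking it against the actual Euler error for $y' = y$, $y(0)=1$ on $[0,1]$, where $f(t,y)=y$ gives $L=1$ and $|y''| = e^t \le e = M$:

```python
import math

# Assumed illustrative problem: y' = y, y(0) = 1 on [0, 1], exact solution e^t.
# Lipschitz constant L = 1; M = e bounds |y''| on [0, 1].
L, M, a, b, y0 = 1.0, math.e, 0.0, 1.0, 1.0
n = 100
h = (b - a) / n

t, y = a, y0
worst_ratio = 0.0
for i in range(n):
    y += h * y          # Euler step: y_{i+1} = y_i + h * f(t_i, y_i)
    t += h
    err = abs(y - math.exp(t))                         # actual global error
    bound = M * h / (2 * L) * (math.exp(L * (t - a)) - 1)
    assert err <= bound                                # the bound holds at every step
    worst_ratio = max(worst_ratio, err / bound)

print(f"max error/bound ratio over [0,1]: {worst_ratio:.3f}")
```

The actual error stays below the bound at every step, but the ratio never gets close to 1, which is consistent with the answer's remark that the bound is quite pessimistic in practice.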

My question is the following: Having $L$ small is good for convergence and stability (right?) in the sense that a small $L$ implies little change. If $L=0$ then the function $f\left(t,y\right)$ (on the right-hand side of the ODE) is constant in $y$. If $L<1$ the function is contracting. So why is $L$ in the denominator of the bound? My intuition (which is not serving me well here) says that the error bound should diminish as $L$ diminishes.

What is wrong with my intuition here?
broeifl
First, this bound is not a good estimate. It is useful for showing that the error is bounded, but in practice it is very pessimistic. Second, the bound does not explode as $L\to 0$. You can expand $\frac{1}{L}\left({e}^{L\left({t}_{i+1}-a\right)}-1\right)=\frac{1}{L}\left(1+L\left({t}_{i+1}-a\right)+O\left({L}^{2}\right)-1\right)={t}_{i+1}-a+O\left(L\right)$. So you see that the bound diminishes as $L$ tends to 0, as your intuition expects.
Alisa Taylor
I didn't think it would work out like that, thanks