In particular, if Euler's method is implemented on a computer, what's the minimum step size that can be used before rounding errors cause the Euler approximations to become completely unreliable? I presume it's when the step size reaches the machine epsilon? E.g., if machine epsilon is about 1e-16, then once the step size is roughly 1e-16, the Euler approximations are unreliable.
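(For reference, a quick way to check the machine epsilon mentioned above — the gap between 1.0 and the next representable double — is:)

```python
import sys

# IEEE-754 double precision machine epsilon: the spacing between
# 1.0 and the next larger representable float.
eps = sys.float_info.epsilon
print(eps)  # 2.220446049250313e-16
```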

escobamesmo
It depends on the problem. Basically, you have two sources of error in each step: the rounding error, which is some multiple of the machine constant $\mu =2\cdot {10}^{-16}$ (scaled by the scale of the functions involved), and the local truncation error, which depends on derivatives of the ODE function (so it also involves the function scale) and on the square ${h}^{2}$ of the step size. You want the truncation error to dominate the rounding error, which in the simplest case demands ${h}^{2}>\mu$, i.e. $h>\sqrt{\mu }\sim {10}^{-8}$. This changes if the ODE function is in some sense "strongly curved", i.e. if the derivatives entering the truncation error are large.
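A quick experiment (my own sketch, not part of the answer) makes the trade-off visible without waiting for $10^{8}$ steps: run Euler on $y'=y$ in single precision, where $\mu \approx 1.2\cdot 10^{-7}$ and so $\sqrt{\mu }\approx 3\cdot 10^{-4}$. The error should shrink roughly linearly in $h$ until $h$ approaches that scale, after which rounding stops further improvement.

```python
import numpy as np

def euler(f, y0, t_end, h, dtype=np.float64):
    """Fixed-step forward Euler for y' = f(t, y), y(0) = y0."""
    y = dtype(y0)
    t = dtype(0.0)
    h = dtype(h)
    for _ in range(int(round(t_end / float(h)))):
        y = y + h * f(t, y)
        t = t + h
    return float(y)

# y' = y, y(0) = 1, exact value y(1) = e.  In float32 the machine
# epsilon is ~1.2e-7, so sqrt(mu) ~ 3e-4: the error falls roughly
# linearly in h at first, then stalls or grows again once rounding
# errors dominate the truncation error.
exact = np.exp(1.0)
for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    err = abs(euler(lambda t, y: y, 1.0, 1.0, h, dtype=np.float32) - exact)
    print(f"h = {h:g}:  |error| = {err:.2e}")
```

The same experiment in `float64` pushes the crossover down toward $h\sim 10^{-8}$, matching the estimate above, but needs far more steps to reach it.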