In particular, if Euler's method is implemented on a computer, what's the minimum step size that can be used before rounding errors cause the Euler approximations to become completely unreliable?
I presume it's when the step size reaches machine epsilon? E.g., if machine epsilon is about 1e-16, then once the step size is roughly 1e-16, the Euler approximations become unreliable.
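To make the presumption concrete, here is a minimal sketch in Python (assuming IEEE double precision, where machine epsilon is about 2.2e-16, and an illustrative ODE y' = y). It shows one concrete failure mode: once the step size h drops below roughly eps times the current time t, the update t + h rounds back to t, so the Euler iteration cannot even advance the time variable.

```python
import sys

def euler_step(f, t, y, h):
    """One explicit Euler step: t_new = t + h, y_new = y + h*f(t, y)."""
    return t + h, y + h * f(t, y)

eps = sys.float_info.epsilon          # ~2.22e-16 for IEEE doubles
print(f"machine epsilon: {eps:.3e}")

# Once t is of order 1, a step h below about eps/2 no longer advances t,
# so "numerical time" freezes even though the loop keeps running.
f = lambda t, y: y                    # illustrative ODE y' = y
t, y = 1.0, 2.718281828459045
for h in (1e-14, 1e-15, 1e-16, 1e-17):
    t_new, _ = euler_step(f, t, y, h)
    print(f"h = {h:.0e}:  t + h == t ? {t_new == t}")
```

On a typical machine this prints False for h = 1e-14 and 1e-15 but True for h = 1e-16 and 1e-17, consistent with the guess that step sizes near machine epsilon are where the method breaks down (for t of order 1).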