How do computers calculate the log of a value?
I'm not sure whether this question belongs on Stack Overflow or here (please let me know if it's the former, and I'll delete this and ask there), but I was wondering how the log or ln of a value is computed accurately. Is some series implemented that approximates the value?
I looked into the Taylor series for the natural logarithm, but it apparently converges only for 0 &lt; x ≤ 2, and I can't find anything else. I also tried looking for the source code of Java's Math#log to see what algorithm it implements, but couldn't find it, since it's implemented natively rather than in Java.
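For what it's worth, one standard way around the convergence limit is range reduction: write x = m · 2^e with m in [1, 2), so ln(x) = ln(m) + e·ln(2), and then apply a series that converges fast on that small interval. Here is a rough sketch in Python of that idea, using the atanh-style series ln(m) = 2(t + t³/3 + t⁵/5 + …) with t = (m−1)/(m+1); this is only an illustration of the technique, not a claim about what any particular libm (or Java's Math.log) actually does:

```python
import math

LN2 = 0.6931471805599453  # ln(2) as a double

def my_ln(x):
    """Sketch of log via range reduction + series; illustrative only."""
    if x <= 0:
        raise ValueError("ln undefined for x <= 0")
    # frexp gives x = m * 2**e with m in [0.5, 1); shift m into [1, 2)
    m, e = math.frexp(x)
    m *= 2.0
    e -= 1
    # With t = (m-1)/(m+1) in [0, 1/3), the series converges quickly
    t = (m - 1.0) / (m + 1.0)
    t2 = t * t
    term = t
    s = 0.0
    k = 1
    while abs(term) > 1e-17:  # stop once terms are below double precision
        s += term / k         # adds t**k / k for odd k
        term *= t2
        k += 2
    return 2.0 * s + e * LN2  # ln(x) = ln(m) + e*ln(2)
```

Real implementations refine this with polynomial approximations tuned to the reduced interval (minimax rather than Taylor coefficients) and careful handling of rounding, but the reduce-then-approximate structure is the common thread.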