Are they just figured out using the same mechanism as a linear search, or is the range narrowed down somehow, similar to a binary search?
The implementation of a function such as the natural logarithm in any decent math library will keep the error below an ulp (unit of least precision). The goal of the implementor of a math library function is to find some optimal approximation: one that achieves the desired accuracy with as few calculations as possible. Taylor series are typically a poor choice because far too many terms are needed to reach that accuracy.
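As a rough illustration of just how slow a raw Taylor/Maclaurin series can be (a toy demonstration, not anything a library would actually do): at the edge of its interval of convergence, the series for ln(1+x) needs on the order of a million terms just to get six significant digits.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double target = log(2.0);   /* reference value from libm */
    double sum = 0.0;
    long n = 0;

    /* ln(1+x) = x - x^2/2 + x^3/3 - ...; at x = 1 the error after n
     * terms is roughly 1/(2n), so ~10^6 terms for six digits. */
    do {
        ++n;
        sum += ((n % 2) ? 1.0 : -1.0) / (double)n;
    } while (fabs(sum - target) > 0.5e-6 && n < 10000000L);

    printf("terms needed for ~6 digits of ln(2): %ld\n", n);
    return 0;
}
```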
The typical approach is to reduce the range from all representable real numbers down to some very small interval, and then find an approximation that is accurate to the desired precision over that narrow range. The weapons of choice for this approximation are a polynomial or a rational polynomial (a ratio of two polynomials). The implementation just contains the polynomial coefficients; those coefficients are constructed by some optimizing technique such as the Remez exchange algorithm.
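To make the "just contains the coefficients" point concrete, here is a minimal sketch of what that machinery looks like: a coefficient table plus Horner's rule. The coefficient values below are hypothetical placeholders, not an actual Remez fit, so the number it computes is meaningless; only the code shape matters.

```c
#include <stdio.h>

/* Evaluate c[0] + c[1]*x + ... + c[n-1]*x^(n-1) with Horner's rule. */
static double horner(const double *c, int n, double x)
{
    double r = c[n - 1];              /* highest-order coefficient first */
    for (int i = n - 2; i >= 0; --i)
        r = r * x + c[i];             /* fold in the next coefficient */
    return r;
}

/* A library ships tables like these; the numbers below are hypothetical
 * placeholders standing in for real Remez-fitted coefficients. */
static const double P[] = { 1.0, -0.5, 0.1 };   /* numerator (hypothetical)   */
static const double Q[] = { 1.0,  0.25 };       /* denominator (hypothetical) */

static double rational_approx(double x)
{
    return horner(P, 3, x) / horner(Q, 2, x);
}

int main(void)
{
    /* Demonstrates the evaluation mechanism only; the value is meaningless. */
    printf("%g\n", rational_approx(0.1));
    return 0;
}
```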
In the case of the natural logarithm, there is an easy way to reduce the range. Real numbers are almost universally represented in terms of a mantissa and an exponent: x = m·2^p, where p is an integer and m is between 1 and 2. Thus log(x) = log(m) + p·log(2). The latter term, p·log(2), is just a multiplication by a known constant. The problem thus reduces to finding the logarithm of a number between 1 and 2 (or between 1/2 and 1). A further range reduction can be made by using the fact that √2 is logarithmically in the middle of [1,2): if m lies in [√2,2), then log(m) = ½·log(2) + log(m/√2), and m/√2 falls back into [1,√2). Thus all that is needed is a way to calculate the logarithm of a number between 1 and √2. This is typically done with a rational polynomial; the ratio of a second-order polynomial to a third-order one works quite nicely.
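Putting the pieces together, here is a sketch of the whole scheme in C, assuming a frexp-based mantissa/exponent split. For simplicity it approximates log(m) with a truncated atanh series (whose coefficients are just 1/(2k+1)) instead of the Remez-fitted second/third-order rational polynomial described above, so it is accurate to roughly double precision over the reduced interval but not the sub-ulp quality of a real library, and it omits all the edge cases (zero, negative inputs, infinities, NaNs, subnormals) a real libm must handle.

```c
#include <math.h>    /* frexp, log (for comparison) */
#include <stdio.h>

/* Illustrative sketch only, not production code. */
static double sketch_log(double x)
{
    /* Step 1: split x = m * 2^p with m in [1/2, 1). */
    int p;
    double m = frexp(x, &p);

    /* Step 2: fold m into [1/sqrt(2), sqrt(2)) so the approximation
     * interval is logarithmically centered on 1; adjust p to match. */
    if (m < 0.70710678118654752440) {  /* 1/sqrt(2) */
        m *= 2.0;
        p -= 1;
    }

    /* Step 3: on this narrow interval, z = (m-1)/(m+1) satisfies
     * |z| < 0.172, and log(m) = 2*atanh(z) = 2z(1 + z^2/3 + z^4/5 + ...).
     * A real libm would replace these 1/(2k+1) coefficients with a
     * minimax (Remez) fit needing far fewer terms. */
    double z = (m - 1.0) / (m + 1.0);
    double w = z * z;
    double poly = 1.0 / 19.0;          /* Horner, highest term first */
    poly = poly * w + 1.0 / 17.0;
    poly = poly * w + 1.0 / 15.0;
    poly = poly * w + 1.0 / 13.0;
    poly = poly * w + 1.0 / 11.0;
    poly = poly * w + 1.0 / 9.0;
    poly = poly * w + 1.0 / 7.0;
    poly = poly * w + 1.0 / 5.0;
    poly = poly * w + 1.0 / 3.0;
    poly = poly * w + 1.0;

    /* Step 4: reassemble log(x) = log(m) + p*log(2). */
    return 2.0 * z * poly + (double)p * 0.69314718055994530942; /* ln 2 */
}

int main(void)
{
    const double xs[] = { 0.1, 1.0, 2.0, 10.0, 1e300 };
    for (int i = 0; i < 5; ++i)
        printf("x=%-8g sketch=%.17g libm=%.17g\n",
               xs[i], sketch_log(xs[i]), log(xs[i]));
    return 0;
}
```

The payoff of the [1/√2, √2) fold is visible in the code: it caps |z| at about 0.172, so z² is below 0.03 and each added term buys roughly 1.5 more decimal digits; a minimax rational fit squeezes the same interval into far fewer operations.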