I don't know of anything by the name "Newton's Square Root Theory." I
wonder if you might be thinking of the fact that the ancient "divide-
and-average" algorithm for approximating sqrt(a) is in fact just
Newton's method applied to the function f(x) = x^2 - a.
First, here's the divide-and-average algorithm.
Suppose that you wish to calculate the square root of a number A. The
divide-and-average algorithm is:
1. Choose a rough approximation G of sqrt(A).
2. Divide A by G and then average the quotient with G, that is,
calculate:
G* = ((A/G)+G)/2
3. If G* is sufficiently accurate, stop. Otherwise, let G = G* and
return to step 2.
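The three steps above can be sketched in a few lines of Python. The
function name and the tolerance are my own choices, not part of the
algorithm itself:

```python
def divide_and_average(a, guess, tolerance=1e-12):
    """Approximate sqrt(a) by the divide-and-average algorithm."""
    g = guess                            # step 1: a rough approximation
    while True:
        g_new = (a / g + g) / 2          # step 2: divide, then average
        if abs(g_new - g) < tolerance:   # step 3: stop when accurate enough
            return g_new
        g = g_new
```

For example, divide_and_average(2, 1.5) returns sqrt(2) to full
floating-point precision after just a few iterations.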
Here's an example: To calculate the sqrt(2), choose G = 1.5.
G* = (2/1.5 + 1.5)/2 = 1.41666666666
G* = (2/1.41666666666 + 1.41666666666)/2 = 1.41421568628
G* = (2/1.41421568628 + 1.41421568628)/2 = 1.41421356238
G* = (2/1.41421356238 + 1.41421356238)/2 = 1.41421356238
The number of correct decimal places more or less doubles with each
repetition of step 2.
Second, Newton's method is a technique from calculus for finding a
zero of a function. Suppose f has a zero near a; then if we set
x_1 = a and define:
x_{n+1} = x_n - f(x_n)/f'(x_n),   n = 1, 2, 3, ...
in many cases the sequence x_1, x_2, ... will converge to the zero
near a.
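The general iteration can be sketched directly from the formula. Here
I pass in f and its derivative as functions; the names and the fixed
iteration count are my own:

```python
def newton(f, fprime, a, steps=20):
    """Approximate a zero of f near a by Newton's method."""
    x = a                            # x_1 = a
    for _ in range(steps):
        x = x - f(x) / fprime(x)     # x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x
```

Applied to f(x) = x^2 - 2 with f'(x) = 2x, newton(lambda x: x**2 - 2,
lambda x: 2*x, 1.5) converges to sqrt(2).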
It turns out that if f(x) = x^2 - a and we take x_1 = a/2, then
Newton's method is exactly the divide-and-average algorithm. To see
why, substitute f(x) = x^2 - a and f'(x) = 2x into the iteration:

   x_{n+1} = x_n - (x_n^2 - a)/(2 x_n) = ((a/x_n) + x_n)/2

which is precisely the divide-and-average step with G = x_n and
G* = x_{n+1}.
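You can check this equivalence numerically: starting both iterations
from the same guess, the Newton step on f(x) = x^2 - a and the
divide-and-average step produce the same sequence (up to floating-point
rounding). The variable names here are my own:

```python
a = 2.0
x = g = 1.5                        # same starting guess for both
for _ in range(5):
    x = x - (x ** 2 - a) / (2 * x)  # Newton step on f(x) = x^2 - a
    g = (a / g + g) / 2             # divide-and-average step
    assert abs(x - g) < 1e-12       # the updates agree at every step
```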