# Thread: sqrt and pow functions

1. ## sqrt and pow functions

I'd like to know what algorithms are used in the C implementations of these functions (sqrt and pow; actually I'm only interested in pow for calculating nth roots, i.e. with exponents 1/n). I tried to find the source files, which I thought must be somewhere on my Linux system, but I couldn't find them; I only found math.h.
I know that Intel provides an assembly instruction, fsqrt, so maybe C doesn't even have anything to do with the algorithm and just calls that instruction?

2. It wouldn't surprise me if assembly instructions were used where available. You would be looking for the source of the C library, glibc -- the sources are not installed by default IIRC (just the binaries), but if you've installed the source package then they're somewhere (I don't remember where at the moment). You can also find them online, I suppose.

3. It doesn't look like glibc uses asm (at least not in the sqrt() function). Here is their code:
Code:
```c
/*********************************************************************/
/* An ultimate sqrt routine. Given an IEEE double machine number x   */
/* it computes the correctly rounded (to nearest) value of square    */
/* root of x.                                                        */
/*********************************************************************/
double __ieee754_sqrt(double x) {
#include "uroot.h"
  static const double
    rt0 = 9.99999999859990725855365213134618E-01,
    rt1 = 4.99999999495955425917856814202739E-01,
    rt2 = 3.75017500867345182581453026130850E-01,
    rt3 = 3.12523626554518656309172508769531E-01;
  static const double big = 134217728.0;
  double y,t,del,res,res1,hy,z,zz,p,hx,tx,ty,s;
  mynumber a,c={{0,0}};
  int4 k;

  a.x=x;
  k=a.i[HIGH_HALF];
  a.i[HIGH_HALF]=(k&0x001fffff)|0x3fe00000;
  t=inroot[(k&0x001fffff)>>14];
  s=a.x;
  /*----------------- 2^-1022  <= | x |< 2^1024  -----------------*/
  if (k>0x000fffff && k<0x7ff00000) {
    y=1.0-t*(t*s);
    t=t*(rt0+y*(rt1+y*(rt2+y*rt3)));
    c.i[HIGH_HALF]=0x20000000+((k&0x7fe00000)>>1);
    y=t*s;
    hy=(y+big)-big;
    del=0.5*t*((s-hy*hy)-(y-hy)*(y+hy));
    res=y+del;
    if (res == (res+1.002*((y-res)+del))) return res*c.x;
    else {
      res1=res+1.5*((y-res)+del);
      EMULV(res,res1,z,zz,p,hx,tx,hy,ty);  /* (z+zz)=res*res1 */
      return ((((z-s)+zz)<0)?max(res,res1):min(res,res1))*c.x;
    }
  }
  else {
    if ((k & 0x7ff00000) == 0x7ff00000)
      return x*x+x; /* sqrt(NaN)=NaN, sqrt(+inf)=+inf, sqrt(-inf)=sNaN */
    if (x==0) return x; /* sqrt(+0)=+0, sqrt(-0)=-0 */
    if (k<0) return (x-x)/(x-x); /* sqrt(-ve)=sNaN */
    return tm256.x*__ieee754_sqrt(x*t512.x);
  }
}
```

5. That's complicated...
Could you please tell me whether the basic algorithm behind it is Newton's method, or whether it is something else, which would mean that the C developers are cleverer than Newton, which is frightening ;-)

6. Originally Posted by 0xa4
That's complicated...
Could you please tell me whether the basic algorithm behind it is Newton's method, or whether it is something else, which would mean that the C developers are cleverer than Newton, which is frightening ;-)
It is a combination of Newton's method with some bit trickery that takes advantage of the IEEE floating-point representation. And it's not really hard to be cleverer than Newton. The guy was a genius, but he lived over 300 years ago.

When compiling with optimization, I know for certain that gcc just emits a call to the hardware fsqrt instruction.

I am not exactly sure how pow() is implemented, but remember that:

pow(x, n) = exp(n * log(x))

So if you have good exp() and log() implementations you can create a pow().

7. Thanks for clarifying the point about optimization; the information on pow also helps a lot.

btw. I also think that people have gotten cleverer in the last 300 years, but apparently Newton's method (or its core) still runs on modern computers. Only when it becomes obsolete can we say for sure that we're cleverer now (or rather, that the inventor of its replacement is).