# Thread: Problem printing pi

1. ## Problem printing pi

First I must say that I'm new to C programming, and second, sorry for my bad English!

My problem is that I'm trying to print more than 1 million digits of pi. I'm using the Gauss-Legendre algorithm, the one used in the program Super_pi. I've been searching Google for some time now, and all I could find is that I need to use some non-standard library! Any help is appreciated! Code is attached.

Code:
```
#include <stdio.h>
#include <math.h>

int main(void)
{
    int i;
    float a = 1;
    float b = a / sqrt(2);
    float t = a / 4;
    float x = a;
    float y;
    float p;

    /* Gauss-Legendre iteration */
    for (i = 1; i <= 19; i++)
    {
        y = a;
        a = (a + b) / 2;
        b = sqrt(b * y);
        t = t - x * (a - y) * (a - y);
        x = 2 * x;
    }

    p = (a + b) * (a + b) / (4 * t);
    printf("%.1000000f", p);

    getchar(); /* getch() is non-standard (needs conio.h); getchar() is portable */
    return 0;
}
```

2. What exactly is your problem? I haven't bothered memorizing a million digits of PI, so you'll want to actually tell us what's wrong. You'll also be interested to know that floating point numbers are not accurate, so you'll likely have problems there too.

Quzah.

3. The problem is that the first 16 digits have proper values (0-9), and then the rest are all zeros.

4. Maybe if you used descriptive variable names, I would be able to figure out what you're doing. But as quzah said, there is no built-in type to store 1 million digits of pi. I'd suggest that you simply print out each digit as you find it, but I don't know if that would work with your algorithm.

5. Originally Posted by King Mir
there is no built-in type to store 1 million digits of pi.
That is my problem. Can this be solved in any way?!

Originally Posted by King Mir
Maybe if you used descriptive variable names
I used the ones from the algorithm; it's easier for me! Here is the original algorithm!

Gauss-Legendre algorithm

1. Initial value setting;

a=1
b=1/sqrt(2)
t=1/4
x=1

2. Repeat the following statements until the difference of a and b is within the desired accuracy;

y=a
a=(a+b)/2
b=sqrt(b*y)
t=t-x*(y-a)^2
x=2*x

3. Pi is approximated with a, b and t as;

PI=(a+b)^2/(4*t)

The algorithm converges quadratically (second order), so to calculate up to n digits, on the order of log2 n iterations are sufficient: e.g. 19 iterations for 1 million decimal digits, 31 iterations for 3.2 billion decimal digits.