Hi,

I'm working with a very large number of doubles, each with a value between 0 and around 10. I'm not sure of the best way to work around floating-point precision so that I can take an accurate average over all of the values. I was thinking of sorting the values and then breaking them up, but I don't know a good way to estimate roughly how large the running sum will get before my computer spits out inf.

This is what I've tried so far:

sSize can hopefully be around 10,001 and is the same as 1 + the number of points generated. When I exceed 64 bits, keeping accuracy only to the 1000th's, I get 00000000 for everything. I was thinking perhaps of chunking every 100 or 1000 values or so into an array and then averaging those by dividing each entry by the total number of entries, but I feel like there is a more portable and flexible solution.

Code:
/* dComp is my comparator for two doubles */
int (*func)(const void *, const void *) = &dComp;
qsort(moment, sSize, sizeof(double), func);

/* accumulate a fixed-point (hundredths) integer sum */
unsigned long long int sum = 0;
for (i = 0; i < sSize; i++) {
    sum = sum + (unsigned long long)rint(100 * moment[i]);
}
double fSum = ((double)sum) / 100.0;  /* total of all values */
double fAvg = fSum / sSize;           /* the average itself */