Hey everyone. I'm a Computer Engineering student in my first semester, and I'm scratching my head over an issue I'm having.
Two of my current C exercises behave differently depending on where I compile them, even though the code seems right. When I compile them with GCC on the uni's UNIX server over the net, they run perfectly. When I compile them locally at home on Windows with MinGW's GCC, they're faulty.
Here is one of the programs in question:
Code:
#include <stdio.h>
int main(void)
{
    float vathmoi[10], avg, max, sum; /* sum must be float so grades aren't truncated */
    int i, count;

    sum = 0;
    for (i = 0; i <= 9; i++)
    {
        do
        {
            printf("Enter student %d's grade: ", i + 1);
            scanf("%f", &vathmoi[i]);
            /* debug: show the truncated integer and the raw value of grade * 10 */
            printf("%d %f\n", (int)(vathmoi[i] * 10), (vathmoi[i] * 10));
        }
        while ((int)(vathmoi[i] * 10) != (vathmoi[i] * 10) || vathmoi[i] < 0 || vathmoi[i] > 10);
        sum = sum + vathmoi[i];
    }
    avg = sum / 10.0;

    count = 0;
    max = 0;
    for (i = 0; i <= 9; i++)
    {
        if (vathmoi[i] >= avg) count++;          /* students at or above the average */
        if (vathmoi[i] > max) max = vathmoi[i];  /* track the highest grade */
    }

    printf("\n\n%d students are over the average.", count);
    printf("\nMax Grade: %.1f. Students: ", max);
    for (i = 0; i <= 9; i++)
    {
        if (vathmoi[i] == max) printf("%d ", i + 1);
    }
    printf("\n");
    return 0;
}
So, a simple program: it asks for 10 students' grades and only accepts them into the array if they're valid, that is, between 0 and 10 with at most one digit past the decimal point. Since we're still on pretty basic stuff, the "one digit" check is done by multiplying by 10 and comparing the integer part of the result with the float itself. For example, 4.6 * 10 = 46.00000 and the integer part is 46, which matches, so it's accepted; 4.65 * 10 = 46.50000 and the integer part is 46, which doesn't match, so it's denied.
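In case it helps anyone reproduce this, here's the check pulled out into a tiny standalone test (just a stripped-down sketch; the grade is hard-coded and the names are made up, but the check is kept as a single expression like in the program):
Code:
#include <stdio.h>

int main(void)
{
    float grade = 4.6f; /* hard-coded test input */

    /* same validity check as in the exercise */
    if ((int)(grade * 10) != (grade * 10))
        printf("rejected: %d vs %f\n", (int)(grade * 10), grade * 10);
    else
        printf("accepted: %f\n", grade * 10);
    return 0;
}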
And this works perfectly when I use the uni's GCC via an SSH shell. However, if I use C-Free, Code::Blocks, or any other IDE with the MinGW GCC compiler, the small debug line that prints (int)(vathmoi[i] * 10) and (vathmoi[i] * 10) shows that something is completely wrong!
For example, if I give 4.6 as input for a grade, it says that (int)(vathmoi[i] * 10) is 45 because (vathmoi[i] * 10) is 45.999999!
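A small print with extra digits should show what's actually stored in the float (again just a sketch, assuming the usual IEEE 754 single-precision floats):
Code:
#include <stdio.h>

int main(void)
{
    float g = 4.6f;

    /* more digits than the default %f shows */
    printf("stored value: %.9f\n", g);      /* 4.599999905 with IEEE 754 single precision */
    printf("times ten:    %.9f\n", g * 10); /* 46.000000000 or 45.999999046, depending on
                                               how the compiler evaluates float expressions */
    return 0;
}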
Can someone explain why this happens? Is there some basic explanation for why I can't get the program to work right on my computer, while running it over the net on the UNIX server works fine?