I am completely new to C and am working my way through a book (The C Programming Language). I was messing around with some of the example code and came across some "strange" default values for arrays.

When I declare an array, I expect all the values to be zero. However, when I print them out, some elements hold large negative values, and I am curious as to why.
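For comparison, this is the behaviour I assumed was the default, i.e. the same as writing an explicit initializer (a minimal example I put together myself; as far as I understand, = {0} does set every element to 0):

Code:
#include <stdio.h>

int main(void)
{
	int i;
	int ndigit[10] = {0};	/* explicit initializer: every element starts at 0 */

	for (i = 0; i < 10; ++i)
		printf("%d: %d\n", i, ndigit[i]);

	return 0;
}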

So the actual code I ran is:

Code:
#include <stdio.h>

int main(void)
{
	int i, ndigit[10];
	
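	/* A: values straight after the declaration, before anything is assigned */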
	printf("A: \n0: %d\n", ndigit[0]);
	printf("1: %d\n", ndigit[1]);
	printf("2d: %d\n", ndigit[2]);
	printf("2f: %f\n", ndigit[2]);
	printf("3: %d\n", ndigit[3]);
	printf("4: %d\n", ndigit[4]);
	printf("5: %d\n", ndigit[5]);
	printf("6: %d\n", ndigit[6]);
	printf("7: %d\n", ndigit[7]);
	printf("8: %d\n", ndigit[8]);
	printf("9: %d\n", ndigit[9]);

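	/* explicitly zero every element */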
	for (i = 0; i < 10; ++i)
	{
		ndigit[i] = 0;
	}
	
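	/* B: values after the zeroing loop */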
	printf("B: \n0: %d\n", ndigit[0]);
	printf("1: %d\n", ndigit[1]);
	printf("2: %d\n", ndigit[2]);
	printf("3: %d\n", ndigit[3]);
	printf("4: %d\n", ndigit[4]);
	printf("5: %d\n", ndigit[5]);
	printf("6: %d\n", ndigit[6]);
	printf("7: %d\n", ndigit[7]);
	printf("8: %d\n", ndigit[8]);
	printf("9: %d\n", ndigit[9]);
	return 0;
}
which outputs:

Code:
A: 
0: 0
1: 0
2d: -1881143876
2f: -0.000000
3: 0
4: 0
5: 0
6: 0
7: 0
8: 0
9: -1881139893
B: 
0: 0
1: 0
2: 0
3: 0
4: 0
5: 0
6: 0
7: 0
8: 0
9: 0
I am obviously expecting the B output to be all zeros, since I explicitly zero every value. But why is A2 printed as -1881143876 with %d yet as -0.000000 with %f? And why does A9 hold a different value from A2?

This was compiled on Mac OS X.

Thanks.