Doubles are not simply twice the decimal range of floats; their main purpose is to extend the precision (that is, the number of significant digits that can be represented) of the real number representation. The absolute maximum and minimum values of a double do differ from those of a float, but the intent is chiefly to increase the precision with which values can be expressed.

Here's an example program that demonstrates this:
Code:
#include <stdio.h>
int main(void)
{
  float fl = 2.0 / 3.0;   /* double result of 2.0/3.0 narrowed to float precision */
  double db = 2.0 / 3.0;  /* same result kept at full double precision */

  printf("2/3 as float: %1.15f, double: %1.15f\n", fl, db);
  return 0;
}
When I compile and run this (on a Mac mini, though that doesn't really matter), I get the following:
Code:
MiniMac:~ filker0$ cc -o float float.c
MiniMac:~ filker0$ ./float 
2/3 as float: 0.666666686534882, double: 0.666666666666667
The header file /usr/include/float.h (or wherever your compiler keeps its standard headers) defines the limits for each floating-point representation your compiler supports.
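
For instance, here's a quick sketch that prints a few of those limits using the standard <float.h> macros (nothing compiler-specific). FLT_DIG and DBL_DIG give the number of decimal digits each type is guaranteed to preserve, which is why the float result above starts to go wrong after about seven digits:
Code:
#include <stdio.h>
#include <float.h>

int main(void)
{
  /* decimal digits each type is guaranteed to preserve */
  printf("FLT_DIG: %d, DBL_DIG: %d\n", FLT_DIG, DBL_DIG);
  /* largest finite value of each type */
  printf("FLT_MAX: %e, DBL_MAX: %e\n", FLT_MAX, DBL_MAX);
  /* smallest normalized positive value of each type */
  printf("FLT_MIN: %e, DBL_MIN: %e\n", FLT_MIN, DBL_MIN);
  return 0;
}
On a typical IEEE 754 machine this reports FLT_DIG as 6 and DBL_DIG as 15, which lines up with where the float output above diverges from 2/3.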