Is there any reason to use say, a "GLfloat" vs. just a normal "float"? So far I've just been using standard C/C++ types, but realized that there is probably a reason for OpenGL types. Which is best, when, and why?
OpenGL types are guaranteed to be a certain size; native types aren't.
For example, a GLuint will be 4 bytes on every platform, while the size of an unsigned int can vary from compiler to compiler.
Thanks for the clarification.
Because OpenGL is cross-platform, it keeps its types a fixed size so code behaves the same on every system, just like Perspective said.