You're right, I probably should explain some of the context of the problem. The function is the constructor for a two-dimensional texture in an OpenGL utility library.
The library uses a whole lot of arbitrarily #define-d integer constants as flags.
func( type_a arg_a, type_b arg_b, int arg_c, type_d arg_d, type_e arg_e, type_f arg_f, int arg_g=8 );
//type_a = NULL, SDL_Surface*, float*, double*, unsigned char*, char*, std::string, std::string*
//type_a is the data field. One ought to be able to pass NULL to it (or omit it
//entirely, with the same result). SDL_Surface* points to a data surface, while
//float*, double*, and unsigned char* point to actual pixel data. char*, std::string,
//and std::string* are filenames the data can be loaded from.
//type_b = int, int*
//This is the rect argument, which specifies how the rectangular data is to be
//interpreted. It can either be a flag (int) or a pointer to a 4-element array
//defining the rectangle explicitly.
//type_c = int
//A format flag, specifying how OpenGL should use the data
//type_d = int, bool
//This is the minification filter argument. It can either be a flag (int) or a bool.
//Because the flags are arbitrarily defined, I'm hesitant to just cast the bool to
//an int. Although I could probably ensure that 0 and 1 are not meaningful flag
//values in the context of this function, it feels like a source of trouble later
//(see the second sketch below this block).
//type_e = int, bool
//Same as arg_d, but for the magnification filter.
//type_f = NULL, int*
//The colorkey argument. Can be NULL (or omitted entirely, with the same result). If
//it is included, the int* points to an array defining a four-component color.
//type_g = int
//The bit precision of the texture OpenGL should create
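To make the shape of the problem concrete, here is a minimal sketch of what a few of these alternatives look like as explicit overloads. The class name, flag names, and flag values are hypothetical stand-ins for the library's actual #defines, and SDL_Surface is only forward-declared so the sketch stands on its own:

#include <cstddef>
#include <string>

struct SDL_Surface;              //forward declaration; only pointers are used here

//Hypothetical flags -- the real library #defines its own arbitrary values
#define TEX_RECT_WHOLE   0x10    //rect flag: treat the data as one whole rectangle
#define TEX_FORMAT_RGBA  0x20    //format flag

class Texture2D {
public:
    //data omitted entirely: an empty texture
    Texture2D( int rect, int format, int min_filter, int mag_filter,
               int* colorkey = NULL, int bits = 8 );

    //data from an SDL surface, rect given as a flag
    Texture2D( SDL_Surface* data, int rect, int format, int min_filter,
               int mag_filter, int* colorkey = NULL, int bits = 8 );

    //data from an SDL surface, rect given as a 4-element array
    Texture2D( SDL_Surface* data, int* rect, int format, int min_filter,
               int mag_filter, int* colorkey = NULL, int bits = 8 );

    //data loaded from a file named by a std::string
    Texture2D( const std::string& filename, int rect, int format, int min_filter,
               int mag_filter, int* colorkey = NULL, int bits = 8 );

    //...and so on for float*, double*, unsigned char*, char*, and std::string*,
    //times the int/bool variants of the two filter arguments
};

Written out naively, every combination of data type, rect type, and filter type needs its own overload, so the set multiplies out quickly; a call would look something like Texture2D tex( surf, TEX_RECT_WHOLE, TEX_FORMAT_RGBA, true, true );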
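And here is the bool-to-int worry in concrete form: bool converts implicitly to int, so if the filter parameter were a plain int and some flag happened to have the value 0 or 1, a bool call and a flag call would be indistinguishable. The flag values below are hypothetical; the real ones are arbitrary, which is exactly the problem:

//Hypothetical values -- the real #defines could just as well land on 0 and 1
#define TEX_FILTER_NEAREST 0
#define TEX_FILTER_LINEAR  1

void set_min_filter( int flag ) { /*...*/ }   //single int parameter, bools just cast

int main()
{
    set_min_filter( true );                //silently becomes set_min_filter( 1 )
    set_min_filter( TEX_FILTER_LINEAR );   //the exact same call after conversion
    return 0;
}

Keeping bool as a genuinely separate overload avoids this, since an exact bool match beats the implicit conversion to int, but it also doubles the number of overloads for each filter argument.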