
Strange... it can't convert INT_MIN but it can convert INT_MIN+1... also it can convert INT_MAX.
Code:
char* myitoa(int number,char* buffer,int radix){
    int n,num;
    bool negative=false;
    char temp[32];
    radix%=37;
    if(buffer==NULL){
        return NULL;
    }
    if(number==0){
        buffer[0]='0';
        buffer[1]='\0';
        return buffer;
    }
    if(number<0){
        if(radix==10){ negative=true; }
        number*=-1;
    }
    for(n=0;number>0;n++){
        num=number%radix;
        temp[n]=num+0x30;
        if(num>9){
            temp[n]+=7;
        }
        number/=radix;
    }
    if(!n){ temp[n]='\0';n++; }
    if(negative){
        temp[n]='-';
        n++;
    }
    for(int i=0;i<n;i++){
        buffer[n-1-i]=temp[i];
    }
    buffer[n]='\0';
    return buffer;
}

This is precisely why I think C++0x should include rudimentary support for multiple precision arithmetic.

>Strange... it can't convert INT_MIN but it can convert INT_MIN+1... also it can convert INT_MAX.
Think about it for a moment. On your machine, INT_MAX is probably 2,147,483,647 and INT_MIN is -2,147,483,648. On a two's complement machine, the smallest negative number is one greater in magnitude than the largest positive number. It's easy to just remove the sign and work with numbers from -2,147,483,647 to 2,147,483,647, but that last negative value won't play well because its magnitude can't be represented using a signed integer type.
One viable solution is to use an unsigned variable as a temporary. Since you assume unsigned except in one special case anyway, and that special case has no need to use the number in a signed sense, it's a quick and easy fix. Off the top of my head, your code could be changed like so:
Code:
char* myitoa(int number,char* buffer,int radix){
    unsigned x = number;  /* bit pattern of number; well-defined even for INT_MIN */
    int n,num;
    bool negative=false;
    char temp[32];
    radix%=37;
    if(buffer==NULL){
        return NULL;
    }
    if(number==0){
        buffer[0]='0';
        buffer[1]='\0';
        return buffer;
    }
    if(number<0){
        if(radix==10){
            negative=true;
            x = -x;  /* unsigned negation gives the magnitude, even for INT_MIN */
        }
    }
    for(n=0;x>0;n++){
        num=x%radix;
        temp[n]=num+0x30;
        if(num>9){
            temp[n]+=7;
        }
        x/=radix;
    }
    if(!n){ temp[n]='\0';n++; }
    if(negative){
        temp[n]='-';
        n++;
    }
    for(int i=0;i<n;i++){
        buffer[n-1-i]=temp[i];
    }
    buffer[n]='\0';
    return buffer;
}

You should assert() that buffer is not null, to catch such errors in debug builds while not losing performance in release builds.
In my opinion, you should also require that the buffer size be passed, and check that you're not overflowing. This is the idea behind MS's new *_s functions, and while I hate their execution, the idea is sound.
You might say that it's easy to pass a buffer that's long enough because the size of a converted integer is known at compile time. But what if the radix is a runtime value? What if the program needs to be ported from, say, 16-bit ints to 32-bit ints? Will the buffer still be large enough?
The way negatives are handled, being ignored in any base but 10, I find counterintuitive and seriously weird, especially as your function does not even output the real bit pattern in that case, but instead what the bit pattern would be if the parameter were not negative. I'd get rid of that special case and always handle negatives, simply by prepending a minus sign.
The internal temp buffer would be too small on systems with more than 32 bits in int. True, these are probably rare, but you never know. (Of course, on such platforms your ASCII-bound digit generation will probably fail, too.) Also, if you handle negatives consistently, it is too small now.
You should calculate its size like so:
Code:
#include <climits>
...
char temp[sizeof(int) * CHAR_BIT + 1];