Hello all, how can I determine which kind of CPU is being used in a system?
How should I go about it?
By the way, I'm looking for a portable solution, meaning my source code can be compiled and executed on all platforms.
Thanks in advance.
Uh, well first of all, it's 'endian'. Anyway, the easiest way is to cast the address of an integer to a char pointer and then check for a specific byte pattern, eg:
Code:
inline bool is_little_endian( void )
{
    static size_t const value = 1;
    static bool const result = *reinterpret_cast< char const* >( &value ) == 1;
    return result;
}

inline bool is_big_endian( void )
{
    return !is_little_endian( );
}
Ahh, my bad! Thanks for pointing that out.
Anyway, would you explain that bit of code a little? I'm kind of confused!
I understand that we are taking value's address (which would be an integer like 0x1245, for example) and assigning it to another variable so that we can check its first bit (out of the two bits representing the high and low values for every variable in memory).
Well, I don't know how to access that so-called high portion of a variable, or how it is done in this example of yours!
Well, just to be sure we're on the same page here, 'endian' is a matter of byte (not bit) order. Anyway, the basic idea is this: first we declare some integer type larger than a byte. On most systems, size_t is sufficient. So let's say the variable has a hexadecimal value of 0x12345678 located at the address 0x40000000. Now, if we're running on a little-endian machine, the layout of the variable in memory is going to be:
[0x40000000] 0x78
[0x40000001] 0x56
[0x40000002] 0x34
[0x40000003] 0x12
So if we convert the base address of this 4-byte integer to a char* and dereference it, we will simply be extracting the byte at 0x40000000. If this byte turns out to be 0x78, then we know that the least significant byte came first, hence little-endian.
By the way, I think I have some more questions to ask.
First of all, what's the point in declaring those variables static? Is it important to do so, or is it not crucial?
And why did you use const? Is it just good programming practice, or is it meant to be there?
And how can I, for example, see the third byte of a 4-byte (size_t) variable? We just did it for the first byte; how can we extend this to the other bytes of a variable?
Is the procedure the same, or does it differ completely?
Many thanks in advance.
>> whats the point in declaring those variables static
A static variable is only initialized once, so it's just more efficient to do it that way. Not necessarily crucial, though.
>> and why did you use const ? is it good programming practice
Basically, yes.
>> how can i for example see the third byte of a 4 Byte (size_t) variable ?
Just cast the address of the variable to a char* and then access it like an array, eg: ptr[ 2 ].
You're welcome.
Sorry to bother you, but I have another question; it just crossed my mind!
They say that in emulation, it is vital to know what kind of CPU you are emulating (the target system's CPU) and what CPU you are using for the emulation (your own CPU).
Tell me how this scenario is correct in this case:
an emulator tries to run another system's code;
to do this, it receives the instructions, converts them into machine language, and then
stores them in the emulated RAM (here, an array, e.g. a 2D array),
and then reads from that emulated RAM (array).
So how is it going to be different on different CPUs? I mean, we are the ones who map the opcodes into memory in the correct order, not our CPU!
I'm kind of confused about what this little/big endian business has to do with emulation!
Mostly it is about how data is stored in the "ROM".
Say that you have an instruction that is supposed to be 0x12345678 (4 bytes).
If the ROM stores it in big-endian order and we read it straight into an int on a little-endian machine, we will get the wrong result, because the byte the target meant to be most significant gets treated as the least significant.
So basically you must know what endianness format the target CPU uses and what endianness the processor doing the emulation has, so that you do not screw up the byte order.
Thank you very much.
Another question:
do different CPU manufacturers with different architectures use different endianness formats? Or do they use one specific format for desktops and another one for, say, mainframes?
I mean AMD CPUs vs. Intel's.
They use different architectures, I think, so are they different in endianness too?
All x86 CPUs are little endian. There is a reason for that, of course; otherwise all applications would stop working.
Other architectures do not need to use little endian, however.
Thank you very much, Elysia.
Hmm! Well, it seems my questions today are never-ending!
I'm stuck here; I don't know how to get it to work!
I wrote
Code:
size_t value = 0x12345678;
size_t * integer_pointer = &value;
char * char_pointer = reinterpret_cast<char *>(integer_pointer);
Now what? How am I supposed to dereference it? I'm sure it is not something like
Code:
cout << "the first byte is " << *char_pointer;
I tried to reinterpret it again as (int *) to get the answer, but that fails too:
Code:
cout << "the first Byte is " << *reinterpret_cast< size_t * >(char_pointer);
Again, it doesn't give me 78!
(I'm trying to get to know reinterpret_cast<>() a bit more, and especially to be able to reference the different bytes of a variable through that char pointer; that's why I'm doing it this way. I don't know what else I can do for this kind of stuff!)
And by the way, I encountered a weird issue!
This code almost always gives me the correct answer, but sometimes when I change the value it gives me nonsense! Why? What could be the cause?
Code:
size_t const result = *reinterpret_cast< char const* >(&Variable_X);
Code:
#include <iostream>

int main()
{
    int x = 0x12345678;
    char* p = reinterpret_cast<char*>(&x);
    std::cout << std::hex << "0x" << (int)p[0] << "\n0x" << (int)p[1] << "\n0x" << (int)p[2] << "\n0x" << (int)p[3] << std::endl;
    if (p[0] == 0x78 && p[1] == 0x56 && p[2] == 0x34 && p[3] == 0x12)
        std::cout << "Congratulations! Your machine is little endian!\n";
    else if (p[0] == 0x12 && p[1] == 0x34 && p[2] == 0x56 && p[3] == 0x78)
        std::cout << "Congratulations! Your machine is big endian!\n";
    else
        std::cout << "Weird. Your machine is neither little endian nor big endian.\n";
}
Output:
0x78
0x56
0x34
0x12
Congratulations! Your machine is little endian!
Press any key to continue . . .
Thanks a million, Elysia.
So basically we should use static_cast<>() after converting the pointer,
so that it would give us the value stored in the new string!
I thought our new character pointer actually contains an address!
So even if I convert that into an integer, it is still an address! How does it give me the first value, though?
Suppose we had an address of 0x00000002.
Now we use the following command to convert it to a char pointer:
Code:
int const variable = 0x12345;
char const* char_pointer = reinterpret_cast<char const *>(&variable);
So now our char pointer contains the address of the variable!
I think it should be something like, for example, the following:
char_pointer[0] = 0x00000002;
char_pointer[1] = 0x00000004;
char_pointer[2] = 0x00000006;
char_pointer[3] = 0x00000008;
So in order to get the value stored in a byte, we need to go to the first location, which is stored in char_pointer[0]! But how is that possible without any kind of dereferencing?
Would anyone explain this to me?
I'm trying to understand this reinterpret_cast<>() stuff,
and the latest part really confused me!
I thought we'd need to use reinterpret_cast<> again to be able to see the information we want, but it seems I was wrong.
Thank you all in advance.
Maybe this will make it clearer:
Code:
#include <iostream>

int main( void )
{
    using namespace std;

    typedef unsigned char byte;
    typedef byte*         pbyte;
    typedef void*         pvoid;

    size_t value = 0x12345678;
    pbyte  ptr   = pbyte( &value ),
           seq   = ptr;

    for( size_t index = 0; index < sizeof( value ); ++index, ++seq )
    {
        cout << "[" << pvoid( &ptr[ index ] ) << "] " << pvoid( ptr[ index ] ) << endl;
        // ...same as...
        cout << "[" << pvoid( ptr + index ) << "] " << pvoid( *( ptr + index ) ) << endl;
        // ...same as...
        cout << "[" << pvoid( seq ) << "] " << pvoid( *seq ) << endl;
    }
    return 0;
}
So you see, array indexing *is* dereferencing a pointer.
Also, the reason the cast from char to void* is necessary is simply that when an ostream encounters a char or unsigned char, it prints the character it represents rather than its numeric value.
Thank you, Sebastiani.
Really, thanks!
So the C++ way of doing it would be something like
Code:
//in the name of GOD
//CPU Analyzer
#include <iostream>
using std::cin;
using std::cout;
using std::endl;
using std::dec;
using std::hex;

int main()
{
    size_t value;
    char * char_pointer;
    cout << "Please enter your number (in hex): ";
    cin >> hex >> value;
    char_pointer = reinterpret_cast<char *>(&value);
    cout << "first byte in memory " << reinterpret_cast<void*>(char_pointer[0]);
    return 0;
}
The catch here is that when dereferencing our char pointer, we need to make the compiler aware that we want the value, so we tell it not to convert it to a character. And we do this by using either (void*) or (int*) in reinterpret_cast<>().
That's great.
Thank you so much, guys.
Let me explain in more detail how this works.
Code:
#include <iostream>

int main()
{
    // The memory layout we will be examining.
    int x = 0x12345678;
    // Convert the address to a char* pointer so we can examine it byte-by-byte.
    char* p = reinterpret_cast<char*>(&x);
    // p[0] dereferences the pointer and fetches the byte at the position the char pointer points to.
    // Similarly, p[n] fetches the nth byte from the position of the pointer p.
    // Since it is a char* pointer, dereferencing it gives us a char.
    // Naturally, since char is a character type, std::cout will print the character it represents.
    // To avoid this, we cast it to int to make sure it prints the actual byte value.
    // Coupled with std::hex, this prints the hex representation of the number.
    // This is only necessary when printing.
    std::cout << std::hex << "0x" << (int)p[0] << "\n0x" << (int)p[1] << "\n0x" << (int)p[2] << "\n0x" << (int)p[3] << std::endl;
    // This is the same as above: we dereference the pointer and check its value.
    // Note that we do not need to cast here. A char contains an integer like anything else, but
    // std::cout would interpret it as a character, which is why we must cast when printing.
    if (p[0] == 0x78 && p[1] == 0x56 && p[2] == 0x34 && p[3] == 0x12)
        std::cout << "Congratulations! Your machine is little endian!\n";
    else if (p[0] == 0x12 && p[1] == 0x34 && p[2] == 0x56 && p[3] == 0x78)
        std::cout << "Congratulations! Your machine is big endian!\n";
    else
        std::cout << "Weird. Your machine is neither little endian nor big endian.\n";
}
Do not cast a byte value into a pointer.
Quote:
p[0] dereferences the pointer and fetches the byte at the position the char pointer points to.
Similarly, p[n] fetches the nth byte from the position of the pointer p.
Since it is a char* pointer, dereferencing it gives us a char.
Naturally, since char is a character type, std::cout will print the character it represents.
To avoid this, we cast it to int to make sure it prints the actual byte value.
Coupled with std::hex, this prints the hex representation of the number.
This is only necessary when printing.
Now it's plain as day xD ;)
Thanks a lot, Elysia, really, thanks! Now I fully understand that stuff. :cool: