What the plus 4 means in the last part
What does the plus 4 mean in the last part? How does it change the number of cells in row i?
Code:
int **arr;
arr = (int **)malloc(...);
for (i = 0; ...)
    arr[i] = (int *)malloc(sizeof(int) + 4);
You should *REALLY* learn to read manuals. Why don't you try searching Google for "man malloc", for example?
I know that if it's *4, then there are 4 cells. What if it's +4?
Read the documentation of malloc, as I said. It tells you what the sole argument of malloc means.
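For reference, malloc's declaration in <stdlib.h> is:
Code:
void *malloc(size_t size);  /* size is a count of bytes, not of array elements */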
I know what it means. I know that +4 means adding 4 bits. How does it change the number of cells in row i?
It does not add 4 bits; it adds 4 bytes. And you get the answer when you think about how many bytes one item in your array needs.
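A minimal sketch of the difference, assuming sizeof(int) is 4 on this machine (common, but not guaranteed by the standard):
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Assuming sizeof(int) == 4 here: */
    int *a = malloc(sizeof(int) + 4);  /* 4 + 4 = 8 bytes  -> room for 2 ints */
    int *b = malloc(sizeof(int) * 4);  /* 4 * 4 = 16 bytes -> room for 4 ints */

    printf("+4 asks for %zu bytes, *4 asks for %zu bytes\n",
           sizeof(int) + 4, sizeof(int) * 4);

    free(a);
    free(b);
    return 0;
}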
An integer is 8 bytes, so it doesn't add any new cells?
There's no fixed size for int. It's usually 4 bytes, but it can be different on other systems. That's why sizeof exists: it's an operator that yields the size (in bytes) of a data type.
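A small demonstration; sizeof works on both types and expressions, and the numbers printed depend on the platform:
Code:
#include <stdio.h>

int main(void)
{
    int x = 0;

    printf("sizeof(char) = %zu\n", sizeof(char));  /* always 1 */
    printf("sizeof(int)  = %zu\n", sizeof(int));   /* often 4, but not guaranteed */
    printf("sizeof x     = %zu\n", sizeof x);      /* same as sizeof(int) */
    return 0;
}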
It's 8 bytes on a 64-bit system and 4 bytes on a 32-bit system.
Quote:
An integer is 8 bytes, so it doesn't add any new cells?
We can't really tell you whether it adds new cells. The call just returns a pointer to an area of memory of the size you specified. It's impossible to tell whether that's what you want, or what you mean by adding cells, because it depends on how you work with that area of memory.
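To make the "cells" idea concrete: the number of ints that fit in a block is its byte count divided by sizeof(int), so if you want cols ints in each row you ask for cols * sizeof(int) bytes. A sketch (rows and cols are just example parameters, not from the original code):
Code:
#include <stdlib.h>

/* Allocate a table with 'rows' rows of 'cols' ints each.
   (Per-row error checking omitted for brevity.) */
int **make_table(size_t rows, size_t cols)
{
    int **arr = malloc(rows * sizeof(int *));
    if (arr == NULL)
        return NULL;

    for (size_t i = 0; i < rows; i++)
        arr[i] = malloc(cols * sizeof(int));  /* cols "cells" in row i */

    return arr;
}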
Quote:
You should *REALLY* learn to read manuals.
Why don't you, by the way? You've been asked this many times, and you seem to make no improvement towards helping yourself.
Actually, int is standardized to 4 bytes. But a pointer to an int may be 4 or 8 bytes (or something else), depending on the machine's architecture.
Not standardized, just common. Most compilers I've used on x86-64, for instance, default to a 32-bit integer type (you must specify "long long" to get 64 bits), but this is not specified by the standard.
Other 64-bit architectures I've worked with default to 64-bit integers. Sometimes the compiler lets you select.
You should not assume anything, especially when somebody can change it via compiler switch.
Doesn't the standard say that int is the natural word size of the processor? In other words, that a pointer and an int should always be the same size, and that a 64-bit compiler should always make a 64-bit int?
That's what I understood, but it sounds like I'm mistaken?
I won't say a word about the standard anymore ;) But that really is not the case. I've done a hell of a lot of porting work to get pointer arithmetic working after we changed from a 32-bit arch to a 64-bit arch (there were pointers cast to ints, offsets calculated on them, and then cast back; we also had ID numbers for things, and those IDs were actually used as memory locations... a mess).
No, I don't believe so. The only things it gives regarding size are the minimum and minimum-maximum values. It defines the "at least" size of each type (i.e., how many bits each must be at minimum). For example, char must be at least 8 bits wide, a short must be at least 16 bits wide, etc.
Quzah.
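Those guaranteed minimums can be seen through <limits.h>; an implementation's actual values must be at least as large as the standard requires:
Code:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("CHAR_BIT = %d\n", CHAR_BIT);   /* at least 8 bits per char */
    printf("INT_MAX  = %d\n", INT_MAX);    /* at least 32767 */
    printf("LONG_MAX = %ld\n", LONG_MAX);  /* at least 2147483647 */
    return 0;
}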
Additionally, most programs use integer values which easily fit into a 16-bit variable, let alone a 32-bit one. To have the integer type default to 64 bits would, in most cases, just be a huge waste of space.
People failing to use the optimal data type isn't the fault of the language, however.
Quzah.
The standard does not say so, no. It gives a minimum size for int and a minimum size for long, but both can be the same size, and both can be smaller than a pointer - this is exactly what you get with an MS compiler for x86-64. On the other hand, Linux x86-32 and -64 compilers make "long" the size of a pointer.
The size of int is determined by several things. It's normally a "natural" size for the processor. On x86-64, the "natural" size is 32 bits, since that gives the shortest code form: using 64-bit integers requires at least a size prefix on the instruction, and setting a 64-bit integer is (sometimes) 4 bytes longer than the 32-bit version of the same instruction. The space used by a 64-bit integer is of course also twice as large, so any array of int would be twice as big, for example. This affects memory and cache performance ("fewer values for the same amount of memory"), so processing can take twice as long even if the instructions themselves take the same time. And for most things, as long as int can hold a few million, everything is fine; making a bigger integer will not improve anything, and it will take up more space.
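A quick way to see what a particular compiler chose; for example, 64-bit Linux (LP64) typically prints 4/8/8 while 64-bit Windows (LLP64) typically prints 4/4/8:
Code:
#include <stdio.h>

int main(void)
{
    printf("int    : %zu bytes\n", sizeof(int));
    printf("long   : %zu bytes\n", sizeof(long));
    printf("void * : %zu bytes\n", sizeof(void *));
    return 0;
}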
I think this is about the 10th time I'm writing this in some thread.
--
Mats