I have some code similar to the following. I have highlighted the expression in question.
Code:
#include <stdio.h>

#define ADDRESS 0x400

struct sType1
{
    unsigned char c;
    unsigned int i;
} Object1 = {5, 0};

struct sType2
{
    unsigned char a;
    unsigned char b;
    unsigned char d[50];
    unsigned int e;
} Object2;

int foo(void)
{
    unsigned int addr1, addr2 = ADDRESS + (sizeof(Object2) * (Object1.c - 1));
    addr1 = Object1.c - 1; /* implicit promotion of the char */
    addr1 *= sizeof(Object2);
    addr1 += ADDRESS;
    return addr1 == addr2;
}

int main(void)
{
    printf("foo() = %d\n", foo());
    return 0;
}
The problem I have encountered is that the compiler generates a csmul instruction, which I take to mean a signed-character multiply. In this example, it assigns addr2 a value of 0x3D8. The correct value should be 0x4D8, and that is what gets calculated (for the values of Object1.c I've tested) when the expression is broken into smaller, simpler steps, as in foo() above.
In stddef.h, size_t is typedef'd as unsigned int, and the platform has 8-bit bytes and 2-byte ints. The question I have is whether or not the compiler is incorrect here.
Code:
unsigned int addr2 = ADDRESS + (sizeof(Object2) * (Object1.c - 1));
I would have expected the result of the highlighted subexpression, (Object1.c - 1), to be an int. But even if it were an unsigned char, multiplying it by the size_t result of sizeof should promote it to unsigned int(?). Or, since the sizeof result is a constant in this expression, does the compiler have some leeway in this kind of expression?
If the compiler is correct, I would welcome an explanation from those who understand the "usual arithmetic conversions" better than I can glean from the standard.