Hi all,
I'm currently looking at different structs that can represent an IP header, and I found the following two approaches:
Code:
/*
 * From /usr/include/netinet/in.h
 */
struct iphdr {
#if __BYTE_ORDER == __LITTLE_ENDIAN
    unsigned int ihl:4;
    unsigned int version:4;
#elif __BYTE_ORDER == __BIG_ENDIAN
    unsigned int version:4;
    unsigned int ihl:4;
#else
# error "Please fix <bits/endian.h>"
#endif
    ...
    ...
};

/*
 * From "ip.h" of the tcpdump source
 */
struct ip {
    u_int8_t ip_vhl;        /* header length, version */
#define IP_V(ip)  (((ip)->ip_vhl & 0xf0) >> 4)
#define IP_HL(ip) ((ip)->ip_vhl & 0x0f)
    ...
    ...
};
As one can see, the first version uses bitfields to access the IP version and the IP header length, and it also seems to care about big/little endian.
The second uses a u_int8_t to access the byte that holds both the version and the header length, and separates the two with a bitmask ('&') and a shift.
The second one makes sense to me: it doesn't care about byte order, and why should it? But why does the first version have to care about byte order? Isn't it true that the first four bits are the IP version and the next four bits are the IP header length anyway?
Rafael