I've been working on a really nice C serializer that's just about done. I slightly reinvented the wheel because I didn't like the existing options, and they were missing some features I needed.
Anyway, what I'm trying to wrap my head around now is handling cases where the size of a type on the source (serializing) machine is larger or smaller than the same type on another machine. And actually, I can't really think of a case where this could be a problem, because of the way I've implemented the interfaces: I use only the standard fixed-width types (uint8_t, uint16_t, and so on), which in theory should guarantee that each type is exactly n bits, right?
If you wouldn't mind taking a look at some code, that would be awesome.
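To make the question concrete, here's a minimal sketch of the kind of writer/reader pair I mean. The gw_ names and the fixed little-endian byte order are just illustrative, not my actual interface; the point is that emitting each field one byte at a time keeps the stream layout independent of the host's type sizes and endianness.

#include <stdint.h>

/* Write a 16-bit value into the stream one byte at a time, in a fixed
   (little-endian) order, so the encoded layout is the same on every host. */
static void gw_write_u16(uint8_t *buf, uint16_t value)
{
    buf[0] = (uint8_t)(value & 0xFFu);        /* low byte first */
    buf[1] = (uint8_t)((value >> 8) & 0xFFu); /* then high byte */
}

/* Read it back the same way, independent of host endianness. */
static uint16_t gw_read_u16(const uint8_t *buf)
{
    return (uint16_t)((uint16_t)buf[0] | ((uint16_t)buf[1] << 8));
}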
I was thinking of storing type-size information in the serialized stream, but I think a better approach would be to define my own internal types that I use in place of uint8_t and friends, so that I can use macros to check the available type sizes and redefine my internal types to make sure they're big enough. Meaning something like this:
#if UINT8_MAX == 0xFF
typedef uint8_t gw_uint8_t;
#endif
I'm thinking that if I do this, then when the code is built on another machine I can control what the typedefs resolve to and make sure the sizes are at least n bits. Or maybe I don't even need to worry about this at all? Does sticking to the standard uint8_t types already take care of it?
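For what it's worth, here's a fuller version of the sketch above, roughly the fallback I was picturing. It leans on the fact that the exact-width types are optional in the standard (UINT8_MAX is only defined when uint8_t exists), while the least-width types are always available and guaranteed to be at least that wide; gw_uint8_t is just my hypothetical internal name:

#include <stdint.h>

/* Prefer the exact-width type when the platform provides it; otherwise fall
   back to the least-width type, which always exists and is at least 8 bits. */
#ifdef UINT8_MAX
typedef uint8_t gw_uint8_t;
#else
typedef uint_least8_t gw_uint8_t;
#endif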
thanks for any help or suggestions!