Binary File I/O
I'm making an adventure game IDE that will output a directory containing a platform-independent C/C++ source file and all the necessary graphics files and other resources. To avoid taking up all that storage on both the hard disk and in RAM while a project is being developed, I designed the engine to be VERY modular, so I can store all the necessary project data in a very compact struct.

When the user wants to save their project without "compiling" it into the ready-to-compile source-and-graphics form, I was going to use binary I/O to write the struct to a file. Loading a project would then just be a matter of reading the struct back in.

The problem is that I want this program to have as few OS limitations as possible. I will have to build several versions of the IDE for the different OSs anyway, but it would be nice if a project file saved on one system could be reopened on another. However, a book I was reading said that binary file I/O tends to cause problems in cross-platform programs. Would this single-struct file format cause any problems? Would it be best just to write the individual variables in ASCII format?
if you are careful, you can write out integers and individual fields from each struct in a specific byte ordering and avoid such platform concerns
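as a sketch of that idea, assuming a little-endian file format (the helper name `write_u32_le` is made up for illustration): write each byte of the integer yourself, in a fixed order, so the file looks the same no matter which machine wrote it.

```c
#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit integer to f in little-endian order, one byte at a
   time, regardless of the host machine's native byte ordering. */
int write_u32_le(FILE *f, uint32_t v)
{
    for (int i = 0; i < 4; i++) {
        if (fputc((v >> (8 * i)) & 0xFF, f) == EOF)
            return -1;   /* write error */
    }
    return 0;
}
```

because the code extracts each byte with shifts and masks instead of copying raw memory, it behaves identically on big-endian and little-endian hosts.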
it would likely be easier to write ascii routines, though. i recommend using "s-expressions" to write out your file data (check google)
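a rough sketch of the s-expression approach (the struct and its field names here are invented for illustration, not the poster's actual project struct): each record becomes a small parenthesized text form, which is trivially portable because it is plain ASCII.

```c
#include <stdio.h>

/* Hypothetical project record -- field names are made up
   for illustration only. */
struct project {
    const char *title;
    int width, height;
};

/* Write the record as a small s-expression, e.g.
   (project (title "Cave Quest") (size 320 200)) */
void write_project_sexp(FILE *f, const struct project *p)
{
    fprintf(f, "(project (title \"%s\") (size %d %d))\n",
            p->title, p->width, p->height);
}
```

reading it back needs a small recursive parser, but the format stays human-readable and versionable, which is handy for an IDE's save files.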
Byte ordering... I remember reading something about that. Thanks a lot; if it works, that's exactly what I want. I know MS systems "reverse" memory. Like if I wanted to write û (extended-ASCII character 150) to a file, it would write 01101001 instead of the 10010110 you'd expect from converting 150 to binary. Does this happen on Macs and Unix-based systems too?
idea: if there is a problem like that, I can write a function that reverses the nibbles before writing or after reading so the file is always Windows-standard, and use that function in whatever IDEs need it.
bit ordering is not important, so long as you limit all file read/write operations to work on a byte-by-byte basis. byte ordering is not determined by the operating system but by the host CPU - if you want absolute portability, pick one byte ordering for your file format on all systems, and have code that converts to the host machine's ordering when reading.
another problem is that many compilers pad structs with extra bytes to improve memory access speed. so, if you write out an entire struct, you will also write these padding bytes - and different compilers on different platforms may choose different padding sizes, or none at all. so, you cannot just 'fwrite' the whole struct if you want portability between different processors, much less different compilers
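to make the padding problem concrete, here is a sketch (the struct is invented for illustration) where the members add up to 9 bytes, but a typical compiler inserts padding so the struct is larger - and a different compiler may pad differently:

```c
#include <stdint.h>
#include <stddef.h>

/* Members total 1 + 4 + 4 = 9 bytes, but on most compilers that
   align uint32_t to 4 bytes, 3 padding bytes are inserted after
   'tag' (giving sizeof == 12). Because the padding is up to the
   compiler, fwrite(&r, sizeof r, 1, f) is not a portable format. */
struct record {
    uint8_t  tag;      /* 1 byte                      */
    uint32_t count;    /* 4 bytes, wants 4-byte align */
    uint32_t offset;   /* 4 bytes                     */
};
```

this is why the safe route is to write each member individually, byte by byte, rather than dumping the struct's raw memory.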
argh, to clarify - you'll need to read and write individual bytes and use the shift operator to put them in their proper places
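a sketch of the read side (helper name `read_u32_le` is made up), assuming the same little-endian file format: read one byte at a time and shift each into place, so the host's own byte ordering never matters.

```c
#include <stdint.h>
#include <stdio.h>

/* Read a 32-bit little-endian integer byte by byte and rebuild it
   with shifts, so the result is the same on any host byte ordering. */
int read_u32_le(FILE *f, uint32_t *out)
{
    uint32_t v = 0;
    for (int i = 0; i < 4; i++) {
        int c = fgetc(f);
        if (c == EOF)
            return -1;   /* short read */
        v |= (uint32_t)c << (8 * i);
    }
    *out = v;
    return 0;
}
```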
it might be possible to use network byte-ordering functions to do this for you, but i am not certain. somebody else?
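the network byte-ordering functions do exist for this: `htonl`/`ntohl` convert a 32-bit value between host order and network order (big-endian), and are no-ops on big-endian hosts. a sketch, assuming a POSIX system for the `<arpa/inet.h>` header (on Windows they live in winsock):

```c
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>   /* POSIX; htonl/ntohl */

/* Convert to network byte order (big-endian) before writing,
   and back to host order after reading. */
int write_u32_net(FILE *f, uint32_t v)
{
    uint32_t be = htonl(v);
    return fwrite(&be, sizeof be, 1, f) == 1 ? 0 : -1;
}

int read_u32_net(FILE *f, uint32_t *out)
{
    uint32_t be;
    if (fread(&be, sizeof be, 1, f) != 1)
        return -1;
    *out = ntohl(be);
    return 0;
}
```

note these only cover 16- and 32-bit integers (`htons`/`htonl`), so floats and 64-bit values still need hand-rolled handling.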
Can that last problem also be solved by carefully arranging the order in which struct members are written?
edit: And to do that I'll just write the struct members individually, not the whole struct.
some compilers allow this: #pragma pack(1)
or some similar directive to turn off padding, but i do not believe that is standard between all compilers :p
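a sketch of the push/pop form, which MSVC, GCC and Clang all accept (though, as said, it is not part of the C standard): the 9-byte struct from before comes out with no padding bytes at all.

```c
#include <stdint.h>

/* Non-standard but widely supported: force 1-byte packing for this
   struct so no padding is inserted. Beware that misaligned members
   can be slow, or even fault, on some CPUs. */
#pragma pack(push, 1)
struct packed_record {
    uint8_t  tag;      /* 1 byte  */
    uint32_t count;    /* 4 bytes */
    uint32_t offset;   /* 4 bytes */
};
#pragma pack(pop)
```

even with packing, the byte-ordering problem remains, so this pragma alone does not make `fwrite` of a whole struct portable.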
>I was going to use binary I/O to write the struct to the file.
>Would it be best just to write individual variables in ASCII format?