Large file I/O
What is the quickest way to read and write a file, especially one holding a large amount of data? The files in question can be as small as a few bytes or larger than 1 GB. The data in the files are unsigned chars, contiguous, and acted on individually (i.e., read one byte from input_file, do something to that byte, write the resulting byte to output_file, repeat).
Reading and writing one byte at a time is simple, but I'd imagine a billion calls to istream::read and ostream::write would be overkill. On the other hand, reading the entire file into memory at once would probably be a bad idea, too.
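For concreteness, here's roughly what the byte-at-a-time version looks like (process_byte is just a stand-in for whatever I do to each byte):

[code]
#include <fstream>

// Stand-in for whatever is done to each byte.
unsigned char process_byte(unsigned char b) { return b; }

int main()
{
    std::ifstream in("input_file", std::ios::binary);
    std::ofstream out("output_file", std::ios::binary);

    char c;
    while (in.get(c))   // one stream call per byte
        out.put(static_cast<char>(
            process_byte(static_cast<unsigned char>(c))));
}
[/code]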
Perhaps there's a happy medium? Perhaps there's a better alternative to fstream? Platform independence is not terribly important, though I do need to be able to do this in both Windows and Linux, and I'm willing to make two separate implementations if there's a better platform-dependent way to do it in each.
Any help would be greatly appreciated. Thanks in advance.
Using platform-specific functions will be faster, if anything is. For example, read() and write() under Linux. Do a board/MSDN search for the equivalent Windows functions.
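Roughly, the Linux version might look like this (a sketch; error handling is abbreviated, process_byte is a made-up placeholder, and the 64 KB chunk size is an arbitrary pick):

[code]
#include <fcntl.h>
#include <unistd.h>

// Made-up placeholder for the per-byte transform.
unsigned char process_byte(unsigned char b) { return b; }

int main()
{
    const int in  = open("input_file",  O_RDONLY);
    const int out = open("output_file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0)
        return 1;

    unsigned char buf[64 * 1024];           // 64 KB chunk; size is a guess
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
    {
        for (ssize_t i = 0; i < n; ++i)
            buf[i] = process_byte(buf[i]);  // transform the chunk in place
        write(out, buf, n);                 // real code should handle short writes
    }

    close(in);
    close(out);
}
[/code]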
The C functions like getc() might be faster too; I don't know, you'd have to experiment. They don't involve stream objects, and getc() is often implemented as a macro, so I'd think it could be faster.
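Something along these lines, roughly (process_byte is a made-up placeholder for the actual work; stdio does the buffering behind the scenes):

[code]
#include <cstdio>

// Made-up placeholder for the per-byte transform.
unsigned char process_byte(unsigned char b) { return b; }

int main()
{
    std::FILE *in  = std::fopen("input_file",  "rb");
    std::FILE *out = std::fopen("output_file", "wb");
    if (!in || !out)
        return 1;

    int c;
    while ((c = std::getc(in)) != EOF)   // getc buffers internally
        std::putc(process_byte(static_cast<unsigned char>(c)), out);

    std::fclose(in);
    std::fclose(out);
}
[/code]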
I think that's going to be really slow. Every call to read() is a system call, which means a round trip into the kernel and back. Calling read() once per byte is about as inefficient as it gets.
Originally Posted by dwks
What he needs is a buffering layer... Like iostreams :-)
I think the assumption that iostreams isn't going to be fast enough may be unfounded. I think he should write it using standard functions and see if it's good enough.
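For what it's worth, here's a rough middle-ground sketch with iostreams: read a big chunk with istream::read, transform it in memory, then write it with ostream::write. The 64 KB buffer size is just a guess and would need tuning, and process_byte is the same made-up placeholder as above:

[code]
#include <fstream>
#include <vector>

// Made-up placeholder for the per-byte transform.
unsigned char process_byte(unsigned char b) { return b; }

int main()
{
    std::ifstream in("input_file", std::ios::binary);
    std::ofstream out("output_file", std::ios::binary);

    std::vector<char> buf(64 * 1024);   // chunk size is a guess; tune it
    while (in)
    {
        in.read(&buf[0], static_cast<std::streamsize>(buf.size()));
        const std::streamsize n = in.gcount();   // bytes actually read
        for (std::streamsize i = 0; i < n; ++i)
            buf[i] = static_cast<char>(
                process_byte(static_cast<unsigned char>(buf[i])));
        out.write(&buf[0], n);
    }
}
[/code]

That keeps the number of stream calls tiny (one read and one write per 64 KB instead of per byte), while still never holding more than one chunk in memory.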