Large file i/o


  1. #1
    Registered User
    Join Date
    Aug 2006
    Posts
    43

    Question Large file i/o

    What is the quickest way to read/write a file, especially large amounts of data? The files in question can be as small as several bytes, or as large as >1GB. The data in the files are of type unsigned char, are contiguous, and are acted upon individually (i.e., read one byte from input_file, do something to that byte, write resulting byte to output_file, repeat).

    Reading and writing one byte at a time is simple, but I'd imagine that a billion calls to istream::read and ostream::write is a bit overkill. On the other hand, reading the entire file into memory would probably be a bad idea, too.

    Perhaps there's a happy medium? Perhaps there's a better alternative to fstream? Platform independence is not terribly important, though I do need to be able to do this in both Windows and Linux, and I'm willing to make two separate implementations if there's a better platform-dependent way to do it in each.

    Any help would be greatly appreciated. Thanks in advance.
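    The "happy medium" the poster asks about is usually chunked I/O: read a fixed-size block with istream::read, process it in memory, and write it out with ostream::write. A minimal sketch of that idea (the file names, the 64 KB chunk size, and the XOR-style transform are placeholders standing in for "do something to that byte"):

    ```cpp
    #include <cassert>
    #include <fstream>
    #include <vector>

    // Copy `src` to `dst` in fixed-size chunks, applying a per-byte transform.
    // The bitwise-NOT here is just a placeholder for the real per-byte work.
    bool transform_file(const char* src, const char* dst) {
        std::ifstream in(src, std::ios::binary);
        std::ofstream out(dst, std::ios::binary);
        if (!in || !out) return false;

        std::vector<char> buf(1 << 16);  // 64 KB per chunk
        while (in) {
            in.read(buf.data(), buf.size());
            std::streamsize n = in.gcount();  // last chunk may be short
            for (std::streamsize i = 0; i < n; ++i)
                buf[i] = static_cast<char>(~buf[i]);  // placeholder transform
            out.write(buf.data(), n);
        }
        return true;
    }
    ```

    With a chunk size in the tens of kilobytes, a 1 GB file takes on the order of tens of thousands of read calls instead of a billion, while memory use stays fixed.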

  2. #2
    dwks (Frequently Quite Prolix)
    Join Date
    Apr 2005
    Location
    Canada
    Posts
    8,048
    Using platform-specific functions will be faster, if anything is indeed faster. For example, read() and write() under Linux. Do a board/MSDN search, perhaps, for the equivalent Windows functions.

    The C functions like getc() might be faster too; I don't know. You'd have to do some experimenting. They don't involve stream objects, and getc() can be implemented as a macro, so I would think it might be faster.
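    The getc()/putc() approach might look like the sketch below. Note that FILE* streams are buffered internally, so most of these calls never touch the OS; the function name and the transform are placeholders, not anything from the thread.

    ```cpp
    #include <cstdio>

    // Per-byte loop over C stdio. getc()/putc() work against FILE's internal
    // buffer, so the cost per byte is far below one system call each.
    bool copy_transformed(const char* src, const char* dst) {
        std::FILE* in = std::fopen(src, "rb");
        std::FILE* out = std::fopen(dst, "wb");
        if (!in || !out) {
            if (in) std::fclose(in);
            if (out) std::fclose(out);
            return false;
        }

        int c;
        while ((c = std::getc(in)) != EOF)
            std::putc(~c & 0xFF, out);  // placeholder per-byte transform

        std::fclose(in);
        return std::fclose(out) == 0;  // flush succeeded
    }
    ```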
    dwk


  3. #3
    brewbuck (Captain Crash)
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,274
    Quote Originally Posted by dwks View Post
    Using platform-specific functions will be faster, if anything is indeed faster. For example, read() and write() under Linux. Do a board/MSDN search, perhaps, for the equivalent Windows functions.
    I think that's going to be really slow. A call to read() hits the kernel, which means two context switches. Calling read() once for every single byte is about as inefficient as it gets.

    What he needs is a buffering layer... Like iostreams :-)

    I think the assumption that iostreams isn't going to be fast enough may be unfounded. He should write it using standard functions first and measure whether it's good enough.
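    "Write it and measure" can be as simple as wrapping the naive per-byte iostream loop in a timer before reaching for platform-specific calls. A sketch of such a baseline (the function name and file paths are made up for illustration):

    ```cpp
    #include <chrono>
    #include <fstream>

    // Time one whole-file pass using plain istream::get()/ostream::put(),
    // to get a baseline before trying anything platform-specific.
    double time_byte_copy(const char* src, const char* dst) {
        auto t0 = std::chrono::steady_clock::now();
        {
            std::ifstream in(src, std::ios::binary);
            std::ofstream out(dst, std::ios::binary);
            char c;
            while (in.get(c))
                out.put(c);  // plug the real per-byte work in here
        }  // streams close (and flush) here, inside the timed region
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();  // seconds
    }
    ```

    If the measured throughput is already close to the disk's, the kernel-call overhead brewbuck describes is being hidden by the stream's buffer and there is little left to optimize.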


