Thread: Handling f-write

  1. #1
    Banned
    Join Date
    Jul 2022
    Posts
    112

    Handling f-write

    Handling write() correctly is far more complex than that. It has partial writes and retryable errors.
    I consider assigning errno bad form. But that might be my own hang-up.
    @hamster_nz
    Does the possibility of fwrite() / write() failing increase with a larger buffer?
    Last edited by kodax; 11-21-2022 at 10:29 PM.

  2. #2
    Salem
    and the hat of int overfl
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,659
    Why do you believe a larger buffer size would be a problem?

    If you can't understand all the caveats, maybe you should stick to fgetc and fputc
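    For what it's worth, a byte-at-a-time copy sidesteps buffer sizing entirely. A minimal sketch (copy_stream is a made-up name):

    Code:
    #include <stdio.h>

    // Copy one stream to another a byte at a time. There is no user buffer
    // to size, and the only failure modes are a read error or a write error.
    int copy_stream(FILE *in, FILE *out)
    {
      int c;
      while ((c = fgetc(in)) != EOF) {
        if (fputc(c, out) == EOF)
          return -1;                  // write error
      }
      return ferror(in) ? -1 : 0;     // distinguish end-of-file from a read error
    }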
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  3. #3
    Banned
    Join Date
    Jul 2022
    Posts
    112
    Handling write() correctly is far more complex than that. It has partial writes and retryable errors.
    He could be right.

    In fact, there is a possibility that write()
    returns partial writes for larger buffers.

  4. #4
    Registered User
    Join Date
    Sep 2020
    Posts
    425
    Quote Originally Posted by kodax View Post
    He could be right.

    In fact, there is a possibility that write()
    returns partial writes for larger buffers.
    It is written in the 'man' page:

    The number of bytes written may be less than count if, for
    example, there is insufficient space on the underlying physical
    medium, or the RLIMIT_FSIZE resource limit is encountered (see
    setrlimit(2)), or the call was interrupted by a signal handler
    after having written less than count bytes. (See also pipe(7).)
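    In other words, a single write() call can succeed and still transfer fewer bytes than you asked for, so the return value has to be compared against the requested count. A minimal sketch of that check (check_write is a made-up helper):

    Code:
    #include <stdio.h>
    #include <unistd.h>

    // Issue one write() and classify the outcome: error, short write, or
    // complete write. fd, buf, and count are assumed set up by the caller.
    void check_write(int fd, const void *buf, size_t count)
    {
      ssize_t n = write(fd, buf, count);
      if (n < 0)
        perror("write");    // hard error, errno says why
      else if ((size_t)n < count)
        fprintf(stderr, "short write: %zd of %zu bytes\n", n, count);
    }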

  5. #5
    Banned
    Join Date
    Jul 2022
    Posts
    112
    I am not sure, but a larger buffer will have quite an impact,
    especially if you are short on RAM..

    fwrite consume free memory continuously - C++ Forum

  6. #6
    Registered User
    Join Date
    May 2012
    Location
    Arizona, USA
    Posts
    948
    Did you even read that thread? Especially the comment by jsmith:
    Normal behavior. The data you are writing (to disk) is being written to the page cache as well, and the kernel will expand the page cache as long as there is memory available (to a certain point).
    The memory is being "used" by the kernel for the disk cache, but that's not a problem because the kernel can and will free up memory used by the disk cache when memory is needed by other things.

    Why not just simply do something like this to ensure you write everything in your buffer:

    Code:
    // #include <errno.h> // needed for errno and EINTR
    // char buf[...]; // the buffer to write
    // size_t size;   // the size of the buffer
    size_t start = 0;
    do {
      ssize_t n = write(fd, buf + start, size - start);
      if (n < 0) {
        if (errno == EINTR)
          continue; // interrupted before anything was written; just retry
        // Write error. errno is set. Handle it however you want to,
        break;      // but don't fall through and add a negative n to start.
      }
      start += (size_t)n;
    } while (start < size);
    And fwrite automatically writes out as much as possible unless it encounters a write error (it does essentially what the above code does). So if you have a FILE handle, use fwrite and don't worry about it. If you only have a file descriptor and want to write everything out despite partial writes, use something like the above code.
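    For the FILE* case the whole check collapses to comparing fwrite's return value against the requested size. A minimal sketch (write_all is a made-up name):

    Code:
    #include <stdio.h>

    // fwrite retries internally, so it only returns short on a write error.
    // Returns 0 on success, -1 on error.
    int write_all(FILE *fp, const void *buf, size_t size)
    {
      if (fwrite(buf, 1, size, fp) != size)
        return -1;
      return 0;
    }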

  7. #7
    Banned
    Join Date
    Jul 2022
    Posts
    112
    Usually bufsize is 1024 and sometimes 4096.
    The point was about the chance of a larger buffer failing..
    Last edited by kodax; 11-22-2022 at 01:14 PM.

  8. #8
    Registered User
    Join Date
    Sep 2020
    Posts
    425
    Quote Originally Posted by kodax View Post
    Usually bufsize is 1024 and sometimes 4096.
    The point was about the chance of a larger buffer failing..
    You still haven't said what 'failing' means in your context. write() and fwrite() should do only what is described in their documentation. If they do that, then they have succeeded.

    The 'bufsize' buffer of a FILE is in the user's process - anything written to that and not flushed to disk can be lost, even though fwrite() says it has been written OK. If you want to be sure that the data has been passed to the OS and written using the write() system call, then you need to use fflush() and check its return value too. Once again, from the documentation:

    ...
    For output streams, fflush() forces a write of all user-space buffered data for the given output or update stream via the stream's underlying write function.
    ...
    Upon successful completion 0 is returned. Otherwise, EOF is returned and errno is set to indicate the error.
    ...
    To ensure that the data is physically stored on disk the kernel buffers must be flushed too, for example, with sync(2) or fsync(2).
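    Putting the layers together, a write that is meant to survive a crash looks roughly like this sketch (flush_to_disk is a made-up name):

    Code:
    #include <stdio.h>
    #include <unistd.h>

    // Push the stdio buffer to the kernel, then ask the kernel to push its
    // page cache to the physical medium. Each step can fail independently.
    int flush_to_disk(FILE *fp)
    {
      if (fflush(fp) == EOF)      // user-space buffer -> kernel
        return -1;
      if (fsync(fileno(fp)) < 0)  // kernel buffers -> disk
        return -1;
      return 0;
    }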

  9. #9
    Registered User
    Join Date
    May 2012
    Location
    Arizona, USA
    Posts
    948
    Quote Originally Posted by kodax View Post
    Usually bufsize is 1024 and sometimes 4096.
    The point was about the chance of a larger buffer failing..
    It depends on the system (some systems might support smaller atomic writes than 1024 bytes; I think only 512 bytes is guaranteed to be atomic in many cases, especially pipes). Either way, you know that a write syscall can be interrupted somehow and result in a partial write, so why worry about the chances of it failing or being partially completed? Just write your code so it can handle partial writes (and errors, of course).
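    For reference, the pipe atomicity limit is PIPE_BUF: POSIX only guarantees at least 512 bytes, and on Linux <limits.h> defines it as 4096:

    Code:
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
      // Writes of at most PIPE_BUF bytes to a pipe are atomic; anything
      // larger may be split and interleaved with other writers.
      printf("PIPE_BUF = %d bytes\n", PIPE_BUF);
      return 0;
    }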

  10. #10
    Banned
    Join Date
    Jul 2022
    Posts
    112
    Is using a bigger buffer useful?

    I use buffer for quite a long time when I need to copy a stream or read a file.

    And every time I set my buffer size to 2048 or 1024, but from my point of view a buffer is like a "bucket" which carries my "sand" (stream) from one part of my land (memory) to an other part.

    So, increase my bucket capacity will in theory allow me to do less travel? Is this a good things to do in programming?

    Sep 6, 2012
    Cyrbil
    c++ - Is using a bigger buffer useful? - Software Engineering Stack Exchange


    This guy describes the correct theoretical scenario,
    which in itself is a paradox.

    How can one carry a giant sand bucket,
    even if there were many people helping?
    This would never work for so many reasons.

    I believe the chance of failure increases with a larger buffer..
    Last edited by kodax; 11-23-2022 at 05:05 AM.

  11. #11
    Salem
    and the hat of int overfl
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,659
    > I believe the chance of failure increases with a larger buffer..
    Explain your belief for the benefit of the rest of us.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  12. #12
    Banned
    Join Date
    Jul 2022
    Posts
    112
    8.6. Obtaining Large Buffers

    As we have noted in previous sections, allocations of large, contiguous memory buffers are prone to failure. System memory fragments over time, and chances are that a truly large region of memory will simply not be available. Since there are usually ways of getting the job done without huge buffers, the kernel developers have not put a high priority on making large allocations work. Before you try to obtain a large memory area, you should really consider the alternatives. By far the best way of performing large I/O operations is through scatter/gather operations, which we discuss in Chapter 15.
    8.6. Obtaining Large Buffers
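    The user-space counterpart of that advice is scatter/gather I/O with writev(), which hands the kernel several separate buffers in one call instead of one huge contiguous one. A minimal sketch:

    Code:
    #include <stdio.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
      // Two independent buffers written with a single system call,
      // no large contiguous allocation needed.
      char header[] = "header: ";
      char body[]   = "payload\n";
      struct iovec iov[2] = {
        { .iov_base = header, .iov_len = strlen(header) },
        { .iov_base = body,   .iov_len = strlen(body)   },
      };
      if (writev(STDOUT_FILENO, iov, 2) < 0)
        perror("writev");  // note: writev can also return a partial write
      return 0;
    }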

  13. #13
    Registered User
    Join Date
    May 2012
    Location
    Arizona, USA
    Posts
    948
    You're moving the goalposts. First it was about the chances of a write or fwrite failing with a large buffer. Now it's about allocating large buffers. What's next? How much does a large buffer weigh? (Oh wait, you kind of already did in #10 when you compared a large buffer to a giant sand bucket.)

    What exactly is your question or issue?

    Edit to add: Your link is about memory allocations inside the Linux kernel. User space memory allocations have different characteristics. Don't confuse the two.

  14. #14
    Banned
    Join Date
    Jul 2022
    Posts
    112
    From my observations, speed is reduced with large 8 MB buffers, as compared to 256 KB buffers.
    Without a doubt, large buffers decrease speed significantly.

  15. #15
    rstanley
    Registered User
    Join Date
    Jun 2014
    Location
    New York, NY
    Posts
    1,110
    Quote Originally Posted by kodax View Post
    From my observations, speed is reduced with large 8 MB buffers, as compared to 256 KB buffers.
    Without a doubt, large buffers decrease speed significantly.
    Then show the code that proves your theory!

    You seem to pull quotes from sources that match your alleged theories, whether they are valid or not for the current topic. There are people who still believe the earth is flat, and they probably quote sources to support their invalid theories too! ;^)
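    For what it's worth, a rough harness along these lines would put numbers behind it (a sketch only: the path, sizes, and time_write helper are made up, and results will depend heavily on the page cache and the disk):

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    static double now_sec(void)
    {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);  // wall-clock time, not CPU time
      return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    // Write 'total' bytes to 'path' in chunks of 'bufsize' and return the
    // elapsed time in seconds, or -1.0 on failure.
    static double time_write(const char *path, size_t bufsize, size_t total)
    {
      char *buf = malloc(bufsize);
      if (!buf)
        return -1.0;
      memset(buf, 'x', bufsize);

      FILE *fp = fopen(path, "wb");
      if (!fp) {
        free(buf);
        return -1.0;
      }

      double t0 = now_sec();
      for (size_t done = 0; done < total; done += bufsize)
        if (fwrite(buf, 1, bufsize, fp) != bufsize)
          break;                            // write error; stop the run
      fflush(fp);                           // include the final flush
      double t1 = now_sec();

      fclose(fp);
      free(buf);
      return t1 - t0;
    }

    int main(void)
    {
      size_t total = 256u * 1024 * 1024;    // 256 MB per run
      printf("256 KB buffers: %.3f s\n",
             time_write("/tmp/testfile", 256u * 1024, total));
      printf("8 MB buffers:   %.3f s\n",
             time_write("/tmp/testfile", 8u * 1024 * 1024, total));
      return 0;
    }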
