Thread: HDD block size question

  1. #1
    Ugly C Lover audinue's Avatar
    Join Date
    Jun 2008
    Location
    Indonesia
    Posts
    489

    HDD block size question

They say that if we use a bigger block size on our HDD, file operations will be much faster.

Logically, this makes sense.

I have an HDD with a 64K block size (the largest the NT family supports), but there seems to be no speed difference compared to another HDD of mine with the standard 4K block size.

Note: both drives are Seagate, the same size, and formatted NTFS.

    What's wrong?
    Just GET it OFF out my mind!!

  2. #2
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
I'd be careful with such big block sizes, however. They mean a lot of wasted space.
Files are allocated in whole clusters, so if a file is anything smaller than 64KB, the "physical" space it takes on disk will still be 64KB, and the difference (64KB minus the actual file size) is wasted, meaning unusable! The same goes for the tail end of any file that isn't an exact multiple of 64KB.
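To make the waste concrete, here is a minimal C sketch (the file sizes and the 64K cluster are just illustrative assumptions) that computes how much slack each file would leave:
Code:
#include <stdio.h>

/* Bytes actually allocated on disk: the file size rounded up
   to the next multiple of the cluster size. */
static unsigned long long allocated(unsigned long long file_size,
                                    unsigned long long cluster)
{
    return ((file_size + cluster - 1) / cluster) * cluster;
}

int main(void)
{
    const unsigned long long cluster = 65536;  /* 64K clusters */
    const unsigned long long sizes[] = { 1, 700, 5000, 65536, 70000 };

    for (int i = 0; i < 5; i++) {
        unsigned long long a = allocated(sizes[i], cluster);
        printf("file %6llu bytes -> allocated %6llu, wasted %6llu\n",
               sizes[i], a, a - sizes[i]);
    }
    return 0;
}
With 4K clusters the same 700-byte file wastes roughly 3.3K instead of nearly 64K, which is where the "lot of wasted space" comes from.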
    Quote Originally Posted by Adak View Post
io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  3. #3
    Woof, woof! zacs7's Avatar
    Join Date
    Mar 2007
    Location
    Australia
    Posts
    3,459
Quote Originally Posted by audinue View Post
They say that if we use a bigger block size on our HDD, file operations will be much faster.

Logically, this makes sense.
Well, yes and no. There is a point beyond which operations will in fact become slower (there is a sweet spot), quite apart from the other downsides of a large block size.

  4. #4
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
The speed approaches the optimal speed asymptotically as you increase the block size, so there is a point at which increasing it further just won't matter anymore. I'm guessing that point is somewhere around 4K, and that's why Microsoft set it as the default. Beyond that, you will be wasting a lot of space for a negligible gain in speed.

Try comparing a 64-byte block size to 4K and you will probably see a difference.
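If you want to see the diminishing returns yourself, a rough sketch like this times sequential reads with different buffer sizes. bigfile.bin is a placeholder; pick a file much larger than your RAM so the OS cache doesn't hide the disk. Note it varies the read buffer, which only approximates the effect of the on-disk cluster size:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Read the whole file with the given buffer size and return the
   elapsed time in seconds, or -1.0 on error. time() is coarse
   (1 s resolution) but, unlike clock() on POSIX systems, it counts
   the time spent waiting on the disk. */
static double time_read(const char *path, size_t bufsize)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1.0;

    char *buf = malloc(bufsize);
    if (!buf) { fclose(f); return -1.0; }

    time_t start = time(NULL);
    while (fread(buf, 1, bufsize, f) == bufsize)
        ;                      /* keep reading until a short read (EOF) */
    double elapsed = difftime(time(NULL), start);

    free(buf);
    fclose(f);
    return elapsed;
}

int main(void)
{
    const size_t sizes[] = { 64, 512, 4096, 65536, 1048576 };

    for (int i = 0; i < 5; i++)
        printf("%7lu-byte reads: %.0f s\n",
               (unsigned long)sizes[i], time_read("bigfile.bin", sizes[i]));
    return 0;
}
Expect a big jump from 64 bytes to 4K, and much smaller differences after that.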

  5. #5
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,656
> Try comparing a 64-byte block size to 4K and you will probably see a difference.
The sector size is 512 bytes, so you can't make clusters smaller than that anyway.

> They say that if we use a bigger block size on our HDD, file operations will be much faster.
Windows will try to allocate clusters consecutively on disk anyway, so
4 + 4 + 4 + 4 all on the same track is no different from 16 (on the same track).

What really kills HDD performance is that average seek times are still measured in milliseconds.
That is, over a million times slower than a typical processor's cycle time.
In human terms, that's 1 second versus 12 DAYS!
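A quick sanity check of that arithmetic, assuming a ~1 ns CPU cycle and taking 1 ms as the low end of "milliseconds":
Code:
#include <stdio.h>

int main(void)
{
    const double cycle_s = 1e-9;  /* ~1 ns per cycle (assumed ~1 GHz+ CPU) */
    const double seek_s  = 1e-3;  /* 1 ms: the low end of a seek time */
    double ratio = seek_s / cycle_s;

    /* Stretch one cycle to one second; a seek then lasts this long. */
    printf("a seek is %.0fx slower than a cycle\n", ratio);
    printf("on a human scale: 1 second vs %.1f days\n", ratio / 86400.0);
    return 0;
}
This prints a ratio of 1000000x and about 11.6 days, which is where the "1 second to 12 days" comes from.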
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  6. #6
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
Small cluster sizes can kill your defragmenter, though. I know because I've tried it.
    Quote Originally Posted by Adak View Post
io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  7. #7
    Ugly C Lover audinue's Avatar
    Join Date
    Jun 2008
    Location
    Indonesia
    Posts
    489
Quote Originally Posted by Elysia View Post
Small cluster sizes can kill your defragmenter, though. I know because I've tried it.
I wrote a defragmenter program two years ago, which is why I use a 64K block size instead of 4K.

On an approx. 120 GB HDD, defragmenting takes less than 30 minutes at 64K, but 60+ minutes at 4K.

The main drawback of 64K is that collections of small related files (a photo album, offline websites, hentai :P, or anything containing a lot of small files) should be archived in a mountable archive format such as Zip for quick access.
    Last edited by audinue; 08-27-2009 at 05:10 PM.
    Just GET it OFF out my mind!!
