Thread: file size with binary write larger than it should be?

  1. #16
    oogabooga
    Join Date: Jan 2008
    Posts: 2,808
    I just reread the initial post and you're right. That makes more sense!
    The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss

  2. #17
    oogabooga
    Join Date: Jan 2008
    Posts: 2,808
    If he really needs to save space, he could probably manually pack two characters into a single 16-bit char.
    The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss

  3. #18
    Registered User
    Join Date: Feb 2012
    Posts: 10
    Quote Originally Posted by oogabooga View Post
    It looks like you may have 16-bit ints as well.
    sizeof() actually doesn't return the number of bytes, but the number of chars.
    One last piece of code. Please try this:
    Code:
    #include <stdio.h>
    #include <limits.h>
    
    int main() {
        printf("%d %d\n", INT_MIN, INT_MAX);
        return 0;
    }
    This code yields:

    -32768 -1

    So my Max int is -1? I may be a novice with C, and correct me if I'm wrong, but that seems like total nonsense.

    Also, sizeof(struct glog) yields 44 like you say, yet my actual file size says 88 bytes (just a simple Windows text file). However, when I count up all the bytes that should be in the struct on my fingers and toes, I get 80.

    But now that you've mentioned the 16-bit char being an architectural design, it makes sense. I have a total of 8 'char' members in my struct, and I was expecting those to take up only 1 byte each. So instead of the 80 bytes I get when I count, the final file size is 88 (i.e. the chars are double the size I expected, which fits the theory).

    Whether that makes sense or not, the problem remains that I need to find a way to cut down on the data usage for this file I am writing. If it's true that my chars are 16-bit types on this architecture, is there any way in C to define a macro, or something along those lines, to create a legitimate 8-bit type to replace my chars with?
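
    For reference, I assume I can at least confirm the byte width on this compiler with something like the following (CHAR_BIT from <limits.h> should be the number of bits in a char):
    Code:
    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        // CHAR_BIT is the number of bits in a char, the smallest
        // addressable unit in C. If this prints 16, then no type
        // narrower than 16 bits can exist on this compiler.
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        return 0;
    }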

  4. #19
    Registered User
    Join Date: Feb 2012
    Posts: 10
    Quote Originally Posted by oogabooga View Post
    If he really needs to save space, he could probably manually pack two characters into a single 16-bit char.
    Could you elaborate on this idea a bit? I get it at a high level, and if I come up with the right implementation there should be no big problem doing this (it will be a lot more annoying than the original 'fwrite(glog)' I thought I was going to get away with). But how would something like cutting two 16-bit types in half and sticking them together actually work in code?

  5. #20
    oogabooga
    Join Date: Jan 2008
    Posts: 2,808
    Could you elaborate on this idea a bit?
    Something like this:
    Code:
    typedef unsigned char uchar;   // 16 bits wide on this DSP

    uchar a = 'a', b = 'b';        // each value must fit in 8 bits
    uchar packed;

    // to pack: a in the high 8 bits, b in the low 8 bits
    packed = ((unsigned)a << 8) | (b & 0xffu);

    // to unpack
    a = (packed >> 8) & 0xffu;
    b = packed & 0xffu;
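
    If you end up packing all eight of the chars in your struct before the fwrite, a loop version might look like this (untested sketch; the names are made up):
    Code:
    // Untested sketch: pack n 8-bit values (stored one per 16-bit char)
    // into n/2 chars before writing. Assumes n is even and that
    // char is 16 bits on this target.
    void pack_bytes(const unsigned char *src, unsigned char *dst, int n)
    {
        int i;
        for (i = 0; i < n; i += 2)
            dst[i / 2] = ((unsigned)src[i] << 8) | (src[i + 1] & 0xffu);
    }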
    The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss

  6. #21
    oogabooga
    Join Date: Jan 2008
    Posts: 2,808
    So my Max int is -1? I may be a novice with C, and correct me if I'm wrong, but that seems like total nonsense.
    I agree.

    However, when I count up all the bytes that should be in the struct on my fingers and toes, I get 80.
    Considering that char, short (and int) are each 2 bytes on your target machine, it actually adds up to 44 16-bit chunks (which is what sizeof reports), or 88 bytes.
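
    You can see the units sizeof counts in directly; this (untested on your toolchain) should print 1 1 1 2 on your DSP, since every count is in 16-bit chars rather than octets:
    Code:
    #include <stdio.h>

    int main(void) {
        // sizeof counts in chars, and a char is 16 bits here,
        // so none of these are octet counts.
        printf("%u %u %u %u\n",
               (unsigned)sizeof(char), (unsigned)sizeof(short),
               (unsigned)sizeof(int), (unsigned)sizeof(long));
        return 0;
    }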
    The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss

  7. #22
    Registered User
    Join Date: May 2009
    Posts: 4,183
    @Rath: Try using unsigned to print

    Code:
    printf("%u %u\n", INT_MIN, INT_MAX);
    "...a computer is a stupid machine with the ability to do incredibly smart things, while computer programmers are smart people with the ability to do incredibly stupid things. They are,in short, a perfect match.." Bill Bryson

  8. #23
    Registered User
    Join Date: Feb 2012
    Posts: 10
    Quote Originally Posted by stahta01 View Post
    @Rath: Try using unsigned to print

    Code:
    printf("%u %u\n", INT_MIN, INT_MAX);
    This code yields:

    32768 65535

    So it does seem that my integers are 16 bits in size whether they're signed or unsigned. Hm. I don't see why this architecture makes sense from a design standpoint; maybe it was designed for low-level assembly work? I would have assumed it would use fairly traditional standard types, though.

    @oogabooga, thank you for that implementation. I have thought it over, and if worst comes to worst and I absolutely cannot create a simple 1-byte data type on this architecture, I can at least do a few quick shift operations before my write/read without eating up too many cycles on the system in the process. It's just going to get a little hairy keeping track of things like 'the first 8 bits of my 16-bit char are the minutes of Time1, the second 8 are Time2', etc., for all these chars I have floating around.
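
    Maybe a few helper macros would keep that bookkeeping straight. Just a sketch (the field names are made up):
    Code:
    // Sketch only: give each half of a packed 16-bit char a name.
    #define PACK2(hi, lo)  ((unsigned char)((((unsigned)(hi) & 0xffu) << 8) \
                                            | ((unsigned)(lo) & 0xffu)))
    #define HI8(w)         (((unsigned)(w) >> 8) & 0xffu)
    #define LO8(w)         ((unsigned)(w) & 0xffu)

    // e.g. (made-up names):
    //   packed_minutes = PACK2(time1_min, time2_min);
    //   time1_min = HI8(packed_minutes);
    //   time2_min = LO8(packed_minutes);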

  9. #24
    oogabooga
    Join Date: Jan 2008
    Posts: 2,808
    It doesn't look like you'll be able to create anything but 16-bit types on that architecture. It's a 16-bit digital signal processor, and that's that. The reason for the restriction is to simplify and thus speed up the hardware.

    Perhaps the limit on your file size is unrealistic.
    The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss

  10. #25
    Registered User
    Join Date: Feb 2012
    Posts: 10
    Quote Originally Posted by oogabooga View Post
    It doesn't look like you'll be able to create anything but 16-bit types on that architecture. It's a 16-bit digital signal processor, and that's that. The reason for the restriction is to simplify and thus speed up the hardware.

    Perhaps the limit on your file size is unrealistic.
    I believe so too; alas, I am a mere intern and would rather get it to spec than ask for a change in requirements on my first project.

    And I believe you are right about the DSP core limitation. I have been scouring the documentation for the past few hours trying to find where the spec says that both chars and ints default to 16-bit types on this processor, and it says exactly that. Bit-shifting solution it is. Thank you very much for your help.
