
Concepts of memory

This is a discussion on Concepts of memory within the C++ Programming forums, part of the General Programming Boards category.

  1. #1
    Bored Programmer
    Join Date
    Jul 2009
    Location
    Tomball, TX
    Posts
    407

    Concepts of memory

    Hello,

    I had a few questions about memory. I scraped up these lectures today while wandering the web.
    Free Online Computer Science Course | Bits | Harvard Extension School

    While listening to the lectures about entropy and bit storage I couldn't help but feel like I was using way too much data over the course of my programs. In situations where I use built-in data types like int for numbers that never reach anywhere near the type's maximum, am I wasting a ton of data at runtime if the average number in my program stays under 1000?

    If a 32-bit int is used for a number that has a max value of 1000 (and is never negative), am I wasting 21 bits for every integer I declare in the program? If I had, say, 1000 classes active that all carried these integers, would I be wasting 21,000 bits, since the information could be represented in 11 bits (or possibly less using frequency compression concepts) instead?

    If these do reduce space, is it the storage space (size of the file) that shrinks, or the runtime memory you would see in RAM?

    Sorry if one question turned into five. I just became paranoid that I was throwing tons of bits in the trash (declaring and allotting but never using them) in my programs.

    I realized I am kinda confused on the subject.

  2. #2
    SAMARAS std10093's Avatar
    Join Date
    Jan 2011
    Location
    Nice, France
    Posts
    2,681
    You have to calm down. I remember that when memory was precious, people in C used to use unions to gain some space. I think you are overreacting.
    Code - functions and small libraries I use


    It’s 2014 and I still use printf() for debugging.


    "Programs must be written for people to read, and only incidentally for machines to execute. " —Harold Abelson

  3. #3
    Bored Programmer
    Join Date
    Jul 2009
    Location
    Tomball, TX
    Posts
    407
    In gaming, memory is still precious. ;o)
    Last edited by Lesshardtofind; 12-06-2012 at 03:26 AM.

  4. #4
    SAMARAS std10093's Avatar
    Join Date
    Jan 2011
    Location
    Nice, France
    Posts
    2,681
    Have you ever looked into how many games treat memory?

  5. #5
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,893
    The point is, int is easiest to handle. You don't have to think too much about it, and the CPU is usually most efficient with it. Unless you're in a situation where memory is really scarce or you have millions and millions of data points (not just a few thousand), stay with int.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  6. #6
    Internet Superhero
    Join Date
    Sep 2006
    Location
    Denmark
    Posts
    964
    You can use short if you really want to save memory, but on modern computers the difference is so negligible that it's usually not worth the hassle.

    Edit: Also, I don't think the standard guarantees that a short is smaller than an int, so it might not get you a memory advantage in the end anyway.
    How I need a drink, alcoholic in nature, after the heavy lectures involving quantum mechanics.

  7. #7
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,893
    Quote Originally Posted by Neo1 View Post
    Edit: Also, I don't think the standard guarantees that a short is smaller than an int, so it might not get you a memory advantage in the end anyway.
    If you're ever in the situation where you have to care about the size of your data, use the <cstdint> header. It contains types with guaranteed sizes. For example, int16_t is guaranteed to have 16 bits (but is not guaranteed to actually exist), while int_least16_t is guaranteed to exist, guaranteed to have at least 16 bits, and will be the smallest such type the compiler can provide. (Meaning that on an architecture with 9-bit bytes, it might be 18 bits large, or even 36 bits if the machine cannot address smaller things, but there won't be a different type that you could use instead.)

  8. #8
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,239
    Lesshardtofind, you are talking like a hardware designer! (Not a bad thing)

    Memory usage and performance are inextricably linked through the CPU cache. The more data you can fit in cache, the fewer cache misses per datum will be incurred. However, the processor might be faster when working with some data types rather than others. The tradeoff may be different from processor to processor. And the data access pattern matters as well -- the CPU/cache may be efficient at streaming access but worse at random access.

    Let actual requirements drive your decisions, not some vague need to be "more efficient." Decide how much memory you need the thing to use and how fast it has to work, then figure out how to satisfy those requirements.
    Code:
    //try
    //{
    	if (a) do { f( b); } while(1);
    	else   do { f(!b); } while(1);
    //}

  9. #9
    Bored Programmer
    Join Date
    Jul 2009
    Location
    Tomball, TX
    Posts
    407
    Thanks for the responses all. I appreciate the information.

    I have read some more about how the processor has trouble handling things smaller than 32 bits, or sometimes just won't. So I thought of some ways to make one int represent multiple different types of data so that all 32 bits were used, but I realized the drawback would come with arithmetic: to operate on one data point inside the integer, I would either have to write my own functions for that arithmetic or just separate the values, which would require more CPU work and probably negate any memory saved.

    Though I guess when transferring data, compressing two or three ints into one could save space, and with file save functions, reducing every variable down to only its needed size could be useful as well.

    As for why I am so interested in saving space: I am producing a game for the indie Xbox Live market, and I have run into a few points where I can't do what I want with my classes because having 500 bullets on the screen at once starts to slow the GPU. I realize that 250 bullets that do twice the damage would accomplish the same thing, but I want some boss battles to be visually overwhelming for the user, making deaths more likely to occur.

    As with any new subject that I learn, every answer I find has my brain posing six more questions about its implications. I am starting to wonder if these frameworks (XNA Game Studio) carry a bit of overhead along with them.

    I did notice no one has addressed the example of whether those 21 bits are wasted in file size or in RAM.

    Also, I learned how to make a particle engine from a tutorial site, and just recently thought about the fact that the author had said to have each particle own a texture. Well, if I have 5000 particles on screen (which I do occasionally) and they all share one picture, why do I have 5000 different particle textures when the engine could just own one and pass it to the renderer? While I would still have to make 5000 draw calls with the texture, I am guessing it would reduce file size, but the overhead would still be the same.

  10. #10
    Registered User
    Join Date
    Oct 2006
    Posts
    2,407
    Quote Originally Posted by Neo1 View Post
    i don't think there is a guarantee from the standard that a short is less than an int, so it might not get you a memory advantage in the end anyways.
    what the standard guarantees is this: sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

    long long is only part of the new C++11 standard, but existed as an extension in many compilers before.

  11. #11
    Registered User
    Join Date
    Apr 2006
    Posts
    2,032
    Memory is cheap, but the cache is small. So it is OK to have a large reservoir of infrequently accessed data in memory. But for things that are frequently accessed, you want to try to limit them to as few 64-byte chunks as you can.

    This does mean that there is some speedup to having arrays use char or short instead of int, but typical classes are less than 64 bytes, so shrinking them further is generally pointless. In those cases, using int is preferred, because that's the easiest thing for the CPU to handle.
    It is too clear and so it is hard to see.
    A dunce once searched for fire with a lighted lantern.
    Had he known what fire was,
    He could have cooked his rice much sooner.

  12. #12
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,893
    As for the why I am so interested in saving space: I am producing a game for the Indie xboxlive market and I have run into a few points where I can't do what I want with my classes because having 500 (bullets) on the screen at once starts to slow the gpu.
    I think it's rather unlikely that this has anything to do with memory. But you would have to profile your game to find out where it really spends its time.
    Anyway, the GPU generally deals in single-precision floats (32 bits) and there's little you can do about it. GPUs are, after all, pretty specialized in what they do. They have very impressive data throughput, but at the cost of generality.
    So generally speaking, changing from 32-bit integers to 16-bit integers is going to do absolutely nothing to the GPU because it won't ever see the difference.

  13. #13
    Bored Programmer
    Join Date
    Jul 2009
    Location
    Tomball, TX
    Posts
    407
    Well, one of the lectures covered that data can only be transferred error-free when the channel capacity is not exceeded. It was another lecture on Shannon's rules about data transfer. That's where I was thinking that reducing the amount of data required to make a graphical command through the system would reduce the chance of errors (or the appearance of lag) on the screen, but as you pointed out, without a profiler I have no idea what is actually lagging the system.

    I could be guessing that the GPU is overloaded when it's just waiting on the CPU to give it coordinates, and vice versa. I will have to look into how to profile an XNA Game Studio 4.0 project in VS 2010 Express. I should have just started at that solution, but the lectures really opened my mind to how data works and I wanted to apply the information lol.

  14. #14
    Registered User C_ntua's Avatar
    Join Date
    Jun 2008
    Posts
    1,853
    You first need to see what the APIs require, i.e. what actual data structure the graphical methods use to transfer data to the GPU. So if you were to change the color of a set of pixels and you are passing an array, this question only makes sense if you actually have a choice between int16 and int32 in the given API, which I would doubt. The point is that you most likely want to avoid transforming your data all the time; compressing the data to save space can mean that the GPU will be waiting for the CPU to decompress the data before it passes it.

  15. #15
    Registered User
    Join Date
    Jun 2005
    Posts
    6,290
    Quote Originally Posted by Elkvis View Post
    what the standard guarantees is this: sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
    Not true. In practice it usually works out that sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long). However, that is not actually a requirement in the standard.

    The standard only requires that the set of values that can be represented by a char is a subset of the values that can be represented by a short, which in turn is a subset of the values that can be represented by an int, and so on up the chain.
    Right 98% of the time, and don't care about the other 3%.
