Thread: Entering size of an array

  1. #16
    Lurking whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    9,613
    Memory leaks can be caused by incredibly innocent things, like throwing exceptions from constructors. Say new throws in the constructor: your object's destructor never gets called. Members that were already fully constructed do get their destructors called, though.
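    A minimal sketch of that situation, with a made-up Widget class: the raw pointer allocated in the initializer list leaks because ~Widget() never runs, while the std::string member, being fully constructed, is destroyed properly.

    Code:
    #include <iostream>
    #include <new>
    #include <string>

    struct Widget
    {
        std::string name;   // a member with a destructor: it WILL be destroyed
        int* buffer;        // raw pointer: nothing frees it if the constructor throws

        Widget() : name("widget"), buffer(new int[100])
        {
            throw std::bad_alloc();   // stand-in for a later new that fails
        }

        ~Widget() { delete[] buffer; }   // never runs for the half-constructed object
    };

    int main()
    {
        try
        {
            Widget w;
        }
        catch (const std::bad_alloc&)
        {
            std::cout << "constructor threw: name was destroyed, buffer leaked\n";
        }
    }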

    C++ is bad, sometimes.
    Last edited by whiteflags; 02-15-2011 at 09:32 PM.

  2. #17
    Registered User ~Kyo~'s Avatar
    Join Date
    Jun 2004
    Posts
    320
    You can and will run into that in a sufficiently large application for your platform. I guess that would be a reason to use vector; I could see hitting an upper limit in a few situations, but not many, at least on PC. Then again, if vector's constructor throws the exception, its destructor never gets called either, so who knows...

  3. #18
    Lurking whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    9,613
    Actually the allocator's destructor would never be called, since the STL uses allocator objects, and I'm not privy to whether that's important or not. ~vector() will be called.

  4. #19
    Registered User ~Kyo~'s Avatar
    Join Date
    Jun 2004
    Posts
    320
    I would assume on some level there is a try...catch statement to do the correct thing; either people code it themselves or they use the class. It still might be more overhead, but then again I think vectors keep something like *next/*previous pointers and add/remove functions for elements. Then again, that adds functionality to the class if you are adding and removing objects. Generally, when I am storing into an int **grid I wouldn't use vector, since that grid is there for the whole class instance and does not change, so I don't _NEED_ the extra functions or pointers for each element that a vector would allocate. I will, however, put a try/catch(...) block around the new call(s) in the class to catch allocation failures and free whatever was already allocated. Using vector, to me, solely depends on how dynamic the code needs to be. I might only save a little space, but then again there are parts of my programs that can grow to be very large, where in theory at least a few more ints' worth of data can be saved per class instance (depending on where vector was avoided and how many times).

    Code:
    try
    {
        int **grid = new int*[x];
        for(int z = 0; z < x; z++)
            grid[z] = new int[y];
    }
    catch (bad_alloc&)
    {
        cout << "Error allocating memory." << endl;
    }
    Last edited by ~Kyo~; 02-15-2011 at 11:23 PM. Reason: Now with example code!

  5. #20
    Lurking whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    9,613
    Well if you want my opinion on it, pointers don't break C++, C++ broke pointers.

    Yay.

    But programmers are a battered wife to C++ and it's a lot easier to try to do things its way. C++'s own features will foul up your memory management plans.

    By the way, I'm terribly wrong about what I said in the last post, but I'm tired, and the standard is incomprehensible. With the myriad of solutions available, they probably picked one. I just have no idea what it is and I don't care. (One such solution is having your constructors do as little as possible, or at least avoid using new, but that isn't always possible.)

  6. #21
    Registered User ~Kyo~'s Avatar
    Join Date
    Jun 2004
    Posts
    320
    Everything depends on the application since, yes, there are plenty of ways to do everything, be it through classes or through lower-level code. I would use asm, but I fear I would never get anywhere within a reasonable amount of time, thus C and C++.

  7. #22
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Quote Originally Posted by ~Kyo~ View Post
    Overhead is overhead. I have programmed for IC chips with only a few KB worth of storage, mostly written in asm mind you, but pointers work just as well in a class as a vector would, and with a proper destructor you don't run into memory leaks unless your program already has them.
    But a PC is not an IC chip. When working with PCs, use vector. When working with chips, avoid vectors unless the overhead is acceptable. There is no reason to generally avoid it.

    Re constructors throwing: basically, make sure to use RAII so that any resources acquired before the constructor completes are freed. It makes sense not to call the destructor when an object is not fully constructed.
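    A minimal sketch of that idea, with a made-up Widget class: give ownership of the allocation to a member that cleans up after itself (std::unique_ptr here), so even if the constructor body throws, the already-constructed member frees the memory although ~Widget() never runs.

    Code:
    #include <cstddef>
    #include <memory>
    #include <stdexcept>

    struct Widget
    {
        std::unique_ptr<int[]> buffer;   // RAII member owns the allocation

        explicit Widget(std::size_t n)
            : buffer(new int[n])         // acquired before the body runs
        {
            // If something throws here (stand-in for any later failure),
            // ~Widget() is not called, but buffer's destructor is,
            // so the array is freed anyway.
            if (n == 0)
                throw std::invalid_argument("empty widget");
        }
        // No user-written destructor needed: the member releases the memory.
    };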
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  8. #23
    Lurking whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    9,613
    Re constructors throwing: basically, make sure to use RAII so that any resources acquired before the constructor completes are freed. It makes sense not to call the destructor when an object is not fully constructed.
    Basically what I said, but I wanted to make the point that by itself, a destructor isn't going to save you from memory leaks like I thought Kyo was saying.

  9. #24
    Registered User ~Kyo~'s Avatar
    Join Date
    Jun 2004
    Posts
    320
    You're still saving overhead if you are not resizing, that is my point. If you like vector, which apparently you love, then you can use it, but it would be more efficient space-wise to use:
    Code:
    ...
    char *mycharptr1 = NULL;
    char *mycharptr2 = NULL;
    char *mycharptr3 = NULL;
    
    try
    {
       mycharptr1 = new char[1000];
       mycharptr2 = new char[1000];
       mycharptr3 = new char[1000];
    }
    catch(bad_alloc&)
    {
       cout<<"Unable to alloc memory to mycharptrs"<<endl;
       if(mycharptr1)delete mycharptr1;
       if(mycharptr2)delete mycharptr1;
       if(mycharptr3)delete mycharptr1;
       //return or what have you for error reporting.
    }
    ...
    Just because something is new and easy doesn't mean it is always the right choice.

  10. #25
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Quote Originally Posted by ~Kyo~ View Post
    Just because something is new and easy doesn't mean it is always the right choice.
    The same can be said about your manual approach. It's error-prone and results in a lot more code. Why do it when you don't need it? Why throw away code that has been tested a lot and is known to be working and efficient (within the bounds of flexibility)?
    If you are just going to ignore the standard library, then you're missing out on half the potential of the language.

    Btw, NULL should be nullptr, and it's safe to delete a null pointer, so no if statements are needed.
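    A quick illustration of that rule (nothing to do with your actual code, just the point about the checks): deleting a null pointer is a well-defined no-op.

    Code:
    #include <iostream>

    int main()
    {
        char* p = nullptr;   // never allocated
        delete p;            // perfectly fine: deleting a null pointer does nothing

        std::cout << "no null check needed before delete\n";
    }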

    Oh, and you have undefined behavior, too. Can you spot where?
    Last edited by Elysia; 02-17-2011 at 05:32 AM.

  11. #26
    Master Apprentice phantomotap's Avatar
    Join Date
    Jan 2008
    Posts
    5,108
    Code:
    try
    {
        int **grid = new int*[x];
        for(int z = 0; z < x; z++)
            grid[z] = new int[y];
    }
    catch (bad_alloc&)
    {
        cout << "Error allocating memory." << endl;
    }
    Code:
    char *mycharptr1 = NULL;
    char *mycharptr2 = NULL;
    char *mycharptr3 = NULL;
    
    try
    {
       mycharptr1 = new char[1000];
       mycharptr2 = new char[1000];
       mycharptr3 = new char[1000];
    }
    catch(bad_alloc&)
    {
       cout<<"Unable to alloc memory to mycharptrs"<<endl;
       if(mycharptr1)delete mycharptr1;
       if(mycharptr2)delete mycharptr1;
       if(mycharptr3)delete mycharptr1;
       //return or what have you for error reporting.
    }
    Is that really how your code looks?

    If so, you have no idea what you are doing with regards to exceptions.

    Just because something is new and easy doesn't mean it is always the right choice.
    How out of sorts are you that you think the STL is new?




    Memory leaks can be caused by incredibly innocent things, like throwing exceptions from constructors. Say new throws in the constructor: your object's destructor never gets called.
    If a constructor leaks a resource, it is your constructor that is broken.

    [Edit]I know the context, and I know you were making some point about a statement made by Kyo; I just don't know what it was.[/Edit]




    But programmers are a battered wife to C++ and it's a lot easier to try to do things its way.
    O_o

    But I mostly just dropped by to say that I'm ashamed of myself for giggling about that.

    Soma ;_;

  12. #27
    C++ Witch laserlight's Avatar
    Join Date
    Oct 2003
    Location
    Singapore
    Posts
    28,413
    Quote Originally Posted by ~Kyo~
    Generally, when I am storing into an int **grid I wouldn't use vector, since that grid is there for the whole class instance and does not change, so I don't _NEED_ the extra functions or pointers for each element that a vector would allocate.
    Unless your grid is not rectangular (which would make "grid" a rather misleading name), or you cannot afford to allocate so large a contiguous block of memory, you can accomplish this with:
    Code:
    std::vector<int> data(x * y);
    std::vector<int*> grid(x);
    for (std::size_t i = 0; i < x; ++i)
    {
        grid[i] = &data[i * y];
    }
    This way you avoid the pointless storage of capacity and size for each inner vector, as well as the expensive calls to new[] in the loop.
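    For what it's worth, a tiny self-contained usage sketch of the above, with made-up dimensions: indexing looks exactly like the raw int** version, and nothing needs an explicit delete[].

    Code:
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main()
    {
        const std::size_t x = 4, y = 5;            // example dimensions
        std::vector<int> data(x * y);
        std::vector<int*> grid(x);
        for (std::size_t i = 0; i < x; ++i)
            grid[i] = &data[i * y];

        grid[2][3] = 42;                           // same syntax as with int **grid
        std::cout << grid[2][3] << '\n';           // prints 42
    }   // data and grid clean up after themselves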
    Quote Originally Posted by Bjarne Stroustrup (2000-10-14)
    I get maybe two dozen requests for help with some sort of programming or design problem every day. Most have more sense than to send me hundreds of lines of code. If they do, I ask them to find the smallest example that exhibits the problem and send me that. Mostly, they then find the error themselves. "Finding the smallest program that demonstrates the error" is a powerful debugging tool.
    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  13. #28
    Registered User ~Kyo~'s Avatar
    Join Date
    Jun 2004
    Posts
    320
    I believe I had one compiler a few years back that would have issues if I deleted a NULL pointer. Then again, I had Bloodshed giving issues compiling a lot of source code at once as well. I switched to Code::Blocks since then and it has been fine. So my lesson back then was to start everything with a NULL check and then delete. Yeah, the copy and paste shoulda been 1 2 3 and not 1 1 1... Not the end of the world.

    It is standard practice to put any calls to new in a try block in case it does throw an exception. If you don't and it does throw, well, you're kinda SOL since the program will more than likely die horribly. I guess you would have preferred "newer" and not new.

    @laserlight

    All I did was choose a name for an example. To my knowledge you could change that to anything and have the code still work.

    Also, to my knowledge vector is fine unless you expand it a lot, in which case it does end up calling new and copying itself. In theory you COULD define larger bounds on a <datatype here> ** and have the same flexibility. Vector uses new, I am very much sure of that, being dynamic and all. Just because you don't see the code when you use it does not mean it is never called.

    And since everyone missed my point:

    You're still saving overhead if you are not resizing, that is my point

    http://www.cplusplus.com/reference/stl/vector/
    Internally, vectors (like all containers) have a size, which represents the number of elements contained in the vector. But vectors also have a capacity, which determines the amount of storage space they have allocated, and which can be equal to or greater than the actual size. The extra amount of storage allocated is not used, but is reserved for the vector in case it grows. This way, the vector does not have to reallocate storage each time it grows, but only when this extra space is exhausted and a new element is inserted (which should only happen with logarithmic frequency in relation to its size).
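    To illustrate what that quote means in practice (and why the resizing cost is avoidable), here is a small sketch: capacity is allocated in chunks ahead of size, and reserve() pays for one allocation up front so that filling the vector triggers no reallocation or copying. Exact capacity values vary by implementation.

    Code:
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v;
        v.reserve(1000);                      // one allocation up front

        std::cout << "size: " << v.size()
                  << "  capacity: " << v.capacity() << '\n';   // 0 and >= 1000

        for (int i = 0; i < 1000; ++i)
            v.push_back(i);                   // no reallocation, no copying

        std::cout << "size: " << v.size()
                  << "  capacity: " << v.capacity() << '\n';   // 1000 and >= 1000
    }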
    Last edited by ~Kyo~; 02-18-2011 at 12:11 AM.

  14. #29
    Lurking whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    9,613
    Frankly, if I'm to take that quote seriously, you are actually saving space, not overhead. Overhead is basically a run-time performance cost you incur using ... almost anything. Arrays incur it when they use new[], and they would use it as much as vector does; or they could incur function call overhead (from setting up a stack frame and pushing things onto it) if you call, say, a copy routine. It can be a good idea to minimize both wasted space and excessive overhead cost, but you are basically complaining about allocating space for a couple of integers and room to grow.

    In that case I hope you definitely intend to use all 3000 characters and that you definitely don't need to keep track of the array size. Otherwise, even the manual solution is just as inefficient as vector is.

    Besides, if that is really your problem with vector, you should shrinkwrap them:

    vector<T>(v.begin(), v.end()).swap(v); // where T is v's element type
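    And a runnable sketch of that swap trick, using a throwaway vector<int> as the example: copying the elements into a right-sized temporary and swapping trims the excess capacity.

    Code:
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v;
        v.reserve(1000);                 // lots of spare capacity
        for (int i = 0; i < 10; ++i)
            v.push_back(i);

        std::cout << "before: capacity " << v.capacity() << '\n';

        std::vector<int>(v.begin(), v.end()).swap(v);   // the shrink-wrap idiom

        std::cout << "after:  capacity " << v.capacity() << '\n';   // typically 10
    }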
    Last edited by whiteflags; 02-18-2011 at 01:18 AM.

  15. #30
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    I perfectly get your point, Kyo. But you seem to have missed mine. My point is that on a PC, it doesn't matter if you have 4 or 8 bytes more memory overhead per vector. On a chip where memory is limited, it might be. But not on a PC.
    So do you imply that you would avoid all the functionality in the standard library just because it has a small overhead? That's like dissing half the language functionality. It doesn't make sense if the overhead has little or no impact on your application.

    Also, a compiler that cannot delete a null pointer is broken. Broken in that it is not standard compliant. Suggest you toss it into the trash bin.

    And there is also the question of where you put your catch statements, eh? Around every throw, or just where you intend to handle the error?
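    A small sketch of the second option (make_grid is just a made-up helper): let bad_alloc propagate out of the code that allocates, and catch it once at the level that can actually decide what to do about the failure, instead of wrapping every new in its own try block.

    Code:
    #include <cstddef>
    #include <iostream>
    #include <new>
    #include <vector>

    // Hypothetical helper: it just allocates and does not try to handle failure itself.
    std::vector<int> make_grid(std::size_t x, std::size_t y)
    {
        return std::vector<int>(x * y);   // may throw std::bad_alloc
    }

    int main()
    {
        try
        {
            std::vector<int> grid = make_grid(10000, 10000);
            std::cout << "allocated " << grid.size() << " ints\n";
        }
        catch (const std::bad_alloc&)
        {
            // One place that decides what "out of memory" means for the program.
            std::cerr << "Error allocating memory.\n";
            return 1;
        }
    }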
