Thread: Smart pointer class

  1. #31
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Code:
    	std::vector<BYTE> v;
    	v.push_back(0);
    	v.reserve(10000);
    	BYTE& p = v[0];
    	ZeroMemory(&p, 10000);
    Good or bad?
    Last edited by Elysia; 10-31-2007 at 05:55 PM.

  2. #32
    and the hat of sweating
    Join Date
    Aug 2007
    Location
    Toronto, ON
    Posts
    3,545
    Quote Originally Posted by Elysia View Post
    I haven't looked into the internals of the vector class, but does it actually put these elements next to each other, like a native array?
    Yes, see my last post.

  3. #33
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    I just missed it when replying.
    Anyway, I rewrote some code to use a vector instead of the memory manager. However, I shouldn't blame myself if I get lots of memory errors / faults, since the code was written with C-style buffers in mind, using pointers and keeping track of when resizing is needed and so on.
    Deprecated array support for the class. Will remove it if there are no problems. Now I'm thinking of going to bed because I'm tired.
    Fun discussing with you all, though.
    Last edited by Elysia; 10-31-2007 at 06:22 PM.

  4. #34
    Registered User
    Join Date
    Sep 2006
    Posts
    835
    Assuming ZeroMemory() is zeroing elements in an array-based fashion (p[0] = 0, p[1] = 0, ...), you should use resize instead of reserve, since reserve only sets v's capacity, not its size (or just use the constructor
    Code:
      std::vector<BYTE> v(10000);
    which creates v with size 10000, eliminating the need for the reserve). Personally I would have written this as
    Code:
      std::vector<BYTE> v;
      v.reserve(10000);
      for (int i=0; i<10000; ++i) v.push_back(0);
    since using push_back() exclusively takes care of the size automatically (you don't ever actually need reserve(), it's just an optimization and never affects correctness).

    Edit: I'm not sure whether the 1-argument constructor mentioned above is guaranteed to zero-initialize the elements the way you want - if it is, you could just write
    Code:
      std::vector<BYTE> v(10000);
    and be done with it. Anyone know?
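    For reference, a minimal sketch of the difference (BYTE written out as unsigned char so the snippet stands alone; in C++03 the one-argument constructor is equivalent to vector(n, T()), so the elements do come out as zero):
    Code:
    #include <vector>

    typedef unsigned char BYTE;

    int main()
    {
        std::vector<BYTE> a;
        a.reserve(10000);           // capacity only: size() is still 0, a[0]..a[9999] don't exist

        std::vector<BYTE> b;
        b.resize(10000);            // creates 10000 elements, each value-initialized to 0

        std::vector<BYTE> c(10000); // same effect as b: elements are copies of BYTE(), i.e. 0
    }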
    Last edited by robatino; 11-01-2007 at 02:37 AM.

  5. #35
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    Quote Originally Posted by Elysia View Post
    Code:
    void foo(const pp<int>& p)
    {
    	int n = *p; // No workie because operator * and operator T* isn't const.
    }
    So make them. My point was that you shouldn't be dealing with const T* in your class.

    Ah well, but as the saying goes: What isn't broken, don't fix. It can remain unsigned and the critical section will take care of the locking, so it shouldn't be a problem.
    True. I was just pointing out that the reduced maximum of LONG vs DWORD isn't realistically any obstacle.

    No. Good. Polymorphism is good. If I don't use Virtuals, I'll have to tear the class apart and re-write already existing code when creating new ones. THAT is bad.
    Wrong. You don't. You can use base classes for implementation sharing. shared_ptr and shared_array do it.
    Polymorphism is for when you actually have a reference to the base class. But an implementation-sharing base class should never be referenced. It should be an implementation detail.
    Polymorphism is for runtime decisions.

    BTW, there's no C++ safe copying method? memcpy and CopyMemory are obviously bad.
    std::copy() is as safe as anything. std::uninitialized_copy() if the target area doesn't have objects in it yet.
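    A small sketch of both, assuming plain BYTE data (the names src and raw are made up for illustration):
    Code:
    #include <algorithm>   // std::copy
    #include <memory>      // std::uninitialized_copy
    #include <new>
    #include <vector>

    typedef unsigned char BYTE;

    int main()
    {
        BYTE src[4] = { 1, 2, 3, 4 };

        std::vector<BYTE> v(4);              // elements already exist (and are zero)
        std::copy(src, src + 4, v.begin());  // safe: writes over already-constructed objects

        // uninitialized_copy is for raw, object-free storage:
        BYTE* raw = static_cast<BYTE*>(operator new(4));
        std::uninitialized_copy(src, src + 4, raw);  // constructs the BYTEs as it copies
        operator delete(raw);
    }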

    Tut-tut. m_PInfoTemp caches the lookup for pointer p so it doesn't have to look up twice or more if it's going to look up p again.
    But the mere fact that p is set ought to be enough information.
    Also, it again comes down to having a single place where lookup is done. And by that I don't mean just a single function, because you can still call the function multiple times. I mean that, in the flow chart of pointer construction/assignment, the lookup is only done once.

    It doesn't work like it should:
    Code:
    pp<CRect> foo()
    {
    	return NULL; // error C2664: 'CMemoryManager<T>::CMemoryManager(const CMemoryManager<T> &)' : cannot convert parameter 1 from 'int' to 'const CMemoryManager<T> &'
    
    }
    Remove explicit and it works. But it also works to return a bool >_< Which is why I used PPNULL in the first place.
    The standard idiom for getting a null smart pointer is to default construct it, i.e.
    return CMemoryManager<T>();

    Better safe than sorry.
    explicit only has an effect on constructors that can be called with one argument. The keyword is completely ignored otherwise. So you're not in any way safe.
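    A sketch of the rule under the C++03 rules in play here (pp_demo is a made-up name, not the class from this thread):
    Code:
    template <typename T>
    struct pp_demo
    {
        explicit pp_demo(T*) {}        // callable with one argument: explicit matters here
        explicit pp_demo(T*, int) {}   // can never be an implicit conversion anyway,
                                       // so the keyword changes nothing (pre-C++11)
    };

    void f(int* raw)
    {
        pp_demo<int> a(raw);       // fine: direct initialization
        // pp_demo<int> b = raw;   // error: copy-initialization blocked by explicit
        // pp_demo<int> c = 0;     // error: and this is why 'return NULL;' stops compiling
    }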
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  6. #36
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Quote Originally Posted by robatino View Post
    Assuming ZeroMemory() is zeroing elements in an array-based fashion (p[0] = 0, p[1] = 0, ...), you should use resize instead of reserve, since reserve only sets v's capacity, not its size (or just use the constructor
    Code:
      std::vector<BYTE> v(10000);
    which creates v with size 10000, eliminating the need for the reserve). Personally I would have written this as
    Code:
      std::vector<BYTE> v;
      v.reserve(10000);
      for (int i=0; i<10000; ++i) v.push_back(0);
    since using push_back() exclusively takes care of the size automatically (you don't ever actually need reserve(), it's just an optimization and never affects correctness).

    Edit: I'm not sure whether the 1-argument constructor mentioned above is guaranteed to zero-initialize the elements the way you want - if it is, you could just write
    Code:
      std::vector<BYTE> v(10000);
    and be done with it. Anyone know?
    I just wanted to copy something into the array and make sure there were no errors when deleting that array later (by std::vector of course).

    Quote Originally Posted by CornedBee View Post
    So make them. My point was that you shouldn't be dealing with const T* in your class.
    But I do not see why not. If the functions do not change the T*, why not const? If they try to assign it somewhere, obviously it should not be const. As far as I'm aware, const is mostly there to catch design-time bugs.

    Wrong. You don't. You can use base classes for implementation sharing. shared_ptr and shared_array do it.
    Polymorphism is for when you actually have a reference to the base class. But an implementation-sharing base class should never be referenced. It should be an implementation detail.
    Polymorphism is for runtime decisions.
    Then explain exactly how I can implement a thread-safe class without re-writing the code. The thread-safety is done in protected functions in the base class. Merely overriding them in the derived class won't get me anywhere, not unless I make them virtual.
    Without that, I don't see how to do it without re-writing the entire darn class, which is not something I want to do.

    But the mere fact that p is set ought to be enough information.
    Also, it again comes down to having a single place where lookup is done. And by that I don't mean just a single function, because you can still call the function multiple times. I mean that, in the flow chart of pointer construction/assignment, the lookup is only done once.
    I don't feel that's a good idea. To make the flow chart dynamic is better, since it allows me to do what must be done if it hasn't been done already. A dynamic flowchart also decreases the chance of, for example, accessing a NULL pointer (because the lookup function wasn't called).
    It's also what I've built the class upon - a dynamic flowchart. Determine what needs to be done and do it. So simple, yet so effective.

    The standard idiom for getting a null smart pointer is to default construct it, i.e.
    return CMemoryManager<T>();
    Sounds better to me to use PPNULL.

    explicit only has an effect on constructors that can be called with one argument. The keyword is completely ignored otherwise. So you're not in any way safe.
    And not even a warning from the compiler. How selfish of it.

  7. #37
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    Quote Originally Posted by Elysia View Post
    But I do not see why not. If the functions do not change the T*, why not const?
    Because it's not the pointer's responsibility to ensure that the pointee doesn't change. Or in other words, you don't need const T inside your pointer class because you have the design error the moment you dereference the pointer, not the moment you change it. The only exception is the pointer's own operator* overload.
    It comes down to the pointer behaving like a raw pointer where it matters. A raw pointer doesn't care about whether its pointee is const or not. That's for the user of the pointer to worry about. The pointer just points and carries the constness along unchanged.
    Your pointer changes the constness, and that's bad.
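    A sketch of the convention being described (the same one shared_ptr follows); the class name is hypothetical:
    Code:
    template <typename T>
    class ptr_sketch
    {
        T* p_;   // always T*, never const T* -- ptr_sketch<const T> covers const pointees
    public:
        explicit ptr_sketch(T* p = 0) : p_(p) {}
        T& operator*() const { return *p_; }    // const member functions, yet they hand out T&,
        T* operator->() const { return p_; }    // exactly like a raw T* const would
    };

    // ptr_sketch<int>        behaves like int*
    // ptr_sketch<const int>  behaves like const int*
    // const ptr_sketch<int>  behaves like int* const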


    Then explain exactly how I can implement a thread-safe class without re-writing the code. The thread-safety is done in protected functions in the base class. Merely overriding them in the derived class won't get me anywhere, not unless I make them virtual.
    Without that, I don't see how to do it without re-writing the entire darn class, which is not something I want to do.
    Gimme an hour (but it may be interrupted by lunch), and I'll show you.

    I don't feel that's a good idea. To make the flow chart dynamic is better, since it allows me to do what must be done if it hasn't been done already.
    A static flowchart approach allows you to construct reliable flowcharts where these checks aren't necessary except as debug-mode asserts. Thus, your pointer is reliable without the runtime overhead of checking whether it's done its thing already.
    Your approach is basically, write potentially buggy code, but instead of finding the bugs and solving them, you look for the effects of the bugs and solve those. You're curing the symptoms, not the illness.

    A dynamic flowchart also decreases the chance of, for example, accessing a NULL pointer (because the lookup function wasn't called).
    A static flowchart allows you to analyze at design time if a NULL pointer can be accessed, without any runtime overhead. Your dynamic approach makes this considerably harder.
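    A sketch of what "debug-mode asserts only" can look like in practice (checked_ptr is a made-up name):
    Code:
    #include <cassert>

    template <typename T>
    class checked_ptr
    {
        T* p_;
    public:
        explicit checked_ptr(T* p = 0) : p_(p) {}
        T& operator*() const
        {
            assert(p_ != 0);   // loud in debug builds, compiled away under NDEBUG:
            return *p_;        // no release-build overhead, no "fix it up" logic anywhere
        }
    };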

    It's also what I've built the class upon - a dynamic flowchart. Determine what needs to be done and do it. So simple, yet so effective.
    No, it's neither simple nor effective. It's complex in implementation, and leads to code repetition and unnecessary performance overhead, while not increasing reliability.

    Sounds better to me to use PPNULL.
    *shrug* Matter of habit, I suppose.
    Note that the idiom is consistent, though:
    Code:
    typedef int* pint;
    pint p = pint();
    assert(p == 0);
    And not even a warning from the compiler. How selfish of it.
    Well, why would it warn you? Compilers only generate warnings for things that are potentially problematic.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  8. #38
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Quote Originally Posted by CornedBee View Post
    Gimme an hour (but it may be interrupted by lunch), and I'll show you.
    Sure. Looking forward to that.

    A static flowchart approach allows you to construct reliable flowcharts where these checks aren't necessary except as debug-mode asserts. Thus, your pointer is reliable without the runtime overhead of checking whether it's done its thing already.
    Your approach is basically, write potentially buggy code, but instead of finding the bugs and solving them, you look for the effects of the bugs and solve those. You're curing the symptoms, not the illness.

    A static flowchart allows you to analyze at design time if a NULL pointer can be accessed, without any runtime overhead. Your dynamic approach makes this considerably harder.

    No, it's neither simple nor effective. It's complex in implementation, and leads to code repetition and unnecessary performance overhead, while not increasing reliability.
    I cannot agree. From what I see, it makes the code more robust and safe at the expense of a little more overhead.

    Here's a good one:
    Know any way to restrict this polymorphism?
    Code:
    pp<int>* pMemoryManagers = new tcpp<int>[nRange];
    tcpp inherits from pp, so the compiler allows it, but I don't like the idea of doing that.
    There also seems to be a problem with
    Code:
    pMemoryManagers[i] = pPointers[i];
    pMemoryManagers is tcpp and pPointers is int*. However, as tcpp derives from CMemoryManager (pp), it should have such an operator because pp does:
    Code:
    void operator = (T* pmm) throw(...)
    Which is also public.
    Or maybe it's true that operators aren't inherited.
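    What actually bites here is the hiding rule: the derived class's implicitly declared copy assignment hides every operator= from the base during name lookup. A sketch with stand-in names (base_ptr/derived_ptr instead of pp/tcpp):
    Code:
    struct base_ptr
    {
        void operator=(int*) { /* store the raw pointer */ }
    };

    struct derived_ptr : base_ptr
    {
        // Without this using-declaration, derived_ptr's implicitly declared copy
        // assignment hides base_ptr::operator=(int*), and 'd = raw;' won't compile.
        using base_ptr::operator=;
    };

    void g(int* raw)
    {
        derived_ptr d;
        d = raw;   // OK thanks to the using-declaration
    }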

    Here's another thing: when running out of memory, it's not possible to create and throw an exception. I could create a single exception object as a class member, but this makes the class bigger, of course.
    A global object might require locking, but this would fail again when running out of memory. So what's one to do, really? Throw a constant?
    Last edited by Elysia; 11-01-2007 at 08:39 AM.

  9. #39
    Registered User
    Join Date
    Sep 2006
    Posts
    835
    > I just wanted to copy something into the array and make sure there were no errors when deleting that array
    > later (by std::vector of course).

    With vectors, you have to make sure that the size (not just the capacity) is at least N+1 before you access v[N] (the (N+1)-th element). Unfortunately, since bounds checking isn't done by default, using reserve() to set the capacity only tends to prevent crashing, leading one to believe that one's program is correct (I made this mistake when first learning to use vectors). Remember, reserve() NEVER affects correctness. If a program is broken without it, then it's broken with it. If you're using GCC, try compiling with -D_GLIBCXX_DEBUG to do extra run-time checks including bounds checking, and watch the "fixed" problem reappear.

    http://gcc.gnu.org/onlinedocs/libstdc++/debug.html

  10. #40
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Using VC++ (2005).

  11. #41
    Registered User
    Join Date
    Sep 2006
    Posts
    835
    Quote Originally Posted by Elysia View Post
    Using VC++ (2005).
    I'm not familiar with that compiler, but you can use at() instead of [] to get run-time bounds checking (for example, v.at(N) instead of v[N]). So, without the extra checks, this code will probably not complain:
    Code:
    #include <vector>
    
    int main() {
      std::vector<int> v; // creates v with size 0
      v.reserve(1);
      v[0] = 0; // this element doesn't exist, since the size is still 0
    }
    but if you replace v[0] with v.at(0), it will definitely complain.
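    The at() version, for comparison (this one is runnable as-is and reports the problem instead of silently scribbling):
    Code:
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main()
    {
        std::vector<int> v;   // size 0
        v.reserve(1);         // capacity 1, size still 0
        try
        {
            v.at(0) = 0;      // out of range: throws instead of corrupting memory
        }
        catch (const std::out_of_range& e)
        {
            std::cout << "caught: " << e.what() << '\n';
        }
    }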

    Edit: Forgot std::. BTW, definitely try to find out if your compiler has something analogous to -D_GLIBCXX_DEBUG, if you don't want to use at() all the time.
    Last edited by robatino; 11-01-2007 at 09:36 AM.

  12. #42
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    Quote Originally Posted by Elysia View Post
    I cannot agree. From what I see, it makes the code more robust and safe at the expense of a little more overhead.
    Complex code is never more robust than simple code in the long run.

    Know any way to restrict this polymorphism?
    Code:
    pp<int>* pMemoryManagers = new tcpp<int>[nRange];
    Uh, oh, dangerous. You're right in that you want to restrict it, but unfortunately, it's not fully possible. It's simply a lesson that C++ programmers have to learn that array new and derivation don't mix. So don't do it, and educate any programmer you find doing it. Item 3 in More Effective C++, Item 89 in C++ Gotchas.
    You can partially protect yourself by declaring a class-specific array new and not implementing it. The code will fail at link time.
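    A sketch of that partial protection (the class name is made up; the point is the declared-but-never-defined array operator new):
    Code:
    #include <cstddef>

    template <typename T>
    class no_array_new
    {
    public:
        // Declared but deliberately never defined: scalar new still works,
        // but any 'new no_array_new<T>[n]' compiles and then fails at link time.
        static void* operator new[](std::size_t);
        static void operator delete[](void*);
    };

    int main()
    {
        no_array_new<int>* p = new no_array_new<int>;   // fine: uses the global scalar new
        delete p;
        // new no_array_new<int>[10];   // would be rejected by the linker
    }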

    Here's another thing: when running out of memory, it's not possible to create and throw an exception.
    Depends on the kind of memory you run out of. If you run out of stack memory, your program is killed, and that's it. If you run out of heap memory, new will throw a bad_alloc. So obviously it is possible to throw, at least to a point.
    Basically, if you run out of memory, you've got bigger fish to fry than what kind of exception to throw. Let new throw its bad_alloc.
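    A minimal demonstration of that (note: on systems with lazy overcommit the process may be killed before new ever gets to throw):
    Code:
    #include <iostream>
    #include <new>

    int main()
    {
        try
        {
            for (;;)
                new char[64 * 1024 * 1024];   // leak on purpose until the heap gives up
        }
        catch (const std::bad_alloc&)
        {
            std::cout << "operator new threw std::bad_alloc\n";
        }
    }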
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  13. #43
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    OK, here's my smart pointer suite. It is syntax-checked, but I haven't tested it in any way, and it may still fail to compile when actually instantiated. Also, no implementation of atomic_add is provided, as this is highly system-specific. (InterlockedExchangeAdd() should be a good base for implementing it.)

    This provides four smart pointer classes. They are the four combinations of two basic decisions:
    1) Manage an array or a single object?
    2) Be thread-safe or not?

    There are no hooks to implement your map lookup. I remain firmly opposed to it and thus didn't accommodate it in my design.

    However, there are hooks to implement weak pointers. Weak pointers maintain a link to the object, but do not increase the reference count. If the last strong pointer goes away, the object is deleted, even if weak pointers to it remain. The weak pointers know of the object's destruction.
    In Boost's implementation, you cannot dereference weak pointers, but you can create strong pointers from them. Trying to create a strong pointer from an invalidated weak pointer throws an exception. The reason for this design is that it is unsafe to access the pointee if you don't have control over a strong pointer to it. If the weak pointer allowed direct access, the object could be deleted from under your feet.
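    Boost's own weak_ptr shows the behaviour being described: lock() gives you a (possibly empty) strong pointer, and constructing a shared_ptr from an expired weak_ptr throws:
    Code:
    #include <boost/shared_ptr.hpp>
    #include <boost/weak_ptr.hpp>
    #include <iostream>

    int main()
    {
        boost::shared_ptr<int> strong(new int(42));
        boost::weak_ptr<int> weak(strong);

        boost::shared_ptr<int> again = weak.lock();   // promote to a strong pointer
        std::cout << *again << '\n';                  // safe: we hold a strong pointer

        strong.reset();
        again.reset();                                // last strong pointer gone, int deleted

        try
        {
            boost::shared_ptr<int> dead(weak);        // weak_ptr has expired...
        }
        catch (const boost::bad_weak_ptr&)            // ...so this constructor throws
        {
            std::cout << "bad_weak_ptr\n";
        }
    }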


    The most complicated thing about it is, in my opinion, the CRTP. However, it's a pattern, which means that once you understand it, it provides no complexity overhead. In addition, it provides zero space or runtime overhead in the program itself compared to directly implementing the functionality. Everything is fully inlineable.
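    The attachment itself is gone (see the edit below), but here is a rough sketch of what a CRTP-plus-policy design along these lines can look like. All the names (counted_base, object_ptr, array_ptr, single_threaded, multi_threaded, atomic_add) are made up for illustration, and atomic_add is left unimplemented, as in the post:
    Code:
    // Threading policies: add() adjusts a count and returns the new value.
    long atomic_add(long volatile* target, long value);   // system-specific, declaration only

    struct single_threaded
    {
        static long add(long& count, long value) { return count += value; }
    };

    struct multi_threaded
    {
        static long add(long& count, long value) { return atomic_add(&count, value) + value; }
    };

    // CRTP base: all the reference counting lives here, but nothing is virtual.
    // The derived class supplies a static dispose() (delete vs delete[]) that is
    // resolved at compile time, and the base is never used polymorphically.
    template <typename Derived, typename T, typename Threading>
    class counted_base
    {
        T*    ptr_;
        long* count_;

        counted_base& operator=(const counted_base&);   // not implemented: no assignment in this sketch

    protected:
        explicit counted_base(T* p) : ptr_(p), count_(new long(1)) {}
        counted_base(const counted_base& other) : ptr_(other.ptr_), count_(other.count_)
        {
            Threading::add(*count_, 1);
        }
        ~counted_base()   // protected and non-virtual: the base is an implementation detail
        {
            if (Threading::add(*count_, -1) == 0)
            {
                Derived::dispose(ptr_);
                delete count_;
            }
        }

    public:
        T* get() const { return ptr_; }
    };

    // The "single object" / "array" decision: two thin derived classes.
    template <typename T, typename Threading>
    class object_ptr : public counted_base<object_ptr<T, Threading>, T, Threading>
    {
    public:
        explicit object_ptr(T* p = 0)
            : counted_base<object_ptr<T, Threading>, T, Threading>(p) {}
        static void dispose(T* p) { delete p; }
    };

    template <typename T, typename Threading>
    class array_ptr : public counted_base<array_ptr<T, Threading>, T, Threading>
    {
    public:
        explicit array_ptr(T* p = 0)
            : counted_base<array_ptr<T, Threading>, T, Threading>(p) {}
        static void dispose(T* p) { delete[] p; }
    };

    // The four combinations the post describes:
    //   object_ptr<T, single_threaded>   object_ptr<T, multi_threaded>
    //   array_ptr<T, single_threaded>    array_ptr<T, multi_threaded>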

    Edit: deleted attachment, has a bug. Added new version a few posts down.
    Last edited by CornedBee; 11-01-2007 at 10:41 AM.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  14. #44
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    It's a shame, because I wanted to throw my own specific exception class, but when running out of memory it just doesn't seem to work.
    Newest header file attached.
    Changes:
    - Always thread safe.
    - One time-critical and one non-time-critical class. Uses virtual functions to determine whether or not to call the time-critical functions.
    - Not handling memory errors, because if you run out of memory the whole program will likely crash.
    - Fixed some overloading.
    - Removed array support.
    - Removed CMemoryManagerDefault.
    - Reinstated PPNULL. It is no longer possible to check against NULL or to construct from NULL (all checks should be done against PPNULL).
    Last edited by Elysia; 11-01-2007 at 10:27 AM.

  15. #45
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    It probably does, but you're probably following MFC convention and allocating the exception from the heap, right?
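    The difference in a nutshell (CMyMemoryError is a made-up stand-in for the custom exception class being discussed):
    Code:
    #include <iostream>

    class CMyMemoryError {};

    void mfc_style()
    {
        // MFC convention: the exception object itself comes from the heap --
        // exactly the allocation that may fail when memory is already exhausted.
        // throw new CMyMemoryError;   // caught as CMyMemoryError*, must be deleted by the handler
    }

    void standard_style()
    {
        throw CMyMemoryError();   // thrown by value: no explicit 'new' in user code
    }

    int main()
    {
        try { standard_style(); }
        catch (const CMyMemoryError&) { std::cout << "caught by reference\n"; }
    }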
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law
