You're confusing const pointers with pointers-to-const. It's the same for smart pointers as for raw pointers: there's a difference between "CFoo const *" and "CFoo * const", and there's a difference between "smart_pointer<CFoo const>" and "const smart_pointer<CFoo>".
Originally Posted by Elysia
Look at it this way: on a 32-bit system, your class is currently probably 7*sizeof(void*) = 28 bytes large. To have the refcount rise to 2^31 (the overflow point), you'd need 2^31 smart pointers occupying 2^31 * 28 bytes, that's 2^33 * 7 bytes (56 GiB), and that's far more than a 32-bit CPU is capable of addressing. So yes, you do know.
Shrug. You never know.
In the 64-bit case, it might work in theory. Your class is 56 bytes large (that's huge, by the way - another reason to get rid of all that runtime stuff; a refcounting smart pointer should be 16 bytes large), so 2^31 copies occupy 2^34 * 7 bytes, about 112 GiB. That does fit into the 2^40 bytes an Athlon64 can physically address.
However, the stack is still limited way below that even in 64-bit systems. And these smart pointers are intended for stack usage - you're actually trying to block heap allocation.
Bzuh? If you followed my advice, you'd be using Boost's smart pointers and not writing a single line of code.
I'm not going to have 1000 classes that do the same thing. Splitting the class would be difficult or very tiresome, as I'd have to rewrite a lot of code (since calling Release calls InternalDelete, which calls InternalDetach, which locks, deletes the memory and removes the pointer from the map).
No. Bad. You can use base classes for implementation sharing, but no virtuals. Virtuals are for runtime decisions. You should get rid of those.
Ah, unless I make the functions virtual and override them with versions that don't lock and don't look up pointers.
1) You shouldn't be assigning a lot of pointers to the class a lot of the time. If you do, you've got a logical problem in your code. You're busy juggling pointers when you should be doing work.
Not nonsense in my view. Try assigning a lot of pointers to the class a lot of the time and you'll see a performance hit.
2) What the hell are you doing that takes so long? Oh, I know. You're looking stuff up in your various global maps.
My point exactly.
No, that was a bug. Initially I just wanted to delete m_pInfo, but later realized that I also had to delete m_pInfo->pdwRefCount, so I made a new function to handle that and called it, but forgot to remove the original delete m_pInfo.
It's a necessary consequence of encapsulation. You don't know the internals of a class. You can't know if it's safe to copy it to someplace else. The class might be holding a pointer to its own members. (Objects using the small buffer optimization often do that.) You have to call the class's copy constructor to create an instance somewhere else. And because realloc is a C function, it doesn't know that. It doesn't even know that such things as copy constructors exist.
That's really stupid. I don't see why I can't use realloc, though. Even though you disagree, it can work: I have used it in the past and it has worked fine. I don't know the implications of doing it, however.
For that matter, you don't know if operator new is even implemented in terms of malloc. An implementation is free not to do so. A programmer is free to override the implementation's default implementation. Calling realloc assumes that malloc allocated the block in the first place.
It's pretty much a time bomb waiting to blow up in your face.
It simply doesn't work that way. Looking up twice could be avoided by passing arguments, I agree, but caching wouldn't work that way.
No chance. The standard won't break backwards compatibility, and there's actually code out there that relies on these conversions.
Now that's something that should make it into the standard: no weird automatic conversions. If a function takes a bool, then it takes a bool, end of story. Not some int or any other kind of data type that the compiler can think of making up.