Good or bad?
Code:
std::vector<BYTE> v;
v.push_back(0);
v.reserve(10000);
BYTE& p = v[0];
ZeroMemory(&p, 10000);
Last edited by Elysia; 10-31-2007 at 05:55 PM.
I just missed it when replying.
Anyway, I rewrote some code to use a vector instead of the memory manager. However, I shouldn't blame myself if I get lots of memory errors or faults, since the code was written with C-style buffers in mind - using pointers, keeping track of when to resize, and so on.
Deprecated array support for the class. Will remove it if there are no problems. Now I'm thinking of going to bed because I'm tired.
Fun discussing with you all, though.
Last edited by Elysia; 10-31-2007 at 06:22 PM.
Assuming ZeroMemory() is zeroing elements in an array-based fashion (p[0] = 0, p[1] = 0, ...), you should use resize() instead of reserve(), since reserve() only sets v's capacity, not its size (or just use the constructor
Code:
std::vector<BYTE> v(10000);
which creates v with size 10000, eliminating the need for the reserve()). Personally I would have written this as
Code:
std::vector<BYTE> v;
v.reserve(10000);
for (int i = 0; i < 10000; ++i)
    v.push_back(0);
since using push_back() exclusively takes care of the size automatically (you don't ever actually need reserve(); it's just an optimization and never affects correctness).
Edit: I'm not sure whether the one-argument constructor mentioned above is guaranteed to zero-initialize the elements the way you want - if it is, you could just write
Code:
std::vector<BYTE> v(10000);
and be done with it. Anyone know?
Last edited by robatino; 11-01-2007 at 02:37 AM.
So make them. My point was that you shouldn't be dealing with const T* in your class.
> Ah well, but as the saying goes: What isn't broken, don't fix. It can remain unsigned and the critical section will take care of the locking, so it shouldn't be a problem.
True. I was just pointing out that the reduced maximum of LONG vs DWORD isn't realistically any obstacle.
> No. Good. Polymorphism is good. If I don't use virtuals, I'll have to tear the class apart and re-write already existing code when creating new ones. THAT is bad.
Wrong. You don't. You can use base classes for implementation sharing. shared_ptr and shared_array do it.
Polymorphism is for when you actually have a reference to the base class. But an implementation-sharing base class should never be referenced. It should be an implementation detail.
Polymorphism is for runtime decisions.
> BTW, there's no C++ safe copying method? memcpy and CopyMemory are obviously bad.
std::copy() is as safe as anything. std::uninitialized_copy() if the target area doesn't have objects in it yet.
> Tut-tut. m_PInfoTemp caches the lookup for pointer p so it doesn't have to look up twice or more if it's going to look up p again.
But the mere fact that p is set ought to be enough information.
Also, it again comes down to having a single place where lookup is done. And by that I don't mean just a single function, because you can still call the function multiple times. I mean that, in the flow chart of pointer construction/assignment, the lookup is only done once.
> It doesn't work like it should:
> Code:
> pp<CRect> foo()
> {
>     return NULL; // error C2664: 'CMemoryManager<T>::CMemoryManager(const CMemoryManager<T> &)' : cannot convert parameter 1 from 'int' to 'const CMemoryManager<T> &'
> }
> Remove explicit and it works. But it also works to return a bool >_< Which is why I used PPNULL in the first place.
The standard idiom for getting a null smart pointer is to default construct it, i.e.
Code:
return CMemoryManager<T>();
> Better safe than sorry.
explicit only has an effect on constructors that can be called with one argument. The keyword is completely ignored otherwise. So you're not in any way safe.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
> std::copy() is as safe as anything. std::uninitialized_copy() if the target area doesn't have objects in it yet.
I just wanted to copy something into the array and make sure there were no errors when deleting that array later (handled by std::vector, of course).
> So make them. My point was that you shouldn't be dealing with const T* in your class.
But I do not see why not. If the functions do not change the T*, why not const? If they try to assign it somewhere, obviously it should not be const. As far as I'm aware, const is mostly there to catch design-time bugs.
> Wrong. You don't. You can use base classes for implementation sharing. shared_ptr and shared_array do it.
> Polymorphism is for when you actually have a reference to the base class. But an implementation-sharing base class should never be referenced. It should be an implementation detail.
> Polymorphism is for runtime decisions.
Then explain exactly how I can implement a thread-safe class without re-writing the code. The thread-safety is done in protected functions in the base class. Merely overriding them in the derived class won't get me anywhere, not unless I make them virtual.
Without that, I don't see how to do it without re-writing the entire darn class, which is not something I want to do.
> But the mere fact that p is set ought to be enough information.
> Also, it again comes down to having a single place where lookup is done. And by that I don't mean just a single function, because you can still call the function multiple times. I mean that, in the flow chart of pointer construction/assignment, the lookup is only done once.
I don't feel that's a good idea. To make the flow chart dynamic is better, since it allows me to do what must be done if it hasn't been done already. A dynamic flowchart also decreases the chance of, for example, accessing a NULL pointer (because the lookup function wasn't called).
It's also what I've built the class upon - a dynamic flowchart. Determine what needs to be done and do it. So simple, yet so effective.
> The standard idiom for getting a null smart pointer is to default construct it, i.e.
> Code:
> return CMemoryManager<T>();
Sounds better to me to use PPNULL.
> explicit only has an effect on constructors that can be called with one argument. The keyword is completely ignored otherwise. So you're not in any way safe.
And not even a warning from the compiler. How selfish of it.
Because it's not the pointer's responsibility to ensure that the pointee doesn't change. Or in other words, you don't need const T inside your pointer class because you have the design error the moment you dereference the pointer, not the moment you change it. The only exception is the pointer's own operator* overload.
It comes down to the pointer behaving like a raw pointer where it matters. A raw pointer doesn't care about whether its pointee is const or not. That's for the user of the pointer to worry about. The pointer just points and carries the constness along unchanged.
Your pointer changes the constness, and that's bad.
> Then explain exactly how I can implement a thread-safe class without re-writing the code. The thread-safety is done in protected functions in the base class. Merely overriding them in the derived class won't get me anywhere, not unless I make them virtual.
> Without that, I don't see how to do it without re-writing the entire darn class, which is not something I want to do.
Gimme an hour (but it may be interrupted by lunch), and I'll show you.
> I don't feel that's a good idea. To make the flow chart dynamic is better, since it allows me to do what must be done if it hasn't been done already.
A static flowchart approach allows you to construct reliable flowcharts where these checks aren't necessary except as debug-mode asserts. Thus, your pointer is reliable without the runtime overhead of checking whether it's done its thing already.
Your approach is basically: write potentially buggy code, but instead of finding the bugs and solving them, you look for the effects of the bugs and solve those. You're curing the symptoms, not the illness.
> A dynamic flowchart also decreases the chance of, for example, accessing a NULL pointer (because the lookup function wasn't called).
A static flowchart allows you to analyze at design time whether a NULL pointer can be accessed, without any runtime overhead. Your dynamic approach makes this considerably harder.
> It's also what I've built the class upon - a dynamic flowchart. Determine what needs to be done and do it. So simple, yet so effective.
No, it's neither simple nor effective. It's complex in implementation, and leads to code repetition and unnecessary performance overhead, while not increasing reliability.
> Sounds better to me to use PPNULL.
*shrug* Matter of habit, I suppose. Note that the idiom is consistent, though:
Code:
typedef int* pint;
pint p = pint();
assert(p == 0);

> And not even a warning from the compiler. How selfish of it.
Well, why would it warn you? Compilers only generate warnings for things that are potentially problematic.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
Sure. Looking forward to that.
> A static flowchart approach allows you to construct reliable flowcharts where these checks aren't necessary except as debug-mode asserts. Thus, your pointer is reliable without the runtime overhead of checking whether it's done its thing already.
> Your approach is basically: write potentially buggy code, but instead of finding the bugs and solving them, you look for the effects of the bugs and solve those. You're curing the symptoms, not the illness.
> A static flowchart allows you to analyze at design time whether a NULL pointer can be accessed, without any runtime overhead. Your dynamic approach makes this considerably harder.
> No, it's neither simple nor effective. It's complex in implementation, and leads to code repetition and unnecessary performance overhead, while not increasing reliability.
I cannot agree. From what I see, it makes the code more robust and safe at the expense of a little more overhead.
Here's a good one:
Code:
pp<int>* pMemoryManagers = new tcpp<int>[nRange];
Know any way to restrict this polymorphism? tcpp inherits from pp, so the compiler allows it, but I don't like the idea of doing that.
There also seems to be a problem with
Code:
pMemoryManagers[i] = pPointers[i];
pMemoryManagers is tcpp and pPointers is int*. However, as tcpp derives from CMemoryManager (pp), it should have such an operator, because pp does:
Code:
void operator = (T* pmm) throw(...)
Which is also public. Or maybe it's true that operators aren't inherited.
Here's another thing: when running out of memory, it's not possible to create and throw an exception. I could create a single exception object as a class member, but this makes the class bigger, of course.
A global object might require locking, but this would fail again when running out of memory. So what's one to do, really? Throw a constant?
Last edited by Elysia; 11-01-2007 at 08:39 AM.
> I just wanted to copy something into the array and make sure there were no errors when deleting that array
> later (by std::vector of course).
With vectors, you have to make sure that the size (not just the capacity) is at least N+1 before you access v[N] (the (N+1)-th element). Unfortunately, since bounds checking isn't done by default, using reserve() to set the capacity only tends to prevent crashing, leading one to believe that one's program is correct (I made this mistake when first learning to use vectors). Remember, reserve() NEVER affects correctness. If a program is broken without it, then it's broken with it. If you're using GCC, try compiling with -D_GLIBCXX_DEBUG to do extra run-time checks including bounds checking, and watch the "fixed" problem reappear.
http://gcc.gnu.org/onlinedocs/libstdc++/debug.html
Using VC++ (2005).
I'm not familiar with that compiler, but you can use at() instead of [] to cause run-time bounds checking (for example, v.at(N) instead of v[N]). So this code will probably not complain, without the extra checks:
Code:
#include <vector>

int main()
{
    std::vector<int> v; // creates v with size 0
    v.reserve(1);
    v[0] = 0; // this element doesn't exist, since the size is still 0
}
but if you replace v[0] with v.at(0), it will definitely complain.
Edit: Forgot std::. BTW, definitely try to find out if your compiler has something analogous to -D_GLIBCXX_DEBUG, if you don't want to use at() all the time.
Last edited by robatino; 11-01-2007 at 09:36 AM.
Complex code is never more robust than simple code in the long run.
> Know any way to restrict this polymorphism?
> Code:
> pp<int>* pMemoryManagers = new tcpp<int>[nRange];
Uh, oh, dangerous. You're right in that you want to restrict it, but unfortunately, it's not fully possible. It's simply a lesson C++ programmers have to learn: array new and derivation don't mix. So don't do it, and educate any programmer you find doing it. Item 3 in More Effective C++, Item 89 in C++ Gotchas.
You can partially protect yourself by declaring a class-specific array new and not implementing it. The code will fail at link time.
> Here's another thing: when running out of memory, it's not possible to create and throw an exception.
Depends on the kind of memory you run out of. If you run out of stack memory, your program is killed, and that's it. If you run out of heap memory, new will throw a bad_alloc. So obviously it is possible to throw, at least to a point.
Basically, if you run out of memory, you've got bigger fish to fry than what kind of exception to throw. Let new throw its bad_alloc.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
OK, here's my smart pointer suite. It is syntax-checked, but I haven't tested it in any way, and it may still fail to compile when actually instantiated. Also, no implementation of atomic_add is provided, as this is highly system-specific. (InterlockedExchangeAndAdd() should be a good base for implementing it.)
This provides four smart pointer classes. They are the four combinations of two basic decisions:
1) Manage an array or a single object?
2) Be thread-safe or not?
There are no hooks to implement your map lookup. I remain firmly opposed to it and thus didn't accommodate it in my design.
However, there are hooks to implement weak pointers. Weak pointers maintain a link to the object, but do not increase the reference count. If the last strong pointer goes away, the object is deleted, even if weak pointers to it remain. The weak pointers know of the object's destruction.
In Boost's implementation, you cannot dereference weak pointers, but you can create strong pointers from them. Trying to create a strong pointer from an invalidated weak pointer throws an exception. The reason for this design is that it is unsafe to access the pointee if you don't have control over a strong pointer to it. If the weak pointer allowed direct access, the object could be deleted from under your feet.
The most complicated thing about it is, in my opinion, the CRTP. However, it's a pattern, which means that once you understand it, it provides no complexity overhead. In addition, it provides zero space or runtime overhead in the program itself compared to directly implementing the functionality. Everything is fully inlineable.
Edit: deleted attachment, has a bug. Added new version a few posts down.
Last edited by CornedBee; 11-01-2007 at 10:41 AM.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
It's a shame, because I wanted to throw my own specific exception class, but when running out of memory it just doesn't seem to work.
Newest header file attached.
Changes:
- Always thread safe.
- One time-critical and one non-time-critical class. Uses virtual functions to determine whether to call the time-critical functions or not.
- Not handling memory errors, because if you run out of memory, the whole program will likely crash anyway.
- Fixed some overloading.
- Removed array support.
- Removed CMemoryManagerDefault.
- Reinstated PPNULL. It's no longer possible to check against NULL or to construct a new NULL (all checks should be done against PPNULL).
Last edited by Elysia; 11-01-2007 at 10:27 AM.
It probably does, but you're probably following MFC convention and allocating the exception from the heap, right?
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law