I'm not really trying to punch a hole through the whole theory or the other arguments, just laying out my view of the whole.
Originally Posted by CornedBee
And regardless, a bunch of conditions (even if just one) and pointer assignment is extra work, however small.
Of course there is. I'm not talking about a massive slowdown or anything, but there must be some CPU time dedicated to managing the generations, i.e. deciding which generation an object belongs to, deciding which generation to sweep, etc.
No, and no. Allocation is not slowed down by generations at all. Neither is deallocation, nor sweeping. Compaction is the slowest part, but compaction isn't there because of generations; it's there because it is what enables the fast allocation, and because it prevents memory fragmentation. Also, in the two typical cases (as determined by the designers; I cannot vouch for their data collection methods), namely that nearly no objects survive or that nearly no objects are deleted, compaction is fast, too.
And add compaction to that list of expensive CPU cycles, too. Yes, it can be good. I do understand the principle and the fragmentation problem, but sometimes it can be unnecessary, too. That is a trade-off, I suppose, so I would like to see some options to control it.
That's all well and good, but you admitted it: it's slower, even if not by much.
Yes. It takes linear time. Interestingly enough, though, it really doesn't take all that much time compared to managing heap data structures. N deallocations with M active memory blocks in a free-list tree take O(N log M) time. N deallocations in a GC system take O(M) time, but with M bounded by the size of Gen0, not by the total number of allocations.
It sounds to me, after all this, that it might just be slower. Even if only a little, still slower.
I still suggest that a modern GC is faster than naive manual memory management in many, if not most, cases. By naive I mean that nothing beyond the native general purpose allocator is used.
Of course, this is speculation, but just as you suggest it is faster, I suggest it is slower.
Actually, though, I think it might be faster or it might be slower, depending on the area of use.
Doesn't make me a fan, though :/
And that effort should rightfully be reduced. It is our responsibility to make the most of our applications with C++, since it is a highly flexible and fast language, so I would dearly love to see tools for that appear.
You can always optimize manual memory management, something that is difficult (but not impossible) to do with a GC. You can use memory pools for mass deallocation, simple segregated storage for things like node-based containers, and all the other tricks to make memory use more efficient and less fragmented. But all this comes at the cost of significant effort.
Thinking of .NET makes me think of VBA, which completely disgusts me with its speed. It's slow as molasses!