Calculations with pointers where the type is irrelevant.
"The Internet treats censorship as damage and routes around it." - John Gilmore
Picky, picky. It's just another way to manage memory, I'm afraid.
Ideally the garbage collector sweeps when something no longer has a reference, so any performance penalty occurs when things fall out of scope. Such objects are short-lived, and frankly similar work happens when you have to do it yourself: delete has to manage the memory pool, and that's about as much of a time waster as any garbage collector.
Whatever, though. If you're comfortable doing it all yourself, go right ahead. But it only needs to be fast enough. Meet your specs. If what you originally wrote isn't good enough, then your world does not fall apart: I bet it ain't the memory management that's slowing you down.
Perhaps. But it's also about one big run with lots of CPU time cleaning up, vs. small bits and pieces here and there.
Plus, garbage collection is usually slower when destructors (finalizers) are involved.
And then there's the issue of memory. Garbage collectors kick in when they think enough memory has been used, while the typical manual approach frees memory directly when it's no longer needed.
And it's not just about garbage collection, but the runtime checks and whatnot that further slow things down.
These are the biggest issues I have with .NET.
> Perhaps. But it's also about one big run with lots of CPU time cleaning up, vs. small bits and pieces here and there.
Would it not be better to free things when your program is in idle and not doing anything, rather than during some intensive task? I'd think so.
> Garbage collectors kick in when they think enough memory has been used, while the typical manual approach frees memory directly when it's no longer needed.
In the olden days, perhaps... VMs and company have come a long way in recent times; perhaps you need to brush up on your knowledge if you're going to knock them all the time.
And gobble up huge amounts of CPU? No thanks.
And what if it does not idle, and is memory intensive?
I prefer not to. I know garbage collectors are not my thing, but I don't mean to steer anyone else away from them.
Malloc/free (aka new/delete) don't actively manage memory. You use malloc and it finds you an open slot; you use free and it marks that slot as available. But the garbage collector actively searches through the entire memory at all times, trying to find non-referenced chunks of memory. That's evil.
Freeing/allocating memory is not an intensive task - memory pages are just marked either committed or not. The massive initialization/deinitialization different frameworks like to do is what makes it intensive.
Last edited by maxorator; 12-01-2008 at 04:15 AM.
"The Internet treats censorship as damage and routes around it." - John Gilmore
> But the garbage collector actively searches through the entire memory at all times
Nonsense. Exact, generational garbage collectors do nothing of the kind. Not even conservative mark & sweep collectors like those used in C++ (e.g. Boehm) search all the time.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
Well, I guess it depends on the implementation. But if you don't "free" memory, then it would have to search through all allocations, looking for ones that aren't used anymore, and free them. So it would actively be searching, although not all of the time: only when it kicks in and begins to collect.
Still, I am not fond of such a thing.
I'm not talking about garbage collector libraries, I'm talking about the builtin garbage collectors in some languages. Imagine that someone allocates huge chunks of memory and references them with each other, making many complex structures inside that memory. Then suddenly, the code removes references to most of that memory: it makes no calls to memory allocation/deallocation functions, just sets a few things to zero. The garbage collector finds that and deallocates the non-referenced memory. How else could it find those without an active search?
"The Internet treats censorship as damage and routes around it." - John Gilmore
And why do you expect that to be any different from the best available garbage collectors? Builtin garbage collection should be AT LEAST as efficient as library-based garbage collection. It is obviously not as efficient as directly telling the C library which bits of memory are needed and which are not, and depending on the circumstances it can be pretty darn inefficient, but it's not a "scan everything all the time": there are more clever methods to solve the problem (involving reference counts, trees and other clever stuff).
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
In fact, builtin garbage collectors are better than library collectors, because they have more information they can use.
Let's look at a generational, exact, compacting garbage collector like the one used in .NET. Here's how it works:
1) When the allocator is low on memory, it will start a Gen0 sweep. Generation 0 is a pretty small (a few hundred k perhaps) memory area where all new objects are placed. Because the GC is exact, it knows exactly which memory locations might contain references and looks only at those. Because most objects are very short-lived, chances are that most of generation 0 will be freed at this point. This sweep is very fast.
2) The objects not collected during a Gen0 sweep are compacted and promoted to generation 1.
3) If generation 1 exceeds a certain size, a Gen1 sweep is initiated. This works exactly like a Gen0 sweep, but generation 1 will be larger. What survives is promoted to generation 2.
4) Generation 2 is the highest generation. It is swept very rarely. Most things that survive to be promoted to Gen2 aren't freed until program shutdown anyway, or at least until completion of a lengthy operation.
5) Very large objects would mess up this scheme by filling a generation too easily. They are handled completely separately, with a free-list scheme similar to classical allocators.
The key points here are:
1) Most of the time, only Gen0 is swept. This is a very small area. So it doesn't search the entire memory.
2) The collector is exact. It knows where the pointers are. It doesn't need to search memory for pointers. It can follow the chain of pointers and mark each object it encounters as live.
3) Due to compaction, allocation is extremely fast. It's nearly as fast as stack allocation. Stack allocation is one pointer addition. Unless a sweep is initiated, managed allocation is one pointer comparison (is gen0 full?) and one pointer addition.
And there are some (or many) GCs for C and C++. The people who made them explain how their GC can be more beneficial than manually allocating/deallocating memory. They also explain how their GC can slow things down on some occasions. So it depends on what you are doing.
GCs are getting better and better, so they have potential; that is all I'm saying.