Is the heap memory that they talk about in programming discussions the same as this:
http://en.wikipedia.org/wiki/Heap_(data_structure)
And is the stack used by _alloca the same as this:
http://en.wikipedia.org/wiki/Stack_(data_structure)
No, that is a heap data structure, and it has nothing (directly) to do with heap memory.
Heap memory is just a chunk (or even multiple chunks) of memory that is given to the application - malloc or new then carves this up into smaller sections that are handed back to the application for its use. You can make a "simple heap" by just declaring a large array and letting your own version of alloc return some part of the array; your own free would then allow the passed-in part of the array to be reused.
And the stack used by _alloca is not the same as the stack data structure either.
--
Mats
Do you know what "Θ(lg n) or Θ(1)" means?
I don't understand what they wrote on that page with those symbols...
Before presenting the table that uses the symbols you do not understand, that article links to another article on computational complexity. Read that other article.
EDIT:
Okay, but you will have to follow yet another link to an article on "Big O" notation.
In the "Comparison of theoretic bounds for variants" table in Heap (data structure):
Θ(lg n) or Θ(1)?
Θ == speed/memory/something?
lg == log?
n == ?
Big-O notation indicates "the number of operations". n is "the number of items", and lg n means log2(n). O(1) means "a constant number of operations" - for example, adding an element to the end of a linked list is O(1) if you keep a pointer to the end you are adding at. If you have to walk the linked list to find the end, it becomes O(n). A binary search in a sorted array is O(log2(n)).
Simple forms of sorting (which consist of two nested for-loops) are O(n^2).
Note however that this is just an "overall efficiency" number - so you can't directly compare two algorithms with it. For example, if we store hash values of strings in an array (sorted in hash order) and compare that to a binary search in a straightforward array of strings, the O number will be the same. But comparing (long) strings is much slower than comparing a single integer.
So O numbers are really only "an indication of what happens when n grows" - if you double the number of items, does it take twice as long, a tiny bit longer, or MUCH longer?
--
Mats
And yes, lg == log2. It looks like you've got Θ, which is Big-Theta notation. Big O is just an upper bound; Big Theta is both an upper bound and a lower bound (meaning it's right on).
As I understand it, if we can say "it always takes this long", then we have a procedure that runs in constant time. You are correct that a stricter notation establishes guarantees, and the wikipedia article discusses this point as well.
There is also no guarantee about "it could take this long". Such a guarantee would be an upper bound - perhaps "we've never seen anything perform worse before". That's probably not right either, but Big O really offers nary a guarantee except how the work will scale with n.