Hello All,
I think tree structures are the best structures. Granted, they aren't always but when a tree is applicable, it's better than others.
Does anyone have examples proving otherwise or a favorite structure that they'd like to talk about?
It really depends on what you're doing. I find circular doubly-linked lists much more convenient (and faster) if I'm not doing any searches.
Devoted my life to programming...
Counter-example: if you are implementing a mapping from a key to a value, a (balanced binary) tree is applicable. However, depending on the characteristics of the expected data and the desired performance of certain operations, a hash table implementation may be better.
EDIT:
Actually, your caveat effectively amends your own declaration from "tree structures are the best structures" to "tree structures are the best structures when they are applicable". You might as well amend that further to "tree structures are the best structures when they are the best for the job", but then that's not a particularly interesting reason for liking them, since it applies to pretty much every other data structure.
Last edited by laserlight; 09-05-2013 at 12:40 PM.
Look up a C++ Reference and learn How To Ask Questions The Smart Way
Originally Posted by Bjarne Stroustrup (2000-10-14)
For example, a simple array is a mapping from an integer key in the range [0, n] to a value, and as such the mapping could be replaced by a tree. But why bother?
All problems in computer science can be solved by another level of indirection,
except for the problem of too many layers of indirection.
– David J. Wheeler
Fine, let's call my claim ignorant and biased.
But trees have the advantage of being collapsible into a 1D or 2D array. I know most CUDA implementations of a tree use an array representation rather than the typical top-down view I picture when I draw trees.
Um, so what are you trying to say now?
You know that arrays can basically store any data structure? If the data conforms to the heap property, it's a tree. You might use an array as a linked list if you can traverse it using data found in the contents. Arrays can contain flattened matrices, the values of a hash table, graphs, or, you know, it could just be an array.
And RAM is just an array of bytes, after all.
The cost of software maintenance increases with the square of the programmer's creativity. - Robert D. Bliss
XD
You guys are tough to troll
When I wrote this I was mostly thinking of trees as solutions to every problem, as if no matter what, a tree is always the answer. Like, I think trie structures are a pretty neat idea, and Delaunay trees are cool too. Octrees were also really popular in astrophysics simulations. Collision resolution likes them too, though I think using triangles and barycentric coordinates is the superior way of solving it now, so no more quadtree spatial refinement for that. And I think triangles fill space more finely.
Nevertheless, I think trees are cool and in combination with vectors, I don't really see the point of anything else, tbh.
John, I'd say my Octree thread boosted your excitement quite a bit.
However, there is no universally optimal data structure. Any data structure can be useful if it is chosen to fit the application. There is always a trade-off.
Code - functions and small libraries I use
It’s 2014 and I still use printf() for debugging.
"Programs must be written for people to read, and only incidentally for machines to execute. " —Harold Abelson
I love thinking about different ways to solve a particular problem. Data structures are just one tool, or component, or part, or way to approach a problem.
I believe the ability to overcome one's natural instincts, and to seriously consider how to approach a problem using different tools rather than your current favourite, is something every good developer should strive for. Not for the sake of the tools, but to keep your mind working at problems from different angles.
That said, having interest in a specific tool -- be it code or method or algorithm or data structure or whatever -- makes it optimal, results-wise, to play with and practice that tool. The related emotions seem to enhance memory retention and deep understanding, and boost your self-esteem, too. The interest will wane over time, because we are only human. So, it's very important and efficient to play/train with things you're interested in and have fun with, rather than force yourself to work with tools you don't like.
MutantJohn, perhaps you should have asked something along the lines of
I really like how tree structures can be used to solve problems (perhaps noting a past thread or two as examples).
Are there any other data structures that are similarly well suited to specific types of problems? Any favourites? Any that open up classes of solutions/problems like trees did for me?
If you had, then someone might have mentioned hashing (or hash tables in particular) and disjoint sets.
See Disjoint set data structure at Wikipedia for details.
Consider a problem where you have an image, and you wish to know how many areas there are in the image. You only need one function, say similar(pixel1, pixel2), which returns logical true if and only if the two neighboring pixels are in the same area.
Initially, you create a disjoint-set element for each pixel in the image, with each pixel in its own set. You make one pass over the entire image, comparing each neighboring pixel pair only once, and whenever similar() says they are in the same area, you join their two sets.
The data structure then tells you exactly which area each pixel belongs to. Counting the unique areas is just one more pass over the disjoint set; it tells you exactly how many separate areas there are in the image. In the image case, you can also calculate e.g. the centroid of each area at the same time, which is very useful.
The typical implementation of a disjoint set is basically a single pointer (or index) per element, initialized to point to itself. Joining two elements is done by first traversing the chains from both elements (until each points to itself, i.e. reaches its root), then setting one root to point to the other. (Usually a path compression step is done to optimize the amortized cost, as the Wikipedia article describes.)
Another common use is in games, where you have a number of "rooms", and you are interested in whether you can get from "room" A to "room" B. You construct and maintain the disjoint set based on neighboring rooms, updating the structure if doors are opened. You only need to probe the disjoint set entries for both rooms to find out if you can get from one to the other.
You can also augment the disjoint set elements with an approximate distance, and join and path-compress in a way that minimizes the distances. This allows you to do stuff like proper sound propagation and attenuation from a static source (the data structure needs to be regenerated if any of the sources move).
(In games, having such a structure for each room, with the player at the source, regenerating whenever player enters a new room, allows very efficient implementation of realistic (player scent, player noises) monster triggers.)
That gives me a nice idea about implementing Kruskal's MST algorithm.
I can store each subtree as a disjoint set, adding a new edge to a set only if it remains a tree (easy to check: E == V-1) or if the edge has its ends in two different sets (thus... 'union' from the wiki article).
This seems simpler than checking for a cycle within each iteration.
Is this a known/commonly used method of implementation?
If not, is there a reason this isn't feasible?