Depends on how you define remove. It could be "remove given a pointer to the element you want removed".
This seems to be a quality-of-implementation issue, but I'm not entirely sure I understand what you are asking.
Quote:
Are those complexities not dependent on another variable, such as the length of the input? E.g. if we insert strings into it, is the complexity not dependent on the length?
No. A hash table implementation with merely amortized constant-time operations could easily outperform an AVL tree if the use case implies rare mutation after an initial setup phase. It could also be the case that the AVL tree would be faster in the very same scenario if comparing the objects is cheap while hashing them is expensive or impossible to plan for (user-generated strings of widely varying length and content).
Quote:
But if that is the case, wouldn't it make his container always slower than the AVL tree?
*shrug*
It could also be the case that a B*tree would be the fastest option. It really depends on what you are doing and not on what academic "Big O" operations a data structure may have.
He said both operations are related to the expectations for hash tables.
Quote:
If find is O(n), I don't see how remove can be O(1)
Soma
That doesn't really change the nature of the statement.
Quote:
It could be "remove given a pointer to the element you want removed".
Removing an element of a singly linked list even when given an iterator is still an "O(n)" operation.
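To illustrate the point, here is a minimal sketch (names and layout are my own, not from any poster's code): even holding a pointer to the node you want gone, a singly linked list has no back-link, so you must scan from the head for the predecessor.

```cpp
#include <cassert>

// Minimal singly linked list node (illustrative only, not the STL).
struct Node {
    int value;
    Node* next;
};

// Even given a pointer to the target node, removal must walk from the
// head to find the predecessor, making the operation O(n) overall.
// Returns the (possibly new) head; the caller owns deallocation.
Node* remove_node(Node* head, Node* target) {
    if (head == target)
        return head->next;
    Node* prev = head;
    while (prev->next != target)   // O(n) scan for the predecessor
        prev = prev->next;
    prev->next = target->next;     // O(1) once the predecessor is known
    return head;
}
```

With a doubly linked list the predecessor is one pointer away, which is why its iterator-based erase really is O(1).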
The relationship is going to depend on the data structure and the underlying implementation.
Soma
I've learned from various places that hash tables have O(1) find time, but apparently that is a common misconception. In the worst case hash tables are O(n), which means my container is O(n) as well.
Cyberfish is right: once an element is found in my container, removing it is indeed O(1). That's one of the weird properties of my container, but it's true. If you consider the removal operation to include finding the element first, then it's O(n), but there are other ways to obtain an iterator than by searching for it.
Hash tables are only O(n) when a lookup has to resolve collisions at some bucket. With perfect hashing, O(1) lookup is guaranteed, but in general you should choose a hashing algorithm that gets close to O(1) results anyway, since most published algorithms pass the avalanche test.
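The average-versus-worst-case distinction being argued here can be sketched with a toy chained hash table (my own illustrative code, with a `degenerate` flag standing in for a pathologically bad hash function):

```cpp
#include <cstddef>
#include <list>
#include <vector>

// Sketch of a chained hash table. With a reasonable hash, buckets stay
// short and find() is O(1) on average; a degenerate hash that maps every
// key to bucket 0 turns find() into an O(n) linear scan.
struct ChainedTable {
    std::vector<std::list<int>> buckets;
    bool degenerate;   // simulate the worst-case hash for demonstration

    explicit ChainedTable(std::size_t n, bool bad = false)
        : buckets(n), degenerate(bad) {}

    std::size_t index(int key) const {
        return degenerate ? 0 : static_cast<std::size_t>(key) % buckets.size();
    }

    void insert(int key) { buckets[index(key)].push_back(key); }

    bool find(int key) const {   // cost = length of the one bucket probed
        for (int k : buckets[index(key)])
            if (k == key)
                return true;
        return false;
    }
};
```

The container's interface is identical in both modes; only the bucket length, and hence the cost of `find`, changes.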
The weird thing is that you haven't really described the container at all; you only state that it has so many properties. That's great.
Keep in mind your algorithm and container need serious peer review to validate the claims you are making about it. This would be a 'kind' place to get such a review. The members who have answered you thus far in this thread are very intelligent programmers and computer scientists. You would do well to listen to them and allow them to review your algorithm.
My thinking is you won't allow any of them to review it because you know deep down they might find fault with it. I wouldn't worry about that, because it would be better to find fault with it, or perhaps validate it, now rather than later. If you are so sure that it is a great container then you should not be concerned about allowing people to see it. If you are concerned about people stealing it then put a copyright on the code and/or use an open-source license, and then allow people to review it. Honestly, no one here is going to steal your code or your idea. I'm a firm believer that there are not that many 'original' algorithms or containers left to be built, so I'm anxious to see you disprove that.
That's expected.
That being the case, that's the cost that matters in a worst-case scenario, which is the whole point of Big O notation. Otherwise, if you feel like pointing out a best-case scenario, you need to separate the deletion operation into those two realities.
Since we now know search is O(n), I dare not ask.
Exactly. How about that guy who went public claiming to have found a time traveller in some old film (Charlie Chaplin or so; heck, I'm too young for this) because a person appeared to be speaking on a mobile phone? It was pointed out quite quickly on Slashdot that the device was in fact (likely) a hearing aid of that era, not a mobile phone. Boy, must that guy have felt stupid.
Anyways, phantomotap, yes, I understand the container might still be useful if we're talking strictly about the worst case. We need the average case to be able to tell anything about this container, but it sounds like he simply reinvented hash tables. In that case, his original comparison with the AVL tree is completely useless. It's like: "Look, I'm way faster than this specific container in the worst case", only to move on to "Okay, that container is better in the worst case". I am not going to compare it with other containers, or in other cases (average, for instance), just because his original comparison was wrong.
About the "Are those complexities not dependent on another variable, such as the length of the input. Eg. if we insert strings into it, is the complexity not dependent on the length?" you didn't understand: I was thinking tries, where the complexity is independent of the number of elements in the container. I thought he may have used a similar idea, but missed out on the length, even though it was important, in the complexities.
So, Verdagon, you've made a few wrong claims already. If you still want to be taken seriously about this, you'll have to show some proof. At least make a new comparison; show us the average case of your container vs. hash tables.
Mario F., O(1) removal given a known iterator is not the case for every container: trees might have to be rebalanced, all the elements after the removed one in an array may have to be shifted left one slot, etc.
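The contrast can be seen directly with the standard containers; the helpers below are hypothetical names of my own, wrapping the standard erase operations:

```cpp
#include <cstddef>
#include <list>
#include <vector>

// Even with the position already known, removal cost depends on the
// container: std::vector::erase shifts every later element left (O(n)),
// while std::list::erase just relinks the neighbours (O(1)). Balanced
// trees may additionally rebalance, costing O(log n).

// Erase by index from a vector; the tail is shifted left one slot.
std::vector<int> erase_at(std::vector<int> v, std::size_t i) {
    v.erase(v.begin() + static_cast<std::ptrdiff_t>(i));
    return v;
}

// Erase the first matching value from a list; once the iterator is
// found, the unlink itself is O(1).
std::list<int> erase_value(std::list<int> l, int value) {
    for (auto it = l.begin(); it != l.end(); ++it) {
        if (*it == value) {
            l.erase(it);
            break;
        }
    }
    return l;
}
```

Both calls "remove one element", but the amount of work hidden behind `erase` differs by container, which is exactly the point being made above.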
That's me, but you never know when someone might manage something great even if it is just a variation on a theme.
Quote:
I'm a firm believer that there are not that many 'original' algorithms or containers left to be built so I'm anxious to see you disprove that.
I see. When he said that about his container being related to hash tables, I dismissed everything other than a hash table with a unique schedule using a B+Tree for the bins.
Quote:
I was thinking of tries, where the complexity is independent of the number of elements in the container.
Soma