# List copy semantics efficiency

• 05-21-2008
dudeomanodude
List copy semantics efficiency
This never dawned on me before:

Typically ( as far as I know ) people handle linked-list copying by iterating the list, and pushing back the current dereferenced iterator ( at least that's how I've always done it ).

1) But doesn't that become O(logN)?

2) Wouldn't iterating the list once and pushing the front, then reversing the order be more efficient ( because isn't that O( 2 * N )? )

3) Any other ways you can handle this more efficiently?

theDude
• 05-21-2008
dudeomanodude
I'll also throw out there:

Starting at the end(), iterating to the begin(), pushing_front as we go along ( granted we have a doubly linked list ).

That should be O(N) which is the best we can expect from a linked-list, right?
• 05-21-2008
matsp
It would always be no better than O(n) - you can of course make it less efficient for example by reversing the list or some such.

--
Mats
• 05-21-2008
dudeomanodude
Quote:

Originally Posted by matsp
It would always be no better than O(n) - you can of course make it less efficient for example by reversing the list or some such.

--
Mats

But what if you have a singly linked list? I can't figure out how to get better than O( 2N ) on a singly linked list...

...I'm guessing that's why ( among other reasons ), the STL provides a doubly-linked list.
• 05-21-2008
laserlight
Quote:

But what if you have a singly linked list? I can't figure out how to get better than O( 2N ) on a singly linked list...
Maintain two iterators: one to the previously copied element, another to the current source element. Once you have instantiated the copy of the current element, create the link from the previous element to it, then advance the previous iterator.
• 05-21-2008
medievalelks
Quote:

Originally Posted by dudeomanodude
1) But doesn't that become O(logN)?

2) Wouldn't iterating the list once and pushing the front, then reversing the order be more efficient ( because isn't that O( 2 * N )? )

O(log N) scales better than O(N)...
• 05-21-2008
dudeomanodude
Quote:

O(log N) scales better than O(N)...
Oh yeah, what I meant to say was O( N^2 ) - much worse
• 05-21-2008
brewbuck
Quote:

1) But doesn't that become O(logN)?
No -- it's O(N^2)

Quote:

2) Wouldn't iterating the list once and pushing the front, then reversing the order be more efficient ( because isn't that O( 2 * N )? )
It's just O(N). O(2*N) is the same as O(N).

Anyway, you're assuming that pushing at the back of the list is inefficient. This only requires a list traversal if you do it naively. If instead, you maintain a pointer/iterator to the end of the list, you can insert each element in O(1) time, avoiding O(N^2) and also getting rid of the 2x constant.

In theory, a specialization of std::back_insert_iterator<std::list<T> > could be made to do this. Such a specialization may already exist.
• 05-21-2008
CornedBee
What for? std::list is a doubly linked list and maintains a reference to the back already. Its push_back() is O(1).

std::forward_list (coming in C++09) does not provide push_back, though. That's because of the design principle that there's zero overhead compared to a hand-written singly linked list, and a pointer to the last element would incur such overhead.
• 05-22-2008
iMalc
Quote:

Originally Posted by CornedBee
What for? std::list is a doubly linked list and maintains a reference to the back already. Its push_back() is O(1).

std::forward_list (coming in C++09) does not provide push_back, though. That's because of the design principle that there's zero overhead compared to a hand-written singly linked list, and a pointer to the last element would incur such overhead.

I'd say that it's more likely that it doesn't hold a pointer to the last element because it takes O(n) time to update it when you delete the last item.
• 05-22-2008
CornedBee
It takes O(n) time to delete the last item in the first place, and in the course of that you can adjust the pointer at no additional complexity cost. So no, that's not it.
• 05-22-2008
manav
I do not have time and the code is missing now, after a reinstallation of XP, but appending to a singly linked list can be achieved in O(1). Instead of iterating over the list from start to end to find where to append the new node, just keep the last added node saved.
• 05-22-2008
matsp
Quote:

Originally Posted by manav
I do not have time and the code is missing now, after reinstallation of XP, but adding to a singly linked list can be achieved in O(1). Instead of iterating over the list from start to end, to find where to append new node, just keep the last added node saved.

Yes, the debate is about deleting the last node, which requires traipsing through the list to find the previous to last node.

--
Mats
• 05-23-2008
iMalc
Quote:

Originally Posted by CornedBee
It takes O(n) time to delete the last item in the first place, and in the course of that you can adjust the pointer at no additional complexity cost. So no, that's not it.

Sure, if you pretend there are no such things as iterators. :eek:

If you have an iterator pointing to the last element then you could not delete the item it points to AND correctly update the tail pointer in anything less than O(n) time. That's on top of the fact that iterating through the list is also O(n). Therefore if you were to iterate over the list backwards, deleting as you go, it would be O(n*n).
I'm pretty sure this would be the main reason why there is no tail pointer.
• 05-23-2008
Magos
If you complain about bad efficiency when trying to remove the last element of a singly linked list, then that's your problem :)
I'd use a doubly linked list, and live happily ever after!