From the linked-list tag wiki excerpt:
A linked list is a data structure in which the elements contain references to the next (and optionally the previous) element. Linked lists offer O(1) insert and removal at any position, O(1) list concatenation, and O(1) access at the front (and optionally back) positions as well as O(1) next element access. Random access has O(N) complexity and is usually unimplemented.
(emphasis mine)
I was surprised to read this – how can the list insert at a random index with a lower complexity than simply reading that index?
So I looked at the source code for java.util.LinkedList. The add(int, E) method is:
public void add(int index, E element) {
    addBefore(element, (index==size ? header : entry(index)));
}
The addBefore(E, Entry<E>) method is simply pointer reassignment, but there's also the entry(int) method:
private Entry<E> entry(int index) {
    if (index < 0 || index >= size)
        throw new IndexOutOfBoundsException("Index: "+index+
                                            ", Size: "+size);
    Entry<E> e = header;
    if (index < (size >> 1)) {
        for (int i = 0; i <= index; i++)
            e = e.next;
    } else {
        for (int i = size; i > index; i--)
            e = e.previous;
    }
    return e;
}
Even with the half-size optimization (traversing from whichever end is closer), the for loop in here (one branch or the other) seems to me a dead giveaway that this method, and therefore add(int, E), runs in O(n) worst-case time, and certainly not constant time.
What am I missing? Am I misunderstanding the big-O notation?
This is because the article you are reading counts "getting to that index" as a separate operation. It assumes you already hold a reference to the node at the position where you want to perform the insertion; given that reference, the insertion itself is just pointer reassignment and takes O(1) time.
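You can see this split in java.util.LinkedList's own API: a ListIterator pays the O(n) traversal cost once while walking to a position, but each add() call at that position is O(1). A small sketch (the list contents and the inserted value are just illustrative):

```java
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class IteratorInsert {
    public static void main(String[] args) {
        LinkedList<String> list = new LinkedList<>(List.of("a", "b", "d"));
        ListIterator<String> it = list.listIterator();
        // Walking the iterator to the position is the O(n) part...
        while (it.hasNext()) {
            if (it.next().equals("d")) {
                it.previous();   // step back so add() lands before "d"
                it.add("c");     // ...but the insertion itself is O(1)
                break;
            }
        }
        System.out.println(list); // [a, b, c, d]
    }
}
```

Calling add(int, E) directly bundles both costs together, which is why it looks O(n) overall even though the "insert" half of the work is constant.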
To conclude:
Insert or remove, given a reference to the node = O(1)
Finding the node at the nth index = O(n)
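To make the O(1) half concrete, here is a minimal sketch (not the JDK source) of a doubly linked list with a circular sentinel header, in the style of the old java.util.LinkedList. Inserting before a node you already hold is four pointer writes, with no traversal:

```java
public class Sketch {
    static class Node<E> {
        E value;
        Node<E> prev, next;
        Node(E value) { this.value = value; }
    }

    // Splice newNode in before succ: four pointer writes, no loop.
    static <E> void insertBefore(Node<E> newNode, Node<E> succ) {
        newNode.prev = succ.prev;
        newNode.next = succ;
        succ.prev.next = newNode;
        succ.prev = newNode;
    }

    public static void main(String[] args) {
        // A circular list with a sentinel header (empty list: header points at itself).
        Node<String> header = new Node<>(null);
        header.next = header.prev = header;
        insertBefore(new Node<>("b"), header);      // append "b" (insert before header)
        insertBefore(new Node<>("a"), header.next); // insert "a" before "b"
        System.out.println(header.next.value + " " + header.next.next.value); // a b
    }
}
```

The work done by insertBefore is independent of the list's length; the length only matters when you have to find succ in the first place, which is exactly the entry(int) traversal from the question.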