In the CLRS textbook *Introduction to Algorithms*, there's the following paragraph on pg. 258:
We can delete an element in O(1) time if the lists are doubly linked. (Note that CHAINED-HASH-DELETE takes as input an element x and not its key k, so that we don't have to search for x first. If the hash table supports deletion, then its linked list should be doubly linked so that we can delete an item quickly. If the lists were only singly linked, then to delete element x, we would first have to find x in the list so that we could update the next attribute of x's predecessor. With singly linked lists, both deletion and searching would have the same asymptotic running times).
What puzzles me is this big parenthesis; I fail to understand its logic. With a doubly linked list, one still has to find x in order to delete it, so how is this different from a singly linked list? Please help me understand it!
The problem presented here is: suppose you are already looking at a particular element of a hash table's chain. How costly is it to delete it?
Suppose you have a singly linked list:
v ----> w ----> x ----> y ----> z
                |
           you're here
Now if you remove x, you need to connect w to y to keep your list linked. That is, you need to access w and tell it to point to y (so that you end up with w ----> y). But you can't access w from x, because the list is only singly linked! So you have to walk through the whole list from the head to find w, which takes O(n) operations, and only then can you tell it to link to y. That's bad.
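The singly linked case can be sketched in a few lines. This is a minimal illustration, not CLRS's pseudocode; the `Node` class and `delete_singly` function are hypothetical names chosen for the example:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def delete_singly(head, x):
    """Delete node x from a singly linked list.

    Even though we already hold a pointer to x, we must scan from
    the head to find x's predecessor (w) -- that scan is the O(n) cost.
    Returns the (possibly new) head of the list.
    """
    if head is x:
        return head.next          # x was the head; no predecessor to fix
    pred = head
    while pred.next is not x:     # O(n) walk to find w, x's predecessor
        pred = pred.next
    pred.next = x.next            # w now points to y, unlinking x
    return head
```

Note that the deletion itself is one pointer assignment; the entire cost is the search for the predecessor.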
Now suppose the list is doubly linked:

v <---> w <---> x <---> y <---> z
                |
           you're here
Cool: from x you can access both w and y directly, so you can connect the two (w <---> y) in O(1) operations!
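The doubly linked case needs no search at all, which is exactly CLRS's point. Again a minimal sketch with hypothetical names (`DNode`, `delete_doubly`), not the book's pseudocode:

```python
class DNode:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def delete_doubly(x):
    """Delete node x from a doubly linked list in O(1).

    x already carries pointers to both neighbors, so no traversal
    is needed -- just splice w and y together around x.
    """
    if x.prev is not None:
        x.prev.next = x.next      # w now points forward to y
    if x.next is not None:
        x.next.prev = x.prev      # y now points back to w
```

This is why CHAINED-HASH-DELETE, which receives the element x itself rather than its key, runs in O(1) when the chains are doubly linked: the function body is just these two pointer assignments.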