Can someone explain amortized complexity in layman's terms? I've been having a hard time finding a precise definition online and I don't know how it entirely relates to the analysis of algorithms. Anything useful, even if externally referenced, would be highly appreciated.
Amortized complexity is the total expense per operation, evaluated over a sequence of operations.
The idea is to guarantee the total expense of the entire sequence, while permitting individual operations to be much more expensive than the amortized cost.
Example: the behavior of C++ `std::vector<>`. When `push_back()` increases the vector size above its pre-allocated capacity, it doubles the allocated length. So a single `push_back()` may take O(N) time to execute (as the contents of the array are copied to the new memory allocation). However, because the size of the allocation was doubled, the next N-1 calls to `push_back()` will each take O(1) time to execute. So, the total of N operations will still take O(N) time, thereby giving `push_back()` an amortized cost of O(1) per operation.
Unless otherwise specified, amortized complexity is an asymptotic worst-case guarantee for any sequence of operations. This means:
Just as with non-amortized complexity, the big-O notation used for amortized complexity ignores both fixed initial overhead and constant performance factors. So, for the purpose of evaluating big-O amortized performance, you can generally assume that any sequence of amortized operations will be "long enough" to amortize away a fixed startup expense. Specifically, for the `std::vector<>` example, this is why you don't need to worry about whether you will actually encounter N additional operations: the asymptotic nature of the analysis already assumes that you will.
Aside from requiring arbitrary length, amortized analysis doesn't make assumptions about the sequence of operations whose cost you are measuring -- it is a worst-case guarantee on any possible sequence of operations. No matter how badly the operations are chosen (say, by a malicious adversary!), an amortized analysis guarantees that a long enough sequence of operations cannot cost consistently more than the sum of their amortized costs. This is why (unless specifically mentioned as a qualifier) "probability" and "average case" are not relevant to amortized analysis -- any more than they are to an ordinary worst-case big-O analysis!