While trying to understand the difference between Theta and O notation, I came across the following statement:
The Theta-notation asymptotically bounds a function from above and below. When
we have only an asymptotic upper bound, we use O-notation.
But I do not understand this. The book explains it mathematically, but it's too complex, and it becomes tedious to read when I don't understand it.
Can anyone explain the difference between the two using simple, yet powerful examples?
Big O gives only an asymptotic upper bound, while big Theta also gives a lower bound.
Everything that is Theta(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be Theta(f(n)) if it is both O(f(n)) and Omega(f(n)).
For this reason big-Theta is more informative than big-O notation, so if we can say something is big-Theta, that is usually preferred. However, it is harder to prove something is big-Theta than to prove it is big-O.
For example, merge sort is both O(n*log(n)) and Theta(n*log(n)), but it is also O(n^2), since n^2 is asymptotically "bigger" than n*log(n). However, it is NOT Theta(n^2), since the algorithm is NOT Omega(n^2).
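A quick numeric sketch of this point (the specific values of n below are just my own sample inputs): the ratio (n*log n) / n^2 shrinks toward 0 as n grows, which is exactly why n*log(n) stays below any constant multiple of n^2 (so it is O(n^2)), but no positive constant C1 can keep n*log(n) >= C1 * n^2 for all large n (so it is not Omega(n^2)).

```python
import math

# Compare merge sort's n*log(n) growth against n^2.
# The ratio (n * log2 n) / n^2 tends to 0 as n grows:
# n*log(n) is O(n^2), but NOT Omega(n^2).
for n in [10, 100, 1000, 10_000, 100_000]:
    ratio = (n * math.log2(n)) / n**2
    print(f"n = {n:>6}: (n log n) / n^2 = {ratio:.6f}")
```

Running this, you can watch the ratio drop by roughly a factor of ten at each step, which is the informal content of "n^2 is asymptotically bigger".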
Omega(n) is an asymptotic lower bound. If T(n) is Omega(f(n)), it means that from a certain n0 on, there is a constant C1 such that T(n) >= C1 * f(n). Whereas big-O says there is a constant C2 such that T(n) <= C2 * f(n).
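To make the constants concrete, here is a minimal check of both inequalities at once (the function T and the witnesses C1, C2, n0 below are my own example, not from the answer): take T(n) = 2n^2 + 3n and show it is Theta(n^2).

```python
# Claim: T(n) = 2n^2 + 3n is Theta(n^2), witnessed by C1 = 2, C2 = 3, n0 = 3:
#   C1 * n^2 <= T(n) <= C2 * n^2   for all n >= n0.
# (Lower bound: 2n^2 <= 2n^2 + 3n always. Upper bound: 3n <= n^2 once n >= 3.)
def T(n):
    return 2 * n**2 + 3 * n

C1, C2, n0 = 2, 3, 3
for n in range(n0, 1000):
    assert C1 * n**2 <= T(n) <= C2 * n**2
print(f"Both bounds hold for n = {n0} through 999")
```

A finite loop can't prove an asymptotic statement, of course; it only illustrates what "from a certain n0, there is a constant" means in the definitions above.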
All three (Omega, O, Theta) give only asymptotic information ("for large input"):
Note that this notation is not related to the best, worst, and average case analysis of algorithms. Each one of these notations can be applied to each analysis.
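To illustrate that last point (insertion sort here is my own example, not from the answer): the case analysis picks the input, and O/Omega/Theta then bound the cost on that input. Insertion sort performs Theta(n) comparisons in its best case (already-sorted input) and Theta(n^2) in its worst case (reverse-sorted input).

```python
def insertion_sort_comparisons(a):
    """Sort a copy of `a`; return the number of element comparisons made."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key vs a[j]
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 100
# Best case (sorted): n - 1 comparisons -> Theta(n).
print("sorted input:  ", insertion_sort_comparisons(range(n)))        # 99
# Worst case (reversed): n(n-1)/2 comparisons -> Theta(n^2).
print("reversed input:", insertion_sort_comparisons(range(n, 0, -1)))  # 4950
```

So "worst case O(n^2)", "best case Theta(n)", and similar phrases combine two independent choices: which inputs you analyze, and which bound you state.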