In an algorithms course I'm taking, it's said that depth-first search (DFS) is far more space-efficient than breadth-first search (BFS).
Why is that?
They seem to be doing basically the same thing, except that in DFS we push the current node's successors onto a stack, while in BFS we enqueue them.
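For concreteness, here is roughly what I have in mind (a minimal Python sketch of my assumption; the adjacency-list dict representation and the function names are just my own choices):

```python
from collections import deque

def bfs(graph, start):
    """BFS: successors of the current vertex go into a FIFO queue."""
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

def dfs_as_i_imagine_it(graph, start):
    """What I assume DFS is: the exact same code with a LIFO stack instead."""
    visited = {start}
    stack = [start]
    order = []
    while stack:
        v = stack.pop()
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                stack.append(w)
    return order
```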
Your confusion stems from the assumption that the DFS algorithm can be obtained from the BFS algorithm by replacing the FIFO queue with a LIFO stack.
This is a popular misconception, and it is simply not true: the classic DFS algorithm cannot be obtained by replacing the BFS queue with a stack. The difference between these algorithms is much more significant.
If you take the BFS algorithm and simply replace the FIFO queue with a LIFO stack, you will obtain something that can be called a pseudo-DFS algorithm. This pseudo-DFS will indeed reproduce the DFS forward traversal order of vertices, but it will not have DFS's space efficiency, and it will not support DFS backward traversal (backtracking).
The true classic DFS, meanwhile, cannot be obtained from BFS by such a naive queue-to-stack replacement; it is a different algorithm with a significantly different core structure. True DFS is a genuinely recursive algorithm that uses its stack for backtracking, not for storing the vertex discovery "front" (as the queue does in BFS). The most immediate consequence is that in DFS the maximum stack depth equals the length of the longest path the traversal follows from the origin before backtracking (the depth of the DFS tree), whereas in BFS (as in the aforementioned pseudo-DFS) the maximum queue size equals the width of the largest vertex discovery front.
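To make the structural difference concrete, here is a sketch of classic DFS, first in its natural recursive form and then as an iterative equivalent whose explicit stack is used purely for backtracking, holding one (vertex, neighbour-iterator) entry per vertex of the current path. The adjacency-list dict representation and the names are my own choices for illustration:

```python
def dfs_recursive(graph, v, visited=None):
    """Classic DFS: the call stack holds only the current path back to the
    origin, so its depth equals the depth reached in the DFS tree."""
    if visited is None:
        visited = set()
    visited.add(v)
    for w in graph[v]:
        if w not in visited:
            dfs_recursive(graph, w, visited)
    return visited

def dfs_iterative(graph, start):
    """The same traversal with an explicit stack used for backtracking:
    one (vertex, neighbour-iterator) pair per vertex of the current path."""
    visited = {start}
    order = [start]
    stack = [(start, iter(graph[start]))]
    while stack:
        v, neighbours = stack[-1]
        for w in neighbours:
            if w not in visited:
                visited.add(w)
                order.append(w)
                stack.append((w, iter(graph[w])))
                break
        else:
            stack.pop()  # every neighbour of v has been explored: backtrack
    return order
```

Note that neither stack ever holds a "front" of freshly discovered vertices; each holds exactly the path you would have to walk back along to return to the origin.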
The most prominent and extreme example illustrating the difference in peak memory consumption between DFS and BFS (as well as pseudo-DFS) is a star graph: a single central vertex surrounded by a large number (say, 1000) of peripheral vertices, each connected to the central vertex by an edge. If you run BFS on this graph using the central vertex as the origin, the queue size immediately jumps to 1000. The same thing obviously happens with pseudo-DFS (i.e. if you simply replace the queue with a stack). But the classic DFS algorithm needs a stack depth of only 1 (!) to traverse this entire graph. See the difference? 1000 versus 1. This is what is meant by the better space efficiency of DFS.
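If it helps, here is a small sketch that measures both quantities on such a star graph. The helpers and the dict-of-adjacency-lists representation are my own; depth is counted as distance from the origin, so the origin itself is at depth 0:

```python
from collections import deque

def bfs_peak_queue(graph, start):
    """Run BFS and return the largest queue size ever observed."""
    visited = {start}
    queue = deque([start])
    peak = len(queue)
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)
        peak = max(peak, len(queue))
    return peak

def dfs_peak_depth(graph, start):
    """Run classic DFS and return the deepest level reached
    (the origin counts as depth 0)."""
    visited = {start}
    peak = 0
    def visit(v, depth):
        nonlocal peak
        peak = max(peak, depth)
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                visit(w, depth + 1)
    visit(start, 0)
    return peak

# Star graph: central vertex 0 joined to peripheral vertices 1..1000.
star = {0: list(range(1, 1001))}
star.update({i: [0] for i in range(1, 1001)})

print(bfs_peak_queue(star, 0))  # 1000 -- the whole periphery sits in the queue at once
print(dfs_peak_depth(star, 0))  # 1    -- DFS is never more than one step from the origin
```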
Basically, take any book on algorithms, find a description of classic DFS and see how it works. You will notice that the difference between BFS and DFS is far more extensive than a mere queue-versus-stack substitution.
P.S. It should also be said that one can construct graphs whose peak memory consumption is smaller under BFS than under DFS. So the statement about the better space efficiency of DFS should be seen as something that applies "on average" to some implied class of "nice" graphs.
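For instance, reusing the helpers from the star-graph sketch above, a long path graph reverses the comparison (kept short enough here to stay within Python's default recursion limit):

```python
# Path graph 0 - 1 - 2 - ... - 499: each vertex is joined to its neighbours.
n = 500
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

print(bfs_peak_queue(path, 0))  # 1   -- the discovery front is a single vertex
print(dfs_peak_depth(path, 0))  # 499 -- the DFS stack holds the entire path
```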