I've been working on adding a Quadtree to a program that I'm writing, and I can't help but notice that there are few well-explained, well-performing tutorials for the implementation that I'm looking for.
Specifically, I'm looking for a list of the methods commonly used in a Quadtree (retrieve, insert, remove, etc.) with pseudocode for how to implement them (or just a description of their processes), along with maybe some tips to improve performance. This is for collision detection, so it'd be best explained with 2D rectangles in mind, as they are the objects that will be stored.
All right, I'll take a shot at this. First a teaser to show the results of what I'll propose involving 20,000 agents (just something I whipped up real quick for this specific question):
The GIF has extremely reduced frame rate and significantly lower res to fit the 2 MB maximum for this site. Here's a video if you want to see the thing at close to full speed: https://streamable.com/3pgmn.
And here's a GIF with 100k agents, though I had to fiddle with it quite a bit to get it to fit in 2 megabytes: I had to turn off the quadtree lines (it didn't seem to want to compress as much with them on) and change the way the agents look (I wish making a GIF was as easy as coding a quadtree):
The simulation with 20k agents takes ~3 megabytes of RAM. I can also easily handle 100k smaller agents without sacrificing frame rates, though it leads to a bit of a mess on the screen to the point where you can barely see what's going on as in the GIF above. This is all running in just one thread on my i7 and I'm spending almost half the time according to VTune just drawing this stuff on the screen (just using some basic scalar instructions to plot things one pixel at a time in CPU).
And here's a video with 100,000 agents, though it's hard to see what's going on. It's kind of a big video and I couldn't find any decent way to compress it without the whole thing turning into mush (might need to download or cache it first to see it stream at reasonable FPS). With 100k agents the simulation takes around 4.5 megabytes of RAM and the memory use is very stable after running the simulation for about 5 seconds (stops going up or down since it ceases to heap allocate). Same thing in slow motion.
Efficient Quadtree for Collision Detection
All right, so actually quadtrees are not my favorite data structure for this purpose. I tend to prefer a grid hierarchy, like a coarse grid for the world, a finer grid for a region, and an even finer grid for a sub-region (3 fixed levels of dense grids, and no trees involved), with row-based optimizations so that a row that has no entities in it will be deallocated and turned into a null pointer, and likewise completely empty regions or sub-regions turned into nulls. While this simple implementation of the quadtree running in one thread can handle 100k agents on my i7 at 60+ FPS, I've implemented grids that can handle a couple million agents bouncing off each other every frame on older hardware (an i3). Also I always liked how grids made it very easy to predict how much memory they'll require, since they don't subdivide cells. But I'll try to cover how to implement a reasonably efficient quadtree.
Note that I won't go into the full theory of the data structure. I'm assuming that you already know that and are interested in improving performance. I'm also just going into my personal way of tackling this problem, which seems to outperform most of the solutions I find online for my cases, but there are many decent ways and these solutions are tailor-fitted to my use cases (very large inputs with everything moving every frame for visual FX in films and television). Other people probably optimize for different use cases. When it comes to spatial indexing structures in particular, I really think the efficiency of the solution tells you more about the implementer than the data structure. The same strategies I'll propose for speeding things up also apply in 3 dimensions with octrees.
Node Representation
So first of all, let's cover the node representation:
// Represents a node in the quadtree.
struct QuadNode
{
// Points to the first child if this node is a branch or the first
// element if this node is a leaf.
int32_t first_child;
// Stores the number of elements in the leaf or -1 if this node is
// not a leaf.
int32_t count;
};
It's a total of 8 bytes, and this is very important as it's a key part of the speed. I actually use a smaller one (6 bytes per node) but I'll leave that as an exercise to the reader.
You can probably do without the count. I include that for pathological cases to avoid linearly traversing the elements and counting them each time a leaf node might split. In most common cases a node shouldn't store that many elements. However, I work in visual FX and the pathological cases aren't necessarily rare. You can encounter artists creating content with a boatload of coincident points, massive polygons that span the entire scene, etc., so I end up storing a count.
Where are the AABBs?
So one of the first things people might be wondering is where the bounding boxes (rectangles) are for the nodes. I don't store them. I compute them on the fly. I'm kinda surprised most people don't do that, judging by the code I've seen. For me, they're only stored with the tree structure (basically just one AABB for the root).
That might seem like it'd be more expensive to be computing these on the fly, but reducing the memory use of a node can proportionally reduce cache misses when you traverse the tree, and those cache miss reductions tend to be more significant than having to do a couple of bitshifts and some additions/subtractions during traversal. Traversal looks like this:
static QuadNodeList find_leaves(const Quadtree& tree, const QuadNodeData& root, const int rect[4])
{
QuadNodeList leaves, to_process;
to_process.push_back(root);
while (to_process.size() > 0)
{
const QuadNodeData nd = to_process.pop_back();
// If this node is a leaf, insert it to the list.
if (tree.nodes[nd.index].count != -1)
leaves.push_back(nd);
else
{
// Otherwise push the children that intersect the rectangle.
const int mx = nd.crect[0], my = nd.crect[1];
const int hx = nd.crect[2] >> 1, hy = nd.crect[3] >> 1;
const int fc = tree.nodes[nd.index].first_child;
const int l = mx-hx, t = my-hy, r = mx+hx, b = my+hy;
if (rect[1] <= my)
{
if (rect[0] <= mx)
to_process.push_back(child_data(l,t, hx, hy, fc+0, nd.depth+1));
if (rect[2] > mx)
to_process.push_back(child_data(r,t, hx, hy, fc+1, nd.depth+1));
}
if (rect[3] > my)
{
if (rect[0] <= mx)
to_process.push_back(child_data(l,b, hx, hy, fc+2, nd.depth+1));
if (rect[2] > mx)
to_process.push_back(child_data(r,b, hx, hy, fc+3, nd.depth+1));
}
}
}
return leaves;
}
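For reference, the traversal above assumes a couple of small helpers that I haven't shown. The exact layout is up to you, but a minimal sketch consistent with how find_leaves uses them might look like this (QuadNodeList would just be something like SmallList&lt;QuadNodeData&gt;):
// Per-node data carried on the traversal stack: the node's computed
// rectangle in centered form (center x, center y, half width, half height),
// its index into tree.nodes, and its depth.
struct QuadNodeData
{
    int crect[4];
    int index;
    int depth;
};
// Builds the traversal data for a child node from its computed center,
// half-size, node index, and depth.
static QuadNodeData child_data(int mx, int my, int hx, int hy, int index, int depth)
{
    QuadNodeData cd;
    cd.crect[0] = mx;
    cd.crect[1] = my;
    cd.crect[2] = hx;
    cd.crect[3] = hy;
    cd.index = index;
    cd.depth = depth;
    return cd;
}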
Omitting the AABBs is one of the most unusual things I do (I keep looking for other people doing it just to find a peer and fail), but I've measured the before and after and it did reduce times considerably, at least on very large inputs, to compact the quadtree node substantially and just compute AABBs on the fly during traversal. Space and time aren't always diametrically opposed. Sometimes reducing space also means reducing time given how much performance is dominated by the memory hierarchy these days. I've even sped up some real world operations applied on massive inputs by compressing the data to quarter the memory use and decompressing on the fly.
I don't know why many people choose to cache the AABBs: whether it's programming convenience or if it's genuinely faster in their cases. Yet for data structures which split evenly down the center like regular quadtrees and octrees, I'd suggest measuring the impact of omitting the AABBs and computing them on the fly. You might be quite surprised. Of course it makes sense to store AABBs for structures that don't split evenly like Kd-trees and BVHs as well as loose quadtrees.
Floating-Point
I don't use floating-point for spatial indexes and maybe that's why I see improved performance just computing AABBs on the fly with right shifts for division by 2 and so forth. That said, at least SPFP (single-precision floating-point) seems really fast nowadays. I don't know since I haven't measured the difference. I just use integers by preference even though I'm generally working with floating-point inputs (mesh vertices, particles, etc.). I just end up converting them to integer coordinates for the purpose of partitioning and performing spatial queries. I'm not sure if there's any major speed benefit of doing this anymore. It's just a habit and preference since I find it easier to reason about integers without having to think about denormalized FP and all that.
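If your inputs are floating-point, the conversion to integer coordinates can be as simple as a scale and a floor. A rough sketch (the scale factor here is a hypothetical parameter you'd pick so the world fits comfortably in the integer range you want to work with):
#include &lt;cmath&gt;

// Converts a floating-point rectangle (l, t, r, b) to the integer
// coordinates used by the quadtree. 'scale' is up to you; std::floor keeps
// the rounding consistent for negative coordinates.
static void to_quad_rect(const float world[4], float scale, int out[4])
{
    for (int j = 0; j < 4; ++j)
        out[j] = static_cast&lt;int&gt;(std::floor(world[j] * scale));
}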
Centered AABBs
While I only store a bounding box for the root, it helps to use a representation that stores a center and half size for nodes while using a left/top/right/bottom representation for queries to minimize the amount of arithmetic involved.
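To make that concrete, the centered representation (the QuadCRect that shows up in the tree representation further down) might look something like this; the field names here are just illustrative:
// Centered rectangle: center coordinates plus half-size extents. Only the
// root stores one of these; node rects are derived from it during traversal.
struct QuadCRect
{
    int mid_x, mid_y;
    int half_w, half_h;
};
// Queries, by contrast, are passed around as plain left/top/right/bottom
// bounds: int query_rect[4] = {l, t, r, b};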
Contiguous Children
This is likewise key, and if we refer back to the node rep:
struct QuadNode
{
int32_t first_child;
...
};
We don't need to store an array of children because all 4 children are contiguous:
first_child+0 = index to 1st child (TL)
first_child+1 = index to 2nd child (TR)
first_child+2 = index to 3rd child (BL)
first_child+3 = index to 4th child (BR)
This not only significantly reduces cache misses on traversal but also allows us to significantly shrink our nodes which further reduces cache misses, storing only one 32-bit index (4 bytes) instead of an array of 4 (16 bytes).
This does mean that if you need to transfer elements to just a couple of quadrants of a parent when it splits, it must still allocate all 4 child leaves to store elements in just two quadrants while having two empty leaves as children. However, the trade-off is more than worth it performance-wise at least in my use cases, and remember that a node only takes 8 bytes given how much we've compacted it.
When deallocating children, we deallocate all four at a time. I do this in constant-time using an indexed free list, except we're pooling out memory chunks containing 4 contiguous nodes instead of one at a time. This makes it so we usually don't need to involve any heap allocations or deallocations during the simulation. A group of 4 nodes gets marked as freed indivisibly only to then be reclaimed indivisibly in a subsequent split of another leaf node.
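To sketch the allocation side of that (the freeing side shows up in the cleanup code below), splitting a leaf might grab a block of 4 nodes like this. allocate_children is a hypothetical member function working against the nodes array and free_node index shown later in the tree representation:
// Allocates 4 contiguous child nodes and returns the index of the first.
// Reclaims a previously freed block if one is available; otherwise grows
// the nodes array by 4.
int Quadtree::allocate_children()
{
    int first;
    if (free_node != -1)
    {
        // Pop a block of 4 contiguous nodes off the free list.
        first = free_node;
        free_node = nodes[first].first_child;
    }
    else
    {
        // Otherwise append 4 fresh nodes to the back of the array.
        first = static_cast&lt;int&gt;(nodes.size());
        nodes.resize(nodes.size() + 4);
    }
    // Initialize all 4 children as empty leaves.
    for (int j = 0; j < 4; ++j)
    {
        nodes[first + j].first_child = -1;
        nodes[first + j].count = 0;
    }
    return first;
}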
Deferred Cleanup
I don't update the quadtree's structure right away upon removing elements. When I remove an element, I just descend down the tree to the child node(s) it occupies and then remove the element, but I don't bother to do anything more just yet even if the leaves become empty.
Instead I do a deferred cleanup like this:
void Quadtree::cleanup()
{
// Only process the root if it's not a leaf.
SmallList<int> to_process;
if (nodes[0].count == -1)
to_process.push_back(0);
while (to_process.size() > 0)
{
const int node_index = to_process.pop_back();
QuadNode& node = nodes[node_index];
// Loop through the children.
int num_empty_leaves = 0;
for (int j=0; j < 4; ++j)
{
const int child_index = node.first_child + j;
const QuadNode& child = nodes[child_index];
// Increment empty leaf count if the child is an empty
// leaf. Otherwise if the child is a branch, add it to
// the stack to be processed in the next iteration.
if (child.count == 0)
++num_empty_leaves;
else if (child.count == -1)
to_process.push_back(child_index);
}
// If all the children were empty leaves, remove them and
// make this node the new empty leaf.
if (num_empty_leaves == 4)
{
// Push all 4 children to the free list.
nodes[node.first_child].first_child = free_node;
free_node = node.first_child;
// Make this node the new empty leaf.
node.first_child = -1;
node.count = 0;
}
}
}
This is called at the end of every single frame after moving all the agents. The reason I do this kind of deferred removal of empty leaf nodes in multiple iterations, and not all at once in the process of removing a single element, is that element A might move to node N2, making N1 empty. However, element B might, in the same frame, move to N1, making it occupied again.
With the deferred cleanup, we can handle such cases without unnecessarily removing children only to add them right back when another element moves into that quadrant.
Moving elements in my case is straightforward: 1) remove the element, 2) move it, 3) reinsert it into the quadtree. After we move all the elements, at the end of a frame (not time step; there could be multiple time steps per frame), the cleanup function above is called to remove the children from a parent which has 4 empty leaves as children, which effectively turns that parent into the new empty leaf which might then be cleaned up in the next frame with a subsequent cleanup call (or not, if things get inserted into it or if the empty leaf's siblings are non-empty).
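At the API level, that move is something like this rough sketch. The remove and insert member functions here are hypothetical names standing in for the removal and reinsertion steps; the point is just the overall flow:
// Moves an element by removing it from the leaves it occupies, updating
// its rectangle, and reinserting it into the leaves its new rectangle
// overlaps. 'element' is an index into the tree's element list.
void move_element(Quadtree& tree, int element, int x1, int y1, int x2, int y2)
{
    tree.remove(element);       // unlink its element nodes from the leaves
    tree.elts[element].x1 = x1; // update the element's rectangle
    tree.elts[element].y1 = y1;
    tree.elts[element].x2 = x2;
    tree.elts[element].y2 = y2;
    tree.insert(element);       // relink it into the leaves it now overlaps
}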
Let's walk through the deferred cleanup step by step. Say we remove some elements from the tree, leaving a parent with 4 empty leaves.
At this point, if we call cleanup, it will find those 4 empty child leaves, remove them, and turn their parent into an empty leaf.
Let's say we remove some more elements and then call cleanup again.
If we call it yet again, the root itself turns into an empty leaf. However, the cleanup method never removes the root (it only removes children). Again, the main point of doing it deferred this way and in multiple steps is to reduce the amount of potentially redundant work that could occur per time step (which can be a lot) if we did this all immediately every single time an element is removed from the tree. It also helps to distribute that work across frames to avoid stutters.
TBH, I originally applied this "deferred cleanup" technique in a DOS game I wrote in C out of sheer laziness! I didn't want to bother with descending down the tree, removing elements, and then removing nodes in a bottom-up fashion back then, because I originally wrote the tree to favor top-down traversal (not top-down and back up again), and I really thought this lazy solution was a productivity compromise (sacrificing optimal performance to get it implemented faster). However, many years later, I actually got around to implementing quadtree removal in ways that immediately started removing nodes and, to my surprise, it made things significantly slower with more unpredictable and stuttery frame rates. The deferred cleanup, in spite of originally being inspired by my programmer laziness, was actually (and accidentally) a very effective optimization for dynamic scenes.
Singly-Linked Index Lists for Elements
For elements, I use this representation:
// Represents an element in the quadtree.
struct QuadElt
{
// Stores the ID for the element (can be used to
// refer to external data).
int id;
// Stores the rectangle for the element.
int x1, y1, x2, y2;
};
// Represents an element node in the quadtree.
struct QuadEltNode
{
// Points to the next element node in the leaf. A value of -1
// indicates the end of the list.
int next;
// Stores the element index.
int element;
};
I use an "element node" which is separate from "element". An element is only inserted once to the quadtree no matter how many cells it occupies. However, for each cell it occupies, an "element node" is inserted which indexes that element.
The element node is a singly-linked index list node into an array, and also using the free list method above. This incurs some more cache misses over storing all the elements contiguously for a leaf. However, given that this quadtree is for very dynamic data which is moving and colliding every single time step, it generally takes more processing time than it saves to seek out a perfectly contiguous representation for the leaf elements (you would effectively have to implement a variable-sized memory allocator which is really fast, and that's far from an easy thing to do). So I use the singly-linked index list which allows a free list constant-time approach to allocation/deallocation. When you use that representation, you can transfer elements from split parents to new leaves by just changing a few integers.
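As a minimal sketch of what that looks like against the structures in this answer (the helper itself is just illustrative), inserting an element into a leaf is pushing an element node onto the front of the leaf's list, and transferring elements from a split parent to its new children is doing the same with the element nodes the parent already owns:
// Pushes an element (by index) onto the front of a leaf's element node list.
static void leaf_push_element(Quadtree& tree, int leaf_index, int element)
{
    QuadEltNode elt_node;
    // The new element node points to what was previously the head of the list.
    elt_node.next = tree.nodes[leaf_index].first_child;
    elt_node.element = element;
    // Insert the element node and make it the new head of the leaf's list.
    tree.nodes[leaf_index].first_child = tree.elt_nodes.insert(elt_node);
    ++tree.nodes[leaf_index].count;
}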
SmallList<T>
Oh, I should mention this. Naturally it helps if you don't heap allocate just to store a temporary stack of nodes for non-recursive traversal. SmallList&lt;T&gt; is similar to vector&lt;T&gt; except it won't involve a heap allocation until you insert more than 128 elements into it. It's similar to SBO string optimizations in the C++ standard lib. It's something I implemented and have been using for ages, and it does help a lot to make sure you use the stack whenever possible.
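I won't paste the real thing here, but a stripped-down sketch of the idea for trivially copyable types might look like the following (the real one handles more cases):
#include &lt;cstdlib&gt;
#include &lt;cstring&gt;

// Small-buffer list: stores up to 128 elements in a fixed buffer before
// falling back to the heap. Only suitable for trivially copyable types.
template &lt;class T&gt;
class SmallList
{
public:
    SmallList(): data(buf), num(0), cap(128) {}
    SmallList(const SmallList& other): data(buf), num(0), cap(128)
    {
        // Deep-copy so the fixed buffer is never shared between lists.
        for (int j = 0; j < other.num; ++j)
            push_back(other.data[j]);
    }
    SmallList& operator=(const SmallList& other)
    {
        if (this != &other)
        {
            num = 0;
            for (int j = 0; j < other.num; ++j)
                push_back(other.data[j]);
        }
        return *this;
    }
    ~SmallList()
    {
        if (data != buf)
            std::free(data);
    }
    int size() const { return num; }
    void push_back(const T& element)
    {
        if (num >= cap)
        {
            // Fixed storage exhausted: grow on the heap.
            cap *= 2;
            T* new_data = static_cast&lt;T*&gt;(std::malloc(cap * sizeof(T)));
            std::memcpy(new_data, data, num * sizeof(T));
            if (data != buf)
                std::free(data);
            data = new_data;
        }
        data[num++] = element;
    }
    T pop_back() { return data[--num]; }
    T& operator[](int n) { return data[n]; }
    const T& operator[](int n) const { return data[n]; }
private:
    T buf[128];
    T* data;
    int num, cap;
};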
Tree Representation
Here's the representation of the quadtree itself:
struct Quadtree
{
// Stores all the elements in the quadtree.
FreeList<QuadElt> elts;
// Stores all the element nodes in the quadtree.
FreeList<QuadEltNode> elt_nodes;
// Stores all the nodes in the quadtree. The first node in this
// sequence is always the root.
std::vector<QuadNode> nodes;
// Stores the quadtree extents.
QuadCRect root_rect;
// Stores the first free node in the quadtree to be reclaimed as 4
// contiguous nodes at once. A value of -1 indicates that the free
// list is empty, at which point we simply insert 4 nodes to the
// back of the nodes array.
int free_node;
// Stores the maximum depth allowed for the quadtree.
int max_depth;
};
As pointed out above, we store a single rectangle for the root (root_rect). All sub-rects are computed on the fly. All nodes are stored contiguously in an array (std::vector&lt;QuadNode&gt;) along with the elements and element nodes (in FreeList&lt;T&gt;).
FreeList<T>
I use a FreeList data structure which is basically an array (and random-access sequence) that lets you remove elements from anywhere in constant-time (leaving holes behind which get reclaimed upon subsequent insertions in constant-time). Here's a simplified version which doesn't bother with handling non-trivial data types (doesn't use placement new or manual destruction calls):
#include &lt;vector&gt;

/// Provides an indexed free list with constant-time removals from anywhere
/// in the list without invalidating indices. T must be trivially constructible
/// and destructible.
template <class T>
class FreeList
{
public:
/// Creates a new free list.
FreeList();
/// Inserts an element to the free list and returns an index to it.
int insert(const T& element);
// Removes the nth element from the free list.
void erase(int n);
// Removes all elements from the free list.
void clear();
// Returns the range of valid indices.
int range() const;
// Returns the nth element.
T& operator[](int n);
// Returns the nth element.
const T& operator[](int n) const;
private:
union FreeElement
{
T element;
int next;
};
std::vector<FreeElement> data;
int first_free;
};
template <class T>
FreeList<T>::FreeList(): first_free(-1)
{
}
template <class T>
int FreeList<T>::insert(const T& element)
{
if (first_free != -1)
{
const int index = first_free;
first_free = data[first_free].next;
data[index].element = element;
return index;
}
else
{
FreeElement fe;
fe.element = element;
data.push_back(fe);
return static_cast<int>(data.size() - 1);
}
}
template <class T>
void FreeList<T>::erase(int n)
{
data[n].next = first_free;
first_free = n;
}
template <class T>
void FreeList<T>::clear()
{
data.clear();
first_free = -1;
}
template <class T>
int FreeList<T>::range() const
{
return static_cast<int>(data.size());
}
template <class T>
T& FreeList<T>::operator[](int n)
{
return data[n].element;
}
template <class T>
const T& FreeList<T>::operator[](int n) const
{
return data[n].element;
}
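Usage is about what you'd expect from the interface: indices stay stable across erasures, which is what lets the tree refer to elements and element nodes by plain integers. A quick illustration using the QuadElt type from earlier:
#include &lt;cassert&gt;

void free_list_example()
{
    FreeList&lt;QuadElt&gt; elts;
    const int a = elts.insert(QuadElt{0, 0, 0, 10, 10});
    const int b = elts.insert(QuadElt{1, 5, 5, 15, 15});

    // Erasing leaves a hole at index 'a'; index 'b' is unaffected.
    elts.erase(a);
    assert(elts[b].id == 1);

    // The next insertion reclaims the hole in constant time.
    const int c = elts.insert(QuadElt{2, 20, 20, 30, 30});
    assert(c == a);
}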
I have one which does work with non-trivial types and provides iterators and so forth but is much more involved. These days I tend to work more with trivially constructible/destructible C-style structs anyway (only using non-trivial user-defined types for high-level stuff).
Maximum Tree Depth
I prevent the tree from subdividing too much by specifying a max depth allowed. For the quick simulation I whipped up, I used 8. For me this is crucial since, again, in VFX I encounter pathological cases a lot, including content created by artists with lots of coincident or overlapping elements which, without a maximum tree depth limit, would cause it to try to subdivide indefinitely.
There is a bit of fine-tuning if you want optimal performance with respect to the max depth allowed and how many elements you allow to be stored in a leaf before it splits into 4 children. I tend to find the optimal results are gained with something around 8 elements max per node before it splits, and a max depth set so that the smallest cell size is a little over the size of your average agent (otherwise more individual agents will end up inserted into multiple leaves).
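In code form, the split decision at insertion time boils down to a check in this spirit. Here max_elements is a hypothetical tuning constant (the ~8 mentioned above), and QuadNodeData is the traversal data sketched earlier:
// Decides whether a leaf should split into 4 children after an insertion.
// Leaves at max_depth never split, no matter how many elements they hold.
static bool should_split(const Quadtree& tree, const QuadNodeData& leaf, int max_elements)
{
    return tree.nodes[leaf.index].count > max_elements && leaf.depth < tree.max_depth;
}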
Collision and Queries
There are a couple of ways to do the collision detection and queries. I often see people do it like this:
for each element in scene:
use quad tree to check for collision against other elements
This is very straightforward but the problem with this approach is that the first element in the scene might be in a totally different location in the world from the second. As a result, the paths we take down the quadtree can be totally sporadic. We might traverse one path down to a leaf for the first element and then not visit that same path again until a much later element, say the 50,000th. By this time the nodes involved in that path may have already been evicted from the CPU cache. So I recommend doing it this way:
traversed = {}
gather quadtree leaves
for each leaf in leaves:
{
for each element in leaf:
{
if not traversed[element]:
{
use quad tree to check for collision against other elements
traversed[element] = true
}
}
}
While that's quite a bit more code and requires we keep a traversed bitset or parallel array of some sort to avoid processing elements twice (since they may be inserted in more than one leaf), it helps make sure that we descend the same paths down the quadtree throughout the loop. That keeps things much more cache-friendly. Also, if after attempting to move the element in the time step it's still encompassed entirely in that leaf node, we don't even need to work our way back up again from the root (we can just check that one leaf only).
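Sketched in C++ against the structures in this answer (process_collisions and its region parameter are illustrative, and the per-pair collision test itself is left out), the loop looks something like this:
// Processes collisions leaf by leaf so that the queries issued for elements
// in the same leaf walk the same (still cache-hot) paths down the tree.
// 'traversed' prevents processing an element twice when it straddles leaves.
void process_collisions(Quadtree& tree, const QuadNodeData& root, const int region[4])
{
    std::vector&lt;bool&gt; traversed(tree.elts.range(), false);
    QuadNodeList leaves = find_leaves(tree, root, region);
    for (int j = 0; j < leaves.size(); ++j)
    {
        const QuadNodeData& leaf = leaves[j];
        // Walk the leaf's singly-linked list of element nodes.
        for (int en = tree.nodes[leaf.index].first_child; en != -1; en = tree.elt_nodes[en].next)
        {
            const int element = tree.elt_nodes[en].element;
            if (!traversed[element])
            {
                // Use the tree to find elements overlapping this element's
                // rectangle and resolve the collisions (not shown).
                traversed[element] = true;
            }
        }
    }
}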
Common Inefficiencies: Things to Avoid
While there are many ways to skin the cat and achieve an efficient solution, there is a common way to achieve a very inefficient solution. And I've encountered my share of very inefficient quadtrees, kd trees, and octrees in my career working in VFX. We're talking over a gigabyte of memory use just to partition a mesh with 100k triangles while taking 30 secs to build, when a decent implementation should be able to do the same hundreds of times a second and just take a few megs. There are many people whipping these up to solve problems who are theoretical wizards but didn't pay much attention to memory efficiency.
So the absolute most common no-no I see is to store one or more full-blown containers with each tree node. By full-blown container, I mean something that owns and allocates and frees its own memory, like this:
struct Node
{
...
// Stores the elements in the node.
List<Element> elements;
};
And List&lt;Element&gt; might be a list in Python, an ArrayList in Java or C#, std::vector in C++, etc.: some data structure that manages its own memory/resources.
The problem here is that while such containers are very efficiently implemented for storing a large number of elements, all of them in all languages are extremely inefficient if you instantiate a boatload of them only to store a few elements in each one. One of the reasons is that the container metadata tends to be quite explosive in memory usage at such a granular level of a single tree node. A container might need to store a size, capacity, a pointer/reference to data it allocates, etc, and all for a generalized purpose so it might use 64-bit integers for size and capacity. As a result the metadata just for an empty container might be 24 bytes which is already 3 times larger than the entirety of the node representation I proposed, and that's just for an empty container designed to store elements in leaves.
Furthermore each container often wants to either heap/GC-allocate on insertion or require even more preallocated memory in advance (at which point it might take 64 bytes just for the container itself). So that either becomes slow because of all the allocations (note that GC allocations are really fast initially in some implementations like JVM, but that's only for the initial burst Eden cycle) or because we're incurring a boatload of cache misses because the nodes are so huge that barely any fit into the lower levels of the CPU cache on traversal, or both.
Yet this is a very natural inclination and makes intuitive sense, since we talk about these structures theoretically using language like "elements are stored in the leaves", which suggests storing a container of elements in leaf nodes. Unfortunately it has an explosive cost in terms of memory use and processing. So avoid this if the desire is to create something reasonably efficient. Make the Node share and point to (refer to) or index memory allocated and stored for the entire tree, not for every single individual node. In actuality the elements shouldn't be stored in the leaves.
Elements should be stored in the tree and leaf nodes should index or point to those elements.
Conclusion
Phew, so those are the main things I do to achieve what is hopefully considered a decent-performing solution. I hope that helps. Note that I am aiming this at a somewhat advanced level for people who have already implemented quadtrees at least once or twice. If you have any questions, feel free to shoot.
Since this question is a bit broad, I might come back and edit it and keep tweaking and expanding it over time if it doesn't get closed (I love these types of questions since they give us an excuse to write about our experiences working in the field, but the site doesn't always like them). I'm also hoping some experts might jump in with alternative solutions I can learn from and perhaps use to improve mine further.
Again quadtrees aren't actually my favorite data structure for extremely dynamic collision scenarios like this. So I probably have a thing or two to learn from people who do favor quadtrees for this purpose and have been tweaking and tuning them for years. Mostly I use quadtrees for static data that doesn't move around every frame, and for those I use a very different representation from the one proposed above.