Googletest: How to run tests asynchronously?

jotrocken · Jul 2, 2014 · Viewed 7k times

Consider a large project with thousands of tests, some of which take multiple minutes to complete. When executed sequentially, the whole set of tests takes more than an hour to finish. The testing time could be reduced by executing tests in parallel.

As far as I know, googletest/googlemock offers no way to do this directly, such as an --async option. Or am I wrong?

One solution is to determine which tests can run in parallel and write a script that starts each in a separate job, e.g.

./test --gtest_filter=TestSet.test1 &
./test --gtest_filter=TestSet.test2 &
...
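Such a script also needs to collect the jobs' exit statuses, or failures go unnoticed. A minimal POSIX-shell sketch of that pattern follows; the `run_test` function is a hypothetical stand-in for the real `./test` binary so the snippet is self-contained:

```shell
#!/bin/sh
# Hypothetical stand-in for `./test --gtest_filter=...`, for illustration only.
run_test() {
  echo "[ RUN      ] $1"
  echo "[       OK ] $1"
}

# Launch each filtered run as a background job and remember its PID.
run_test TestSet.test1 & pid1=$!
run_test TestSet.test2 & pid2=$!

# Wait for every job; a non-zero exit from any of them fails the whole run.
status=0
wait "$pid1" || status=1
wait "$pid2" || status=1
echo "overall exit status: $status"
```

With the real binary, each `run_test` line becomes `./test --gtest_filter=... &`, and `wait` plus the `status` variable turn the parallel runs back into a single pass/fail result for CI.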

But this would require additional maintenance effort and introduce another "layer" between the test code and its execution. I'd like a more convenient solution. For example, one could add suffixed variants of the TEST and TEST_F macros, say TEST_ASYNC and TEST_F_ASYNC. Tests defined with TEST_ASYNC would then be executed by independent threads, all starting at the same time.

How can this be achieved? Or is there another solution?

Answer

pbos · Jul 8, 2017

Late response, but I'll put it here for anyone searching for a similar answer. Working on WebRTC, I found a similar need to speed up our test execution. Running all of our tests sequentially took more than 20 minutes, and a bunch of them spent at least some of that time waiting (so they didn't even fully utilize a core).

Even for "proper unit tests" I'd argue this is still relevant, because there's a difference between your single-threaded tests taking 20 seconds to execute and taking ~1 second (on a massively parallel workstation, this kind of speedup is not uncommon).

To solve this for us, I developed a script that executes tests in parallel. It is stable enough to run on our continuous integration, and released here: https://github.com/google/gtest-parallel/

This Python script essentially takes the tests of one or more specified gtest binaries (optionally narrowed with --gtest_filter=Foo), splits them up across several workers, and runs individual tests in parallel. This works fine as long as the tests are independent (don't write to shared files, etc.). For tests that weren't, we put them in a webrtc_nonparallel_tests binary and ran those separately, but the vast majority were already fine, and we fixed several of the rest because we wanted the speedup.
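The splitting step the answer describes can be sketched in shell: ask the binary for its test list via the real `--gtest_list_tests` flag, flatten the suite/case output into full names, and dispatch one filtered process per name. Below, the binary's list output is inlined as a string (its shape matches googletest's, but the suite and case names are made up) so the sketch runs without an actual test binary:

```shell
#!/bin/sh
# Simulated output of `./test --gtest_list_tests` (a real googletest flag);
# hypothetical names, inlined so the sketch is self-contained.
list_output='TestSet.
  test1
  test2
MathTest.
  add'

# Flatten "Suite." header lines plus indented case lines into full names.
names=$(printf '%s\n' "$list_output" |
  awk '/\.$/ { suite = $1 } /^  / { print suite $1 }')

# Each name would become one background job; echoed here instead of run.
for n in $names; do
  echo "would run: ./test --gtest_filter=$n &"
done
```

A real runner such as gtest-parallel additionally caps the number of concurrent jobs to the worker count and collects each child's exit status, but the list-flatten-dispatch loop above is the core of the technique.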