Until now, I've used an improvised unit testing procedure - basically a whole load of unit test programs run automatically by a batch file. Although a lot of these explicitly check their results, a lot more cheat - they dump their results out to text files which are versioned. Any change in the test results gets flagged by Subversion, and I can easily identify what the change was. Many of the tests output dot files or some other form that lets me get a visual representation of the output.
The trouble is that I'm switching to CMake. Going with the CMake flow means using out-of-source builds, which means the convenience of dumping results into a shared source/build folder and versioning them along with the source doesn't really work.
As a replacement, what I'd like to do is to tell the unit test tool where to find files of expected results (in the source tree) and get it to do the comparison. On failure, it should provide the actual results and diff listings.
Is this possible, or should I take a completely different approach?
Obviously, I could ignore ctest and just adapt what I've always done to out-of-source builds. I could version my folder-where-all-the-builds-live, for instance (with liberal use of 'ignore' of course). Is that sane? Probably not, as each build would end up with a separate copy of the expected results.
Also, any advice on the recommended way to do unit testing with CMake/CTest would be gratefully received. I wasted a fair bit of time with CMake, not because it's bad, but because I didn't understand how best to work with it.
EDIT
In the end, I decided to keep the cmake/ctest side of the unit testing as simple as possible. To test actual against expected results, I found a home for the following function in my library...
#include <ostream>
#include <sstream>
#include <string>

bool Check_Results (std::ostream             &p_Stream,
                    const char               *p_Title,
                    const char              **p_Expected,
                    const std::ostringstream &p_Actual)
{
    // Join the null-terminated array of expected lines into a single string.
    std::ostringstream l_Expected_Stream;
    while (*p_Expected != 0)
    {
        l_Expected_Stream << (*p_Expected) << std::endl;
        p_Expected++;
    }

    std::string l_Expected (l_Expected_Stream.str ());
    std::string l_Actual   (p_Actual.str ());
    bool        l_Pass     = (l_Actual == l_Expected);

    p_Stream << "Test: " << p_Title << " : ";
    if (l_Pass)
    {
        p_Stream << "Pass" << std::endl;
    }
    else
    {
        // On failure, dump both sets of results so the difference can be
        // inspected directly from the test output.
        p_Stream << "*** FAIL ***" << std::endl;
        p_Stream << "===============================================================================" << std::endl;
        p_Stream << "Expected Results For: " << p_Title << std::endl;
        p_Stream << "-------------------------------------------------------------------------------" << std::endl;
        p_Stream << l_Expected;
        p_Stream << "===============================================================================" << std::endl;
        p_Stream << "Actual Results For: " << p_Title << std::endl;
        p_Stream << "-------------------------------------------------------------------------------" << std::endl;
        p_Stream << l_Actual;
        p_Stream << "===============================================================================" << std::endl;
    }
    return l_Pass;
}
A typical unit test now looks something like...
bool Test0001 ()
{
    std::ostringstream l_Actual;

    const char* l_Expected [] =
    {
        "Some",
        "Expected",
        "Results",
        0
    };

    l_Actual << "Some"    << std::endl
             << "Actual"  << std::endl
             << "Results" << std::endl;

    return Check_Results (std::cout, "0001 - not a sane test", l_Expected, l_Actual);
}
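CTest only cares about the test program's exit code, so hooking a batch of these functions up can stay very simple. The following is just a sketch of one possible runner (how you actually register and chain your tests is up to you):

int main ()
{
    bool l_All_Passed = true;

    // Run each test in turn; Check_Results has already written the
    // pass/fail details and any expected/actual dump to std::cout.
    l_All_Passed = Test0001 () && l_All_Passed;
    // l_All_Passed = Test0002 () && l_All_Passed;   // ...and so on

    // CTest treats any non-zero exit code as a test failure.
    return l_All_Passed ? 0 : 1;
}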
Where I need a re-usable data-dumping function, it takes a parameter of type std::ostream&, so it can dump to an actual-results stream.
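For illustration only - the Dump_Point name and the Point struct below are made up, not something from my actual library - such a dumper looks like this, so the same code can write to std::cout, a file, or a test's ostringstream:

struct Point { int x; int y; };   // hypothetical example type

// Writes a human-readable dump to whatever stream it is given, so unit
// tests can capture the output in an actual-results ostringstream.
void Dump_Point (std::ostream &p_Stream, const Point &p_Point)
{
    p_Stream << "Point (" << p_Point.x << ", " << p_Point.y << ")" << std::endl;
}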
I'd use CMake's standalone scripting mode to run the tests and compare the outputs. Normally, for a unit test program, you would write add_test(testname testexecutable), but you may run any command as a test. If you write a script "runtest.cmake" and execute your unit test program via it, then the runtest.cmake script can do anything it likes - including using the cmake -E compare_files utility. You want something like the following in your CMakeLists.txt file:
enable_testing()
add_executable(testprog main.c)
add_test(NAME runtestprog
         COMMAND ${CMAKE_COMMAND}
                 -DTEST_PROG=$<TARGET_FILE:testprog>
                 -DSOURCEDIR=${CMAKE_CURRENT_SOURCE_DIR}
                 -P ${CMAKE_CURRENT_SOURCE_DIR}/runtest.cmake)
This runs a script (cmake -P runtest.cmake) and defines two variables: TEST_PROG, set to the path of the test executable, and SOURCEDIR, set to the current source directory. You need the first to know which program to run, and the second to know where to find the expected test result files. The contents of runtest.cmake would be:
execute_process(COMMAND ${TEST_PROG}
                RESULT_VARIABLE HAD_ERROR)
if(HAD_ERROR)
    message(FATAL_ERROR "Test failed")
endif()

execute_process(COMMAND ${CMAKE_COMMAND} -E compare_files
                        output.txt ${SOURCEDIR}/expected.txt
                RESULT_VARIABLE DIFFERENT)
if(DIFFERENT)
    message(FATAL_ERROR "Test failed - files differ")
endif()
The first execute_process runs the test program, which will write out "output.txt". If that works, the next execute_process effectively runs cmake -E compare_files output.txt expected.txt. The file "expected.txt" is the known-good result in your source tree. If there are differences, the script errors out so you can see the failed test.
What this doesn't do is print out the differences; CMake doesn't have a full "diff" implementation hidden away within it. At the moment you use Subversion to see what lines have changed, so an obvious solution is to change the last part to:
if(DIFFERENT)
    configure_file(output.txt ${SOURCEDIR}/expected.txt COPYONLY)
    execute_process(COMMAND svn diff ${SOURCEDIR}/expected.txt)
    message(FATAL_ERROR "Test failed - files differ")
endif()
This overwrites the source tree with the build output on failure and then runs svn diff on it. The problem is that you shouldn't really go changing the source tree in this way: the expected file now contains the new output, so when you run the test a second time it passes even though nothing has been fixed. A better way is to install some visual diff tool and run that on your output and expected files, without touching the source tree.
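For example - a sketch only, which assumes a command-line diff program is available on the PATH; substitute whatever visual diff tool you actually have installed - the failure branch of runtest.cmake could run the diff directly on the two files and leave the source tree alone:

if(DIFFERENT)
    # Diff the build output against the expected file in the source tree,
    # capturing the listing so it appears in the failure message. "diff -u"
    # could equally be replaced with a visual tool such as kdiff3 or meld.
    execute_process(COMMAND diff -u ${SOURCEDIR}/expected.txt output.txt
                    OUTPUT_VARIABLE DIFF_LISTING)
    message(FATAL_ERROR "Test failed - files differ:\n${DIFF_LISTING}")
endif()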