I am about to set up the VTS and CTS tests for our AOSP build. Both test suites use the Trade Federation test framework. What confuses me is how to run the different test plans.
According to the documentation (https://source.android.com/compatibility/vts/systems), for VTS one has to decide which test plan to run and then use the run command to execute it.
E.g., if I want to run the default VTS test plan, I run:
me@computer> vts-tradefed
vts-tf > run vts
This launches a number of tests on the connected device.
Next, when launching the CTS tests, I expected to call the corresponding commands, or what appears to be the equivalent. With the following instructions I expected to run the CTS tests with the test plan called "cts":
me@computer> cts-tradefed
cts-tf > run cts
This seems to work fine and the tests appear to start. But then I read in the CTS manual (https://source.android.com/compatibility/cts/run) that CTS shall be executed as run cts --plan <test-plan>, and they give the example run cts --plan CTS to run the default CTS plan:
Start the default test plan (contains all test packages) by appending: run cts --plan CTS . This kicks off all CTS tests required for compatibility.
For CTS v1 (Android 6.0 and earlier), enter list plans to view a list of test plans in the repository or list packages to view a list of test packages in the repository. For CTS v2 (Android 7.0 and later), enter list modules to see a list of test modules.
Alternately, run the CTS plan of your choosing from the command line using: cts-tradefed run cts --plan
When testing it, it seems to work as well. Two things make me wonder. First of all, the test plan in the example is referred to with capital letters, i.e. "CTS" instead of "cts". Secondly, the run command seems to work completely differently here. To me it makes sense that run is a built-in Tradefed command and that its argument should be the name of the test plan. This is also confirmed by Tradefed itself:
VTS help:
vts-tf > help run
r(?:un)? help:
command <config> [options] Run the specified command
<config> [options] Shortcut for the above: run specified command
cmdfile <cmdfile.txt> Run the specified commandfile
commandAndExit <config> [options] Run the specified command, and run 'exit -c' immediately afterward
cmdfileAndExit <cmdfile.txt> Run the specified commandfile, and run 'exit -c' immediately afterward
----- Vendor Test Suite specific options -----
<plan> --module/-m <module> Run a test module
<plan> --module/-m <module> --test/-t <test_name> Run a specific test from the module. Test name can be <package>.<class>, <package>.<class>#<method> or <native_binary_name>
Available Options:
--serial/-s <device_id>: The device to run the test on
--abi/-a <abi> : The ABI to run the test against
--logcat-on-failure : Capture logcat when a test fails
--bugreport-on-failure : Capture a bugreport when a test fails
--screenshot-on-failure: Capture a screenshot when a test fails
--shard-count <shards>: Shards a run into the given number of independent chunks, to run on multiple devices in parallel.
----- In order to retry a previous run -----
retry --retry <session id to retry> [--retry-type <FAILED | NOT_EXECUTED>]
Without --retry-type, retry will run both FAIL and NOT_EXECUTED tests
CTS help:
cts-tf > help run
r(?:un)? help:
command <config> [options] Run the specified command
<config> [options] Shortcut for the above: run specified command
cmdfile <cmdfile.txt> Run the specified commandfile
commandAndExit <config> [options] Run the specified command, and run 'exit -c' immediately afterward
cmdfileAndExit <cmdfile.txt> Run the specified commandfile, and run 'exit -c' immediately afterward
----- Compatibility Test Suite specific options -----
<plan> --module/-m <module> Run a test module
<plan> --module/-m <module> --test/-t <test_name> Run a specific test from the module. Test name can be <package>.<class>, <package>.<class>#<method> or <native_binary_name>
Available Options:
--serial/-s <device_id>: The device to run the test on
--abi/-a <abi> : The ABI to run the test against
--logcat-on-failure : Capture logcat when a test fails
--bugreport-on-failure : Capture a bugreport when a test fails
--screenshot-on-failure: Capture a screenshot when a test fails
--shard-count <shards>: Shards a run into the given number of independent chunks, to run on multiple devices in parallel.
----- In order to retry a previous run -----
retry --retry <session id to retry> [--retry-type <FAILED | NOT_EXECUTED>]
Without --retry-type, retry will run both FAIL and NOT_EXECUTED tests
The explanations are pretty much identical. So it actually makes sense to run with run cts and run vts respectively. Is this just a silly question and am I completely wrong? Since these tests are important for our compatibility, I want to be certain to run them in the correct way.
To run a plan (either cts or vts), you can use different commands depending on what you need:
To run the complete vts or cts tests: run <plan>
e.g. run cts / run vts
To run a specific module in a plan: run <plan> -m <module>
e.g. run cts -m CtsMyDisplayTestCases (the module name should match LOCAL_PACKAGE_NAME in your Android.mk)
To run a specific test class (containing multiple tests) of a specific module in a plan: run <plan> -m <module> -t <packageName.className>
e.g. run cts -m CtsMyDisplayTestCases -t android.display.cts.ScreenTests (this runs all tests in the test class 'ScreenTests'; the package name is the one defined in AndroidManifest.xml)
To run a specific test case in a test class of a specific module in a plan: run <plan> -m <module> -t <packageName.className#testName>
e.g. run cts -m CtsMyDisplayTestCases -t android.display.cts.ScreenTests#testDisplayName (this runs only the test case testDisplayName in the test class 'ScreenTests'; again, the package name is the one defined in AndroidManifest.xml)
You can also browse the AOSP/cts/ directory to get a basic idea of the naming conventions and how it all fits together.
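As a sketch of how these commands combine with the retry syntax from the help output above, a typical console session might look like the following. The session id 0 and the module name CtsMyDisplayTestCases are illustrative; use list results to find the real session ids on your machine:
me@computer> cts-tradefed
cts-tf > run cts -m CtsMyDisplayTestCases
cts-tf > list results
cts-tf > run retry --retry 0 --retry-type FAILED
The last command re-runs only the tests that failed in session 0; without --retry-type it would re-run both FAILED and NOT_EXECUTED tests, as the help output states.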