Running Tests

testr is taught how to run your tests by interpreting your .testr.conf file. For instance:

  [DEFAULT]
  test_command=foo $IDOPTION
  test_id_option=--bar $IDFILE

will cause testr run to run foo and process its output as testr load would. Likewise, testr run --failing will automatically create a list file containing just the failing tests, then run foo --bar failing.list and process the output as testr load would. failing.list will be a newline-separated list of the test ids that your test runner outputs. If there are no failing tests, no test execution will happen at all.
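As a sketch, failing.list is just a plain list of test ids, one per line. The ids below are hypothetical placeholders, built by hand to show the format testr generates:

```shell
# Construct a failing.list by hand to illustrate its format
# (these test ids are made-up examples, not real tests).
printf '%s\n' \
    'pkg.tests.test_api.TestApi.test_get' \
    'pkg.tests.test_api.TestApi.test_put' > failing.list
cat failing.list
# With the config above, testr would then invoke: foo --bar failing.list
```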

Arguments passed to testr run are used to filter the test ids that will be run: testr will query the runner for test ids and then apply each argument as a regex filter. Tests that match any of the given filters will be run. Arguments passed to run after a -- are passed through to your test runner command line. For instance, using the above config example, testr run quux -- bar --no-plugins would query for test ids, filter for those that match quux, and then run foo bar --load-list tempfile.list --no-plugins. Shell variables are expanded in these commands on platforms that have a shell.
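The filtering step can be pictured with grep standing in for testr's internal regex matching; the id list and file names here are illustrative only:

```shell
# Stand-in for testr's id filtering: each run argument acts as a regex
# applied to the full id list, and matching ids are kept.
printf '%s\n' \
    'pkg.tests.test_quux.TestQuux.test_a' \
    'pkg.tests.test_other.TestOther.test_b' > all-ids.list
grep -E 'quux' all-ids.list > filtered.list
cat filtered.list
```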

Having set up a .testr.conf, a common workflow then becomes:

  # Fix currently broken tests - repeat until there are no failures.
  $ testr run --failing
  # Do a full run to find anything that regressed during the reduction process.
  $ testr run
  # And either commit or loop around this again depending on whether errors
  # were found.

The --failing option turns on --partial automatically (so that if the partial test run were to be interrupted, the failing tests that aren't run are not lost).

Another common use case is repeating a failure that occurred on a remote machine (e.g. during a Jenkins test run). There are two common ways to approach this.

Firstly, if you have a subunit stream from the run you can just load it:

  $ testr load < failing-stream
  # Run the failed tests
  $ testr run --failing

The streams generated by test runs are stored in .testrepository/ and named for their run id - e.g. .testrepository/0 is the stream from the first run.
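For example, a repository with two recorded runs might look like this. The layout is sketched by hand here; real streams contain subunit data, not the placeholder text used below:

```shell
# Sketch of the on-disk layout of a test repository with two runs.
mkdir -p .testrepository
printf 'subunit stream for run 0' > .testrepository/0
printf 'subunit stream for run 1' > .testrepository/1
ls .testrepository
```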

If you do not have a stream (because the test runner didn't output subunit or you don't have access to the .testrepository) you may be able to use a list file. If you can get a file that contains one test id per line, you can run the named tests like this:

  $ testr run --load-list FILENAME

This can also be useful when dealing with sporadically failing tests, or tests that only fail in combination with some other test - you can bisect the tests that were run to get smaller and smaller (or larger and larger) test subsets until the error is pinpointed.
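A manual bisection of a list file can be sketched with standard shell tools; the file names and test ids below are hypothetical:

```shell
# Split a suspect list in half; re-run each half with --load-list and keep
# the half that still fails, repeating until the culprit is isolated.
printf '%s\n' 'pkg.tests.t.T.test_a' 'pkg.tests.t.T.test_b' \
              'pkg.tests.t.T.test_c' 'pkg.tests.t.T.test_d' > suspects.list
total=$(wc -l < suspects.list)
head -n $((total / 2)) suspects.list > first-half.list
tail -n +$((total / 2 + 1)) suspects.list > second-half.list
cat first-half.list
# Then: testr run --load-list first-half.list
# (and likewise second-half.list)
```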

`testr run --until-failure` will run your test suite again and again, stopping only when interrupted or when a failure occurs. This is useful for reproducing timing-related test failures.