...one of the most highly regarded and expertly designed C++ library projects in the world.
    — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
That's it! You don't even need a CVS client installed.
Optional: If you already have bjam and/or process_jam_log executables you'd like to use, just put them in the same directory as regression.py, e.g.:

    my_boost_regressions/
        regression.py
        bjam[.exe]
        process_jam_log[.exe]
To start a regression run, simply run regression.py, providing it with the following two arguments:

    --runner    - your unique runner id (e.g. "Metacomm") [1] [2]
    --toolsets  - a comma-separated list of toolsets to run the tests with (e.g. "gcc,vc7") [3]
For example:
python regression.py --runner=Metacomm --toolsets=gcc,vc7
If you are interested in seeing all available options, run python regression.py or python regression.py --help. See also the Advanced use section below.
Note: If you are behind a firewall/proxy server, everything should still "just work". In the rare cases when it doesn't, you can explicitly specify the proxy server parameters through the --proxy option, e.g.:
python regression.py ... --proxy=http://www.someproxy.com:3128
The regression run procedure will obtain the most recent Boost sources, build the testing tools if needed, run the regression tests, and collect and upload the resulting logs.
The report merger process, running continuously on the MetaCommunications site, will merge all submitted test runs and publish them at http://boost.sourceforge.net/regression-logs/developer.
Once you have your regression results displayed in the Boost-wide reports, you may consider providing a bit more information about yourself and your test environment. This additional information will be presented in the reports on a page associated with your runner ID.
By default, the page's content is just a single line coming from the comment.html file in your regression.py directory, specifying the tested platform. You can put online a more detailed description of your environment, such as your hardware configuration, compiler builds, and test schedule, by simply altering the file's content. Also, please consider providing your name and email address for cases where Boost developers have questions specific to your particular set of results.
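For example, a driver script could (re)generate comment.html with something along these lines; the platform details, schedule, and contact address here are purely illustrative:

    cat >comment.html <<'EOF'
    <p>Debian GNU/Linux x86, gcc 3.3.2; full regression runs nightly at 2:00 AM EST.</p>
    <p>Contact: Your Name &lt;your.name@example.com&gt;</p>
    EOF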
You can run regression.py in incremental mode [4] by simply passing it an identically named command-line flag:
python regression.py ... --incremental
Depending on the environment/C++ runtime support library the test is compiled with, a test failure or termination may cause a dialog window to appear, requiring human intervention to proceed. Moreover, the test (or even the compiler itself) can fall into an infinite loop, or simply run for too long. To allow regression.py to take care of these obstacles, add the --monitored flag to the script invocation:
python regression.py ... --monitored
That's it. Knowing your intentions, the script will be able to automatically deal with the listed issues [5].
If you already have a CVS client installed and configured, you might prefer to get the sources directly from the Boost CVS repository. To communicate this to the script, you just need to pass it your SourceForge user ID using the --user option; for instance:
python regression.py ... --user=agurtovoy
You can also specify the user as anonymous, requesting anonymous CVS access. Note, though, that the files obtained this way tend to lag behind the actual CVS state by several hours, sometimes up to twelve. By contrast, the tarball the script downloads by default is at most one hour behind.
Even if you've already been using a custom driver script, and for some reason you don't want regression.py to take over the entire test cycle, getting your regression results into the Boost-wide reports is still easy!
In fact, it's just a matter of modifying your script to perform two straightforward operations:
Timestamp file creation needs to be done before the CVS update/checkout. The file's location doesn't matter (nor does its content), as long as you know how to access it later. Making your script do something as simple as echo >timestamp would work just fine, as in the sketch below.
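For instance, a Unix shell-based driver script might contain a fragment like this (the cvs command stands for whatever your script already uses to refresh the sources):

    echo >timestamp    # record the start of the run...
    cvs update -dP     # ...before updating the Boost sources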
Collecting and uploading the logs can be done any time after process_jam_log's run, and is as simple as invoking the local copy of the $BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py script that was just obtained from the CVS with the rest of the sources. You'd need to provide collect_and_upload_logs.py with the following three arguments:
    --locate-root - the directory to scan for "test_log.xml" files
    --runner      - your runner ID (e.g. "Metacomm")
    --timestamp   - the path to a file whose modification time will be used as the timestamp of the run ("timestamp" by default)
For example, assuming that the run's resulting binaries are in the $BOOST_ROOT/bin directory (the default Boost.Build setup), the collect_and_upload_logs.py invocation might look like this:
python $BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py --locate-root=$BOOST_ROOT/bin --runner=Metacomm --timestamp=timestamp
You might encounter an occasional need to make local modifications to the Boost codebase before running the tests, without disturbing the automatic nature of the regression process. To implement this under regression.py, codify applying your modifications in a script named patch_boost. The driver will check for the existence of the patch_boost script, and, if found, execute it after obtaining the Boost sources.
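As a minimal sketch, a patch_boost script could look like the following; the source directory and the patch file name are purely illustrative:

    #!/bin/sh
    # Hypothetical example: apply a local fix to the freshly obtained
    # sources before the tests are built.
    patch -d boost -p0 < my_local_fix.patch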
Please send all comments/suggestions regarding this document and the testing procedure itself to the Boost Testing list.
[1] If you are running regressions in an interleaved manner with different sets of compilers (e.g. for Intel in the morning and for GCC at the end of the day), you need to provide a different runner id for each of these runs, e.g. your_name-intel and your_name-gcc.
[2] The limitations of the reports' format/medium impose a direct dependency between the number of compilers you are testing with and the amount of space available for your runner id. If you are running regressions for a single compiler, please make sure to choose an id short enough not to significantly disturb the reports' layout.
[3] If the --toolsets option is not provided, the script will try to use the platform's default toolset (gcc for most Unix-based systems).
[4] By default, the script runs in what is known as full mode: on each regression.py invocation, all the files left in place by the previous run -- including the binaries for the successfully built tests and libraries -- are deleted, and everything is rebuilt from scratch. By contrast, in incremental mode the existing binaries are left intact, and only the tests and libraries whose source files have changed since the previous run are rebuilt and retested. The main advantage of incremental runs is a significantly shorter turnaround time, but unfortunately they don't always produce reliable results. Some types of changes to the codebase (changes to the bjam testing subsystem in particular) often require switching to full mode for one cycle in order to produce trustworthy reports. As a general guideline, if you can afford it, testing in full mode is preferable.
[5] Note that at the moment this functionality is available only if you are running on a Windows platform. Contributions are welcome!