Author: | Frank Warmerdam |
---|---|
Contact: | warmerdam at pobox.com |
Revision: | $Revision: 8365 $ |
Date: | $Date: 2008-12-31 07:49:02 -0800 (Wed, 31 Dec 2008) $ |
Last Updated: | 2007/8/31 |
The msautotest is a suite of test maps, data files, expected result images, and test scripts intended to make it easy to run a set of automated regression tests on MapServer.
The autotest is available from SVN. On Unix it can be fetched with something like:
% svn checkout http://svn.osgeo.org/mapserver/trunk/msautotest
This would create an msautotest subdirectory wherever you are. I normally put the autotest within my MapServer directory.
The autotest requires python (but not python MapScript), so if you don’t have python on your system, get and install it. More information on python is available at http://www.python.org. Most Linux systems have some version already installed.
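If you are not sure whether python is present, a quick check from the shell will tell you (the version reported will of course vary):

% python -V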
The autotest also requires that the executables built with MapServer, notably shp2img, legend, mapserv and scalebar, are available in the path. I generally accomplish this by adding the MapServer build directory to my path.
csh:
% setenv PATH $HOME/mapserver:$PATH
bash/sh:
% PATH=$HOME/mapserver:$PATH
Verify that you can run stuff by typing ‘shp2img -v’ in the autotest directory:
warmerda@gdal2200[152]% shp2img -v
MapServer version 3.7 (development) OUTPUT=PNG OUTPUT=JPEG OUTPUT=WBMP
SUPPORTS=PROJ SUPPORTS=TTF SUPPORTS=WMS_SERVER SUPPORTS=GD2_RGB
INPUT=TIFF INPUT=EPPL7 INPUT=JPEG INPUT=OGR INPUT=GDAL INPUT=SHAPEFILE
Now you are ready to run the tests. The tests are subdivided into categories, currently just “gdal”, “misc”, and “wxs”, each as a subdirectory. To run the “gdal” tests, cd into the gdal directory and run the run_test.py script.
Unix:
./run_test.py
Windows:
python.exe run_test.py
The results in the misc directory might look something like this:
warmerda@gdal2200[164]% run_test.py
version = MapServer version 3.7 (development) OUTPUT=PNG OUTPUT=JPEG OUTPUT=WBMP
SUPPORTS=PROJ SUPPORTS=TTF SUPPORTS=WMS_SERVER SUPPORTS=GD2_RGB INPUT=TIFF
INPUT=EPPL7 INPUT=JPEG INPUT=OGR INPUT=GDAL INPUT=SHAPEFILE
Processing: rgba_scalebar.map
results match.
Processing: tr_scalebar.map
results match.
Processing: tr_label_rgb.map
results match.
Processing: ogr_direct.map
results match.
Processing: ogr_select.map
results match.
Processing: ogr_join.map
results match.
Test done:
0 tests skipped
6 tests succeeded
0 tests failed
0 test results initialized
In general you are hoping to see that no tests failed.
Because most msautotest tests are comparing generated images to expected images, the tests are very sensitive to subtle rounding differences on different systems, and subtle rendering changes in libraries like freetype and gd. So it is quite common to see some failures.
These failures then need to be reviewed manually to see if the differences are acceptable and just indicate differences in rounding/rendering, or whether they are “real” bugs. This is normally accomplished by visually comparing files in the “result” directory with the corresponding file in the “expected” directory. It is best if this can be done in an application that allows images to be layered and toggled on and off to visually highlight what is changing. OpenEV can be used for this.
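If you do not have OpenEV handy, a few lines of python can at least flag where a result and expected image differ. This is not part of the autotest harness, just a hypothetical helper; it assumes the PIL imaging library is installed and the file names are only examples:

from PIL import Image, ImageChops

# Hypothetical helper (not part of msautotest): report where two images differ.
expected = Image.open("expected/tr_scalebar.png").convert("RGB")
result = Image.open("result/tr_scalebar.png").convert("RGB")

diff = ImageChops.difference(expected, result)
if diff.getbbox() is None:
    print("images are identical")
else:
    print("images differ within bounding box %s" % (diff.getbbox(),))
    diff.save("tr_scalebar_diff.png")   # non-black pixels mark the changes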
If you install the PerceptualDiff program (http://pdiff.sourceforge.net/) and it is in the path, then the autotest will attempt to use it as a last fallback when comparing images. If images are found to be “perceptually” the same the test will pass with the message “result images perceptually match, though files differ.” This can dramatically cut down the number of apparent failures that on close inspection are for all intents and purposes identical. Building PerceptualDiff is a bit of a hassle.
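The harness runs it for you, but if you want to check a suspect pair of images by hand, the basic invocation is roughly the following (the file names are just an example):

% perceptualdiff expected/tr_scalebar.png result/tr_scalebar.png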
The msautotest suite was initially developed by Frank Warmerdam (warmerdam at pobox.com), who can be contacted with questions about it.
The msautotest suite is organized as a series of .map files. The python scripts basically scan the directory in which they are run for files ending in .map. Each map is then “run”, with the result dumped into a file in the result directory. A binary comparison is then done against the corresponding file in the expected directory and differences are reported. The general principles of the test suite are described below.
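The heart of that cycle looks roughly like this. It is only a condensed sketch, not the actual run_test.py, and it assumes shp2img is in the path and that a result subdirectory exists:

import glob, os

# Condensed sketch of the scan-and-run step; the real run_test.py does much more.
for mapfile in glob.glob("*.map"):
    base = os.path.splitext(mapfile)[0]
    print("Processing: %s" % mapfile)
    # Default behaviour: render the map with shp2img into the result directory.
    os.system("shp2img -m %s -o result/%s.png" % (mapfile, base))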
Because MapServer can be built with many possible extensions, such as support for OGR, GDAL, and PROJ.4, it is desirable to have the test suite automatically detect which tests should be run based on the configuration of MapServer. This is accomplished by capturing the version output of “shp2img -v” and using the various keywords in it to decide which tests can be run. A directory can have a file called “all_require.txt” with a “REQUIRES:” line indicating components required for all tests in the directory. If any of these requirements are not met, no tests at all will be run in this directory. For instance, the gdal/all_require.txt lists:
REQUIRES: INPUT=GDAL OUTPUT=PNG
In addition, individual .map files can have additional requirements expressed as a REQUIRES: comment in the mapfile. If the requirements are not met the map will be skipped (and listed in the summary as a skipped test). For example gdal/256_overlay_res.map has the following line to indicate it requires projection support (in addition to the INPUT=GDAL and OUTPUT=PNG required by all files in the directory):
REQUIRES: SUPPORTS=PROJ
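The check itself amounts to simple keyword matching against the version banner, roughly like the sketch below (not the harness's actual code; the REQUIRES value is just an example):

import os

# Sketch: test a REQUIRES line against the "shp2img -v" version output.
version = os.popen("shp2img -v").read()
requires = "INPUT=GDAL OUTPUT=PNG SUPPORTS=PROJ"

missing = [key for key in requires.split() if key not in version]
if missing:
    print("skipping, missing: %s" % " ".join(missing))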
The output file generated by processing a map is put in result/<mapfilebasename>.png (regardless of whether it is actually PNG or not). So when gdal/256_overlay_res.map is processed, the output file is written to gdal/result/256_overlay_res.png. This is compared to gdal/expected/256_overlay_res.png. If they differ the test fails, and the “wrong” result file is left behind for investigation. If they match, the result file is deleted. If there is no corresponding expected file, the result file is moved to the expected directory (and reported as an “initialized” test) ready to be committed to SVN.
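In python terms the per-test decision is roughly the following sketch (using the 256_overlay_res file names from above; again, not the harness's actual code):

import filecmp, os, shutil

result = "result/256_overlay_res.png"
expected = "expected/256_overlay_res.png"

if not os.path.exists(expected):
    # No expected image yet: initialize it from this run's result.
    shutil.move(result, expected)
    print("test results initialized.")
elif filecmp.cmp(result, expected, shallow=False):
    os.remove(result)                 # matching results are cleaned up
    print("results match.")
else:
    print("results don't match.")     # the "wrong" image stays behind in result/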
There is also a RUN_PARMS keyword that may be placed in map files to override much of this default behaviour. By default map files are processed with shp2img, but other programs such as mapserv or scalebar can be requested, and the command line arguments can be altered, as can the name of the output file. For instance, the following line in misc/tr_scalebar.map indicates that the output file should be called tr_scalebar.png and that the command line should look like “[SCALEBAR] [MAPFILE] [RESULT]” instead of the default “[SHP2IMG] -m [MAPFILE] -o [RESULT]”.
RUN_PARMS: tr_scalebar.png [SCALEBAR] [MAPFILE] [RESULT]
For testing things as they would work from an HTTP request, use RUN_PARMS with the [MAPSERV] program and a QUERY_STRING argument, with the results redirected to a file.
# RUN_PARMS: wcs_cap.xml [MAPSERV] QUERY_STRING='map=[MAPFILE]&SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCapabilities' > [RESULT]
For web services that generate images that would normally be prefixed with the Content-type header, use [RESULT_DEMIME] to instruct the test harness to strip off any HTTP headers before doing the comparison.
# Generate simple PNG.
# RUN_PARMS: wcs_simple.png [MAPSERV] QUERY_STRING='map=[MAPFILE]&SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCoverage&WIDTH=120&HEIGHT=90&FORMAT=GDPNG&BBOX=0,0,400,300&COVERAGE=grey&CRS=EPSG:32611' > [RESULT_DEMIME]
When running the test suite, it is common for some tests to fail depending on the vagaries of floating point on the platform, or on harmless changes in MapServer. To identify these, compare the file in result with the corresponding file in expected and determine what the differences are. If there is just a slight shift in text or other features it is likely due to floating point differences between platforms, and these can be ignored. If something has gone seriously wrong, then track down the problem!
It is also prudent to avoid using output image formats that are platform specific. For instance, TIFF output will be written big endian on big endian systems and therefore differ at the binary level from what was expected. PNG should be pretty safe.
Once a new test produces the desired expected image, add the map file and the expected result to SVN and commit them:

% svn add mynewtest.map expected/mynewtest.png
% svn commit -m "new" mynewtest.map expected/mynewtest.png
You’re done!