|
Once you have started tclsh and have loaded the test.tcl source file (see Running the test suite under UNIX and Running the test suite under Windows for more information), you are ready to run the test suite. At the tclsh prompt, to run the standard test suite, enter the following:
% run_std
A more exhaustive version of the test suite runs all the tests several more times, testing encryption, replication, and different page sizes. After you have a clean run for run_std, you may choose to run this lengthier set of tests. At the tclsh prompt, enter:
% run_all
Running the standard tests can take from several hours to a few days to complete, depending on your hardware, and running all the tests will take at least twice as long. For this reason, the output from these commands is redirected to a file in the current directory named ALL.OUT. Periodically, a line will be written to the standard output, indicating what test is being run. When the test suite has finished, a final message will be written indicating whether the test suite completed successfully or failed. If the run failed, you should review the ALL.OUT file to determine which tests failed. Errors will appear in that file as output lines beginning with the string "FAIL".
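For example, a quick way to find the failing tests is to search ALL.OUT for lines beginning with "FAIL". The sketch below creates a small hypothetical ALL.OUT to illustrate; in practice you would run the grep command against the ALL.OUT file produced in your build directory:

```shell
# Hypothetical sample of test-suite output (for illustration only).
printf 'PASS test001\nFAIL test002: unexpected return\n' > ALL.OUT
# Lines beginning with FAIL mark the tests that failed.
grep '^FAIL' ALL.OUT
```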
Tests are run in the directory TESTDIR, by default. However, the test files are often large, and you should use a filesystem with at least several hundred megabytes of free space. To use a different directory for the test directory, edit the file include.tcl in your build directory, and change the following line to a more appropriate value for your system:
set testdir ./TESTDIR
For example, you might change it to the following:
set testdir /var/tmp/db.test
Alternatively, you can create a symbolic link named TESTDIR in your build directory to an appropriate location for running the tests. Regardless of where you run the tests, the TESTDIR directory should be on a local filesystem. Using a remote filesystem (for example, an NFS mounted filesystem) will almost certainly cause spurious test failures.
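The symbolic-link approach might look like the following, assuming /var/tmp/db.test (the example path above) is on a local filesystem with sufficient free space, and that the commands are run from your build directory:

```shell
# Create the target directory on a local filesystem
# (/var/tmp/db.test is just the example path used above).
mkdir -p /var/tmp/db.test
# In the build directory, point TESTDIR at that location.
ln -sf /var/tmp/db.test TESTDIR
```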
Copyright (c) 1996-2003 Sleepycat Software, Inc. - All rights reserved.