Summary: Perl test script bugs/enhancements
Submitted by: wb8tyw
Submitted on: Tue 14 Oct 2014 01:01:53 PM GMT
Severity: 3 - Normal
Item Group: Bug
Assigned to: None
Discussion Lock: Any
Component Version: 4.1
Operating System: Any
Fixed Release: None
Triage Status: None
1. The run_perl_tests.pl script does not print the number of tests attempted
unless all tests pass. It should always print the number of tests attempted;
otherwise it is hard to judge how good or bad the results are until every
test passes.
2. When a test script skips an individual test, the count of failed tests
reported at the end of the run is changed by the number of tests skipped,
which is incorrect because skipped tests are not failures.
If you manually add up the number of tests failed from the report, you will
get a different count than the summary shows whenever any script skips
individual tests. This is probably only noticed on non-Unix platforms.
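The fix for item 2 can be sketched as follows. This is a hypothetical Python sketch, not the actual Perl harness code: the point is simply that skipped outcomes get their own tally and are never folded into the failure count, so the summary always matches the sum of per-script failures.

```python
# Hypothetical sketch of per-script tallying: "skip" is a distinct
# outcome and never contributes to the failure count.
def tally(results):
    """results: a list of per-test outcomes, e.g. ["pass", "fail", "skip"]."""
    counts = {"pass": 0, "fail": 0, "skip": 0}
    for outcome in results:
        counts[outcome] += 1
    return counts

# Accumulate across scripts; the summary failure count now equals the
# sum of the per-script failure counts, regardless of skips.
totals = {"pass": 0, "fail": 0, "skip": 0}
for script_results in [["pass", "skip", "fail"], ["pass", "pass", "skip"]]:
    for key, n in tally(script_results).items():
        totals[key] += n
```

With this separation, adding up the failures from each script's report always agrees with the summary line, which is the invariant item 2 asks for.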
3. It would be nice for the harness to check the test working directory,
after each test, for files that were not present before the test. On VMS it
also needs to check "sys$scratch:" as well as "/tmp" when the two differ.
This is especially needed for VMS because make creates helper scripts in
'sys$scratch:' and there are still bugs in the VMS port where extra files
are created or not cleaned up.
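The directory check in item 3 amounts to snapshotting the watched directories before each test and diffing afterward. Below is a minimal Python sketch of that idea (the real harness is Perl, and the function names are made up for illustration); the list of directories to watch would include the test working directory plus "/tmp" and "sys$scratch:" on VMS.

```python
import os
import tempfile

def snapshot(dirs):
    """Record the set of files present in each watched directory before a test."""
    return {d: set(os.listdir(d)) for d in dirs if os.path.isdir(d)}

def leftovers(before, dirs):
    """Return any files that appeared in the watched directories after the test."""
    extra = {}
    for d in dirs:
        if os.path.isdir(d):
            new = set(os.listdir(d)) - before.get(d, set())
            if new:
                extra[d] = sorted(new)
    return extra

# Demo: detect a stray file left behind in a scratch directory.
scratch = tempfile.mkdtemp()
before = snapshot([scratch])
open(os.path.join(scratch, "stray.tmp"), "w").close()
extra = leftovers(before, [scratch])
```

Any non-empty result from the post-test diff would be reported as a cleanup failure for that test.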
4. For each test in a script, the possible status values should be: Pass,
Fail, xPass, xFail, and skipped.
xFail is for something known to be broken or unimplemented.
skipped is for a test that should never be run on a platform, usually
because it is not applicable there.
xPass is short for unexpected passes, where a test passed even though it was
expected to fail. This will catch if someone fixes a platform specific issue,
but does not update the test scripts to match.
For the individual tests, the report should note whenever any xFail tests
actually passed, as they could have been fixed as a side effect of an
unrelated change.
The summary should always report:
Total count of tests run.
Count of tests passed (including unexpected passes).
Count of tests failed (including expected failures).
Count of tests skipped.
It should then sanity check that passed + failed + skipped == tests run.
It should then also report the number of expected test failures and the number
of unexpected test passes.
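The summary accounting above can be sketched as follows. This is a hedged Python illustration of the proposed five-status scheme (Pass, Fail, xPass, xFail, skipped), not code from the actual script; the sample results list is invented for the demo.

```python
from collections import Counter

# Sample per-test statuses using the five proposed values.
results = ["Pass", "Pass", "xFail", "skipped", "Fail", "xPass"]
counts = Counter(results)

total_run = sum(counts.values())
passed = counts["Pass"] + counts["xPass"]   # includes unexpected passes
failed = counts["Fail"] + counts["xFail"]   # includes expected failures
skipped = counts["skipped"]

# Sanity check required by the ticket.
assert passed + failed + skipped == total_run

print(f"Tests run: {total_run}")
print(f"Passed:    {passed} ({counts['xPass']} unexpected)")
print(f"Failed:    {failed} ({counts['xFail']} expected)")
print(f"Skipped:   {skipped}")
```

Reporting the xFail and xPass counts alongside the totals is what catches the case where a platform-specific issue is fixed but the test script's expectations are not updated.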