A Test Scenario for EXPath Packages
For a couple of years now we have been preparing applications for the eXist-db ecosystem based on the EXPath Package standard. Last week we set up the first test environment. No questions, no excuses: tests are extremely important! In some cases one can even start developing a piece of software by specifying a test first, so-called Test-Driven Development. Tests can make you happy: they catch it when you check in the work of a whole week and the brand-new, mind-blowing feature breaks your application in a routine you never expected to be affected. Your test results provide you with confidence! You can be sure that all functionality works as expected when you ship your software to the client – or you learn early that the show is stopped by just another bug.
That's why tests should be an essential part of your work, and here is an example of how you can use your CI server to test an EXPath Package. Usually these applications are deployed into a database to use the power of the xmldb API. In eXist-db you are able to deliver the frontend application (the website) together with the backend engine. To run tests, you simply have to start the database and call a test script. This is not that easy when you use a non-dockerized Jenkins CI server to build the application, as we did before. Within the shell executor of Jenkins, the database was up and running for all users on the system, so you have to make sure that there will be no port conflicts – not now and not in the future. It is impossible to set up a reliable system on that basis. But with Docker this is an out-of-the-box feature. Recently we moved to a Docker-enabled GitLab Runner.
How to test an XQuery Application
Within eXist-db, XQSuite is available – a tool that allows unit tests to be placed as annotations in the header of the corresponding function. A typical function in XQuery is written like this:
declare function namespace:functionname($parameter) {
    "Super" || $parameter
};
The function will simply concatenate the word «Super» with whatever you put in. But wait! This function is really creepy. Not for what it does, but for its style. It lacks any best practice: an untyped parameter, an untyped return value, there is no documentation, there are no semantics in the name and, even worse, there are no unit tests! If your XQuery functions look like this, you know what you have to do.
(:~ Appends the given input to the word «Super».
 : @param $input-to-append-to-super any string you would like to append to «Super»
 : @return a concatenation of «Super» and the $input-to-append-to-super
 : @author John Doe
 : @since 0.1
 :)
declare
    %test:arg("input-to-append-to-super", "") %test:assertEquals("Super")
    %test:arg("input-to-append-to-super", "weasel") %test:assertEquals("Superweasel")
    %test:arg("input-to-append-to-super", 1) %test:assertEquals("Super1")
function namespace:append-to-Super($input-to-append-to-super as xs:string) as xs:string {
    "Super" || $input-to-append-to-super
};
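Note that for the %test annotations to resolve, the module's prolog also has to bind the test prefix to the XQSuite namespace:

(: required once in the module prolog :)
declare namespace test = "http://exist-db.org/xquery/xqsuite";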
Now we have a code base. The expected result of the test is:
<testsuites>
    <testsuite package="https://your.super.namespace.edu/tests" timestamp="2018-02-28T20:36:12.194-01:00" failures="0" pending="0" tests="3" time="PT0.223S">
        <testcase name="append-to-Super" class="tests:append-to-Super"/>
        <testcase name="append-to-Super" class="tests:append-to-Super"/>
        <testcase name="append-to-Super" class="tests:append-to-Super"/>
    </testsuite>
</testsuites>
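If an assertion does not hold, the failures counter is raised and the error is attached to the corresponding testcase. Schematically – this is a sketch, not literal output of this suite – a failing run looks like this:

<testsuite package="https://your.super.namespace.edu/tests" failures="1" pending="0" tests="3" time="PT0.241S">
    <testcase name="append-to-Super" class="tests:append-to-Super">
        <failure message="assertEquals failed." type="failure-error-code-1">Superweasel</failure>
    </testcase>
    ...
</testsuite>

That failures attribute is exactly what the evaluation script at the end of this post will grep for.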
How to put this into a test environment?
According to the documentation, you have to call an XQuery to start a test run. When you have prepared a new module with test-annotated wrapper functions (like in the example by JoeWiz here), it may look like this:
xquery version "3.1";
import module namespace test="http://exist-db.org/xquery/xqsuite" at "resource:org/exist/xquery/lib/xqsuite/xqsuite.xql";
import module namespace tests="https://your.super.namespace.edu/tests" at "test.xqm";
test:suite(util:list-functions("https://your.super.namespace.edu/tests"))
That will trigger the tests. All you need is an up-and-running database.
How and when to trigger the test?
An implementation of the test evaluation starts on installation. In repo.xml a pointer to a post-installation script is set (<finish>post-install.xq</finish>), so the named script is evaluated after the package and all requirements are installed.
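For orientation, a minimal repo.xml could look like this (the metadata values are placeholders; only the finish element matters here):

<meta xmlns="http://exist-db.org/xquery/repo">
    <description>My EXPath package</description>
    <author>John Doe</author>
    <website>https://your.super.namespace.edu</website>
    <status>alpha</status>
    <license>GPLv3</license>
    <copyright>true</copyright>
    <type>application</type>
    <target>my-app</target>
    <finish>post-install.xq</finish>
</meta>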
Besides some more steps, within post-install.xq you need the following:
(: run tests on GitLab Runner :)
let $jobId := try { file:read("/tmp/ci.job") => xs:int() } catch * { 0 }
return
    if ($jobId gt 0)
    then (
        (: evaluate the stored test runner and write the report next to EXIST_HOME :)
        let $tests := util:eval(xs:anyURI("test.xq")),
            $file-name := system:get-exist-home() || "/../sade_job-" || string($jobId) || ".log.xml",
            $file := file:serialize(<tests time="{current-dateTime()}">{$tests}</tests>, $file-name, ())
        return
            (
                util:log-system-out("wrote test results to " || $file-name),
                system:shutdown(15)
            )
    )
    else
        util:log-system-out("CI_JOB_ID: not found; not on a GitLab Runner")
When eXist-db starts up, it usually looks at the autodeploy directory and checks for new (previously not installed) packages. When it starts for the first time, it obviously installs all packages (autodeploy). When this comes to our package here, it will test for a specific value ($CI_JOB_ID) written to a file at /tmp/ci.job (this is done via .gitlab-ci.yml: echo -n "$CI_JOB_ID" > /tmp/ci.job).
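For context, the relevant part of the CI configuration could look roughly like this – a sketch, not our actual .gitlab-ci.yml; the image name, the startup path and check-tests.sh are assumptions:

test:
  stage: test
  image: openjdk:8-jre                     # assumption: any image with a JRE will do
  script:
    # make the job id available to post-install.xq
    - echo -n "$CI_JOB_ID" > /tmp/ci.job
    # start the database; autodeploy installs the package, which runs the tests.
    # the hard 60-second timeout doubles as the shutdown (see below)
    - timeout 60 bash build/exist/bin/startup.sh || true
    # evaluate the collected results (see the evaluation script further down)
    - bash check-tests.sh
  artifacts:
    when: always
    paths:
      - build/sade_job-*.xml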
The post-install script will store all results in a file one directory above $EXIST_HOME (note the /../ in the path). Unfortunately the system:shutdown() call does not work, so we have to trigger a shutdown via the CI script. Currently, this is done just 60 seconds after the launch – a sufficient amount of time. As an alternative, we trigger the tests via a call to the RESTXQ API, so we can shut down the database safely when all tests are done.
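That RESTXQ variant could look roughly like this (a sketch under assumptions: the ci module, its namespace and the function name are made up for illustration):

xquery version "3.1";

module namespace ci = "https://your.super.namespace.edu/ci";

declare namespace rest = "http://exquery.org/ns/restxq";

import module namespace test = "http://exist-db.org/xquery/xqsuite"
    at "resource:org/exist/xquery/lib/xqsuite/xqsuite.xql";
import module namespace tests = "https://your.super.namespace.edu/tests" at "test.xqm";

(: run the suite on demand, e.g. curl http://localhost:8080/exist/restxq/run-tests :)
declare
    %rest:GET
    %rest:path("/run-tests")
function ci:run-tests() {
    test:suite(util:list-functions("https://your.super.namespace.edu/tests"))
};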
Results
So far I have added 27 tests to test.xqm for SADE. The results are available within GitLab, as I configured the output to be an artifact. Or you may view them directly in the job's console log.
Had a look at the file? Found something strange? … … …
Oh, there is one test failing! Yes, this is because I do not trust the test engine. I want to be sure that unsuccessful tests are recognized as failing. That's why there is one test meant to fail, so we can be sure that failures are treated correctly.
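Such a canary test can be as simple as this (a hypothetical example, not the one actually used):

(: deliberately failing canary: proves the engine reports failures :)
declare
    %test:assertEquals("this will never match")
function tests:canary() as xs:string {
    "canary"
};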
But one last step is missing:
Test result evaluation
When we have collected all the results, we finally have to evaluate them, because the GitLab Runner should be able to detect failed tests, stop the pipeline and mark all artifacts prepared so far as «failed». This is done with an additional bash script that checks for exactly one failure – the intentional canary described above.
# sum up the "failures" counts from all XQSuite reports
TEST=$(grep --no-filename --only-matching --extended-regexp \
        "failures=\"[0-9]+\"" build/sade_job-*.xml \
    | grep --only-matching --extended-regexp "[0-9]+" \
    | paste -sd+ \
    | bc)

# exactly one failure is expected: the intentional canary
if [ "$TEST" -ne "1" ]
then
    echo "there are failing tests."; exit 1
else
    echo "no unexpected failures found. good."; exit 0
fi
This gives an exit code of 1 that will be recognized as one failing script and forces the Runner to stop all scripts and pending jobs. So we mark the build as failed.
The application to test is referenced above; the complete environment is set up with ant tasks, and everything else can be found in the .gitlab-ci.yml.
Sleep well, little developer. Your application is safe now.