Hello!
A "simple" test involving very little communication with peer services
already takes over a day on the automated test server, simply due to the
sheer number of different configurations (OS, configure options) that
are getting tested.
A "nightly" test takes an entire weekend. Clearly something needs to be
done...
Most of the tests could run in parallel, except that some of them
depend on limited resources, such as a remote login to a service like
Google or Exchange.
These limited resources are:
* A copy of the source code which needs to be compiled.
* A unique home and working directory, including the necessary
files in the HOME and XDG dirs for the current OS.
* A chroot into the desired OS.
* External logins.
* Local port numbers (for syncevo-http-server).
* RAM and CPU.
I'm currently thinking about introducing a locking daemon which knows
about these resources and which the different parts of the testing have
to contact before using a resource.
We could have multiple alternatives (say, multiple accounts on the same
service). The daemon might be able to set up resources on demand
(directories, chroots).
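One way to model the "multiple alternatives" idea is a broker that maps
a logical key to several concrete value sets and hands out whichever one
is currently free. A minimal in-process sketch (all names are
hypothetical; the real daemon would of course live in its own process):

```python
import threading

class ResourceBroker:
    """Maps a logical resource key to alternative value sets and
    hands out the first alternative that is not currently locked."""

    def __init__(self, inventory):
        # inventory: key -> list of dicts, each dict one alternative
        self._inventory = inventory
        self._in_use = set()          # (key, index) pairs currently locked
        self._cond = threading.Condition()

    def acquire(self, key, timeout=None):
        """Block until some alternative for 'key' is free, then lock it.
        Returns (token, values); pass the token back to release()."""
        with self._cond:
            def free_index():
                for i in range(len(self._inventory[key])):
                    if (key, i) not in self._in_use:
                        return i
                return None
            if not self._cond.wait_for(lambda: free_index() is not None,
                                       timeout=timeout):
                raise TimeoutError("no free alternative for %s" % key)
            i = free_index()
            self._in_use.add((key, i))
            return (key, i), dict(self._inventory[key][i])

    def release(self, token):
        with self._cond:
            self._in_use.discard(token)
            self._cond.notify_all()
```

With two accounts configured for GOOGLE, two tests running concurrently
would each get a different account, and a third would block until one of
them releases its lock.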
For dynamically created resources the problem then becomes: when can
they be deleted again?
It is useful to log into the test machine after a failed test and
manually work on the problem, using or analyzing the state in which the
test failed. That means that resources must not be deleted right away
after a test quits. It also becomes necessary to re-obtain the same set
of static resources that the test used (for example, be able to use the
same external account again that the test put into SyncEvolution config
files).
Has anyone heard of such a system? Otherwise I would probably start
hacking on a solution.
My current thinking is that resources are key/value pairs, or perhaps
even a mapping from one key to multiple values (GOOGLE -> GOOGLE_ACCOUNT=...
GOOGLE_PASSWORD=...). A Python wrapper around some other command is
given a set of keys which it needs to lock and obtain values for. The
daemon reserves the locks as long as the wrapper runs. The wrapper then
runs the actual process, with the key/value pairs put into the
environment of that process.
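The environment-injection part of that wrapper could look roughly like
this (a sketch only; the daemon communication is stubbed out, and the
variable names are just the examples from above):

```python
import os
import subprocess
import sys

def run_with_resources(values, argv):
    """Run 'argv' with the locked resources' key/value pairs merged
    into the child's environment. 'values' would come from the daemon
    after it granted the requested locks."""
    env = dict(os.environ)
    env.update(values)  # e.g. {"GOOGLE_ACCOUNT": ..., "GOOGLE_PASSWORD": ...}
    # In the real wrapper, the connection to the daemon stays open
    # for the whole duration of this call, so the daemon can detect
    # when the wrapper exits or dies.
    return subprocess.call(argv, env=env)
```

The wrapper's exit code is then the exit code of the wrapped command,
so it can be dropped into existing test invocations transparently.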
The "as long as the wrapper runs" part implies that it needs to keep a
connection open to the daemon which allows the daemon to detect when the
wrapper exits or dies.
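Detecting the wrapper's exit through that open connection needs no
explicit "unlock" message: when the client process dies for any reason,
the kernel closes its end of the socket and the daemon sees EOF. A small
sketch using a socket pair within one process (a real daemon would
accept connections on a Unix or TCP socket instead):

```python
import socket

def wait_for_client_exit(conn):
    """Block until the peer closes its end of the connection,
    whether by a clean exit or a crash. recv() returning b''
    signals EOF; that is the moment the daemon would release
    all locks held by this client."""
    while True:
        data = conn.recv(4096)
        if not data:
            return  # peer is gone, release its locks now
        # Data received here could serve as keep-alives or further
        # lock requests; ignored in this sketch.

# Demonstration: closing the "wrapper" end unblocks the "daemon" end.
daemon_end, wrapper_end = socket.socketpair()
wrapper_end.close()          # simulate the wrapper exiting
wait_for_client_exit(daemon_end)
daemon_end.close()
```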
Dynamic resources need some kind of garbage collection.
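That garbage collection could be lease-based: a dynamic resource stays
alive as long as some wrapper holds or renews a lease on it, plus a
grace period that leaves the state around for the kind of post-mortem
debugging described above; a periodic sweep deletes whatever has
expired. A hypothetical sketch:

```python
import time

class LeaseTable:
    """Tracks dynamically created resources (directories, chroots)
    with expiry times, for periodic garbage collection."""

    def __init__(self, grace=0.0):
        self._grace = grace        # extra time kept after the last renewal
        self._expires = {}         # resource name -> absolute deadline

    def renew(self, name, lease):
        """Create or extend the lease on 'name' for 'lease' seconds."""
        self._expires[name] = time.monotonic() + lease + self._grace

    def sweep(self):
        """Delete expired resources; returns the names removed."""
        now = time.monotonic()
        dead = [n for n, t in self._expires.items() if t <= now]
        for n in dead:
            del self._expires[n]
            # here the daemon would actually remove the directory/chroot
        return dead
```

A generous grace period (hours or days) would keep the state of a failed
test available for manual inspection without keeping it forever.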
To recreate the right environment later, the wrapper must also be able
to ask for specific values, and the logs must make it obvious which
values a test run actually used.
--
Best Regards, Patrick Ohly
The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.