author     anonym <anonym@riseup.net>  2015-10-21 18:43:37 +0200
committer  anonym <anonym@riseup.net>  2015-10-21 18:43:37 +0200
commit     5d1603131c8f779feecddc30ec7a26fd50caecb2 (patch)
tree       ba4d289bef3c998d0a7584220ebb80453483bd01 /wiki/src/blueprint/automated_builds_and_tests
parent     b25d21c50db43ac0e4a5179a010af5834227b548 (diff)
parent     4c64182b0bc75f876d83036a6d60731731f90bcd (diff)
Merge branch 'devel' into test/10378-fix-tails-shipped-openpgp-keys-test-is-fragile
Diffstat (limited to 'wiki/src/blueprint/automated_builds_and_tests')
-rw-r--r--  wiki/src/blueprint/automated_builds_and_tests/automated_tests_specs.mdwn |  21
-rw-r--r--  wiki/src/blueprint/automated_builds_and_tests/jenkins.mdwn               | 271
-rw-r--r--  wiki/src/blueprint/automated_builds_and_tests/resources.mdwn             | 140
3 files changed, 216 insertions, 216 deletions
diff --git a/wiki/src/blueprint/automated_builds_and_tests/automated_tests_specs.mdwn b/wiki/src/blueprint/automated_builds_and_tests/automated_tests_specs.mdwn
index b694e6a..5df035b 100644
--- a/wiki/src/blueprint/automated_builds_and_tests/automated_tests_specs.mdwn
+++ b/wiki/src/blueprint/automated_builds_and_tests/automated_tests_specs.mdwn
@@ -128,26 +128,19 @@ The test suite produces different kind of artifacts: logfiles, screen
captures for failing steps, snapshots of the test VM, and also videos of
the running test session.
-Videos may be a bit too much to keep, given they slow down the test
-suite and might take quite a bit of disk space to store. If we want to
-keep them, we may want to do so only for failing test suite runs. If we
-decide to still use them, then we probably have to wait for
-[[!tails_ticket 10001]] too be resolved.
+We can keep the video captures in the build artifacts, now that
+[[!tails_ticket 10001]] is resolved.
-Proposal for a first iteration:
+Decision:
 * For green test suite run: keep the test logs (Jenkins natively does
   that).
- * For red test suite run: keep the screen and video captures, the
+ * For red test suite run: keep the screenshots and video captures, the
logs and the pcap files.
-On the second iteration, we will keep video capture only for the red
-tests.
-
-The retention strategy should be the same than for the automatically
-built ISOs. In particular, we will have to pay attention to the rotation
-of videos capture (given they'll quickly bloat our storage space).
-Keeping them only for 7 days sounds reasonnable.
+In [[!tails_ticket 10155]] we calculated that we can probably keep the
+video captures for a full release cycle. We will refine this if, after
+an evaluation, reality proves otherwise.
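
As an illustration only, a retention job for the video captures could look like the sketch below; the artifacts location, the file extensions, and the six-week cycle length are assumptions, not settled infrastructure.

```python
#!/usr/bin/env python3
# Hypothetical sketch: prune archived video captures older than roughly one
# release cycle. Paths, extensions and retention length are assumptions.
import os
import time

ARTIFACTS_ROOT = '/var/lib/jenkins/jobs'  # made-up location of archived artifacts
RETENTION_DAYS = 6 * 7                    # roughly one release cycle

def prune_old_videos(root=ARTIFACTS_ROOT, retention_days=RETENTION_DAYS):
    cutoff = time.time() - retention_days * 24 * 3600
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(('.mkv', '.webm')):
                continue
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)

if __name__ == '__main__':
    prune_old_videos()
```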
# Scenarios
diff --git a/wiki/src/blueprint/automated_builds_and_tests/jenkins.mdwn b/wiki/src/blueprint/automated_builds_and_tests/jenkins.mdwn
index 1d0eb88..0e9902c 100644
--- a/wiki/src/blueprint/automated_builds_and_tests/jenkins.mdwn
+++ b/wiki/src/blueprint/automated_builds_and_tests/jenkins.mdwn
@@ -1,152 +1,41 @@
-[[!meta title="Jenkins"]]
+[[!meta title="Automated tests implementation details"]]
+
+For Jenkins resources, see [[blueprint/automated_builds_and_tests/resources]].
[[!toc levels=2]]
-Resources
-=========
-
-Miscellaneous
--------------
-
-- [Jenkins Best
- Practices](https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Best+Practices)
-- [plugins](https://wiki.jenkins-ci.org/display/JENKINS/Plugins)
- * [Git plugin](https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin)
- * [Copy Artifact
- plugin](https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin)
- can be used to run a test job against the result of a build job,
- e.g. for Debian packages (think Lintian) or Tails ISO images; see
- [grml's setup
- documentation](http://jenkins-debian-glue.org/getting_started/manual/)
- that uses it.
-- the [jenkins](http://jujucharms.com/charms/precise/jenkins) and
- [jenkins-slave](http://jujucharms.com/charms/precise/jenkins-slave)
- JuJu charms may be good sources of inspiration for deployment
-- [[!cpan Net-Jenkins]] (not in Debian) allows to interact with
- a Jenkins server: create and start jobs, get information about
- builds etc.
-
-Jobs management
----------------
-
-- [Job builder](http://ci.openstack.org/jenkins-job-builder/) provides
- one-way (Git to Jenkins) jobs synchronization; it's in Debian sid.
- * [configuration documentation](http://ci.openstack.org/jenkins-job-builder/configuration.html)
- * Debian uses it in their `update_jdn.sh`: it runs `jenkins-jobs
- update $config` after importing updated YAML job config files
- from Git.
- * Tor [use
- it](https://gitweb.torproject.org/project/jenkins/jobs.git/tree) too.
-- jenkins.debian.net uses the [SCM
- Sync](https://wiki.jenkins-ci.org/display/JENKINS/SCM+Sync+configuration+plugin)
- plugin, that apparently handles committing to the VCS on
- configuration changes done in the web interface, and maybe more.
-- [jenkins-yaml](https://github.com/varnish/jenkins-yaml) might make
- it easy to generate a large number of similar Jenkins jobs, e.g.
- one per branch
-- [jenkins_jobs puppet module](http://tradeshift.com/blog/tstech-managing-jenkins-job-configurations-by-puppet/)
-
-Web setup
----------
-
-### Visible read-only on the web
-
-We'd like our Jenkins instance to be visible read-only on the web.
-We'd rather not rely on Jenkins authentication / authorization to
-enforce this read-only policy. We'd rather see the frontend reverse
-proxy take care of this.
-
-The
-[`getUnprotectedRootActions()`](http://javadoc.jenkins-ci.org/jenkins/model/Jenkins.html#getUnprotectedRootActions())
-method should return the list of URL prefixes that we want to allow.
-And we could forbid anything else.
-
-The [Reverse Proxy
-Auth](https://wiki.jenkins-ci.org/display/JENKINS/Reverse+Proxy+Auth+Plugin)
-Jenkins plugin can be useful to display [an example
-usage](https://github.com/jenkinsci/reverse-proxy-auth-plugin/commit/72567a974960be2363107614ba3f705ec6e9b695)
-of this method.
-
-### Miscellaneous
-
-- [sample nginx configuration](https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+Ubuntu)
-
-Notifications
--------------
-
-- [IRC plugin](https://wiki.jenkins-ci.org/display/JENKINS/IRC+Plugin),
- but I'm told that the jenkins email notifications are way nicer
- than what this plugin can do, so see [a better way to do
- it](http://jenkins.debian.net/userContent/setup.html#_installing_kgb_client)
-- [[!cpan Jenkins-NotificationListener]] is a server that listens to
- messages from Jenkins [Notification
- plugin](https://wiki.jenkins-ci.org/display/JENKINS/Notification+Plugin).
-
-### Notifying different people depending on what triggered the build
-
-At least the obvious candidate (Email-ext plugin) doesn't seem able to
-email different recipients depending on what triggered the build
-out-of-the-box. But apparently, one can set up two 'Script - After
-Build' email triggers in the Email-ext configuration: one emails the
-culprit, the other emails the RM. And then, they do something or not
-depending on a variable we set during the build, based on what
-triggered the build. Likely the cleaner and simpler solution.
-
-Otherwise, we could have Jenkins email some pipe script that will
-forward to the right person depending on 1. whether it's a base
-branch; and 2. whether the build was triggered by a push or by
-something else. This should work if we can get the email notification
-to pass the needed info in it. E.g. the full console output currently
-has "Started by timer" or "Started by an SCM change", but this is not
-part of the email notification. Could work, but a bit hackish and all
-kinds of things can go wrong.
-
-Also, I've seen lots of people documenting crazy similar things with
-some of these plugins: "Run Condition", "Conditional BuildStep",
-"Flexible Publish" and "Any Build step". But then it gets too
-complicated for me to dive into it right now.
-
-How others use Jenkins
-----------------------
-
-- jenkins.debian.net's:
- * [setup documentation](http://jenkins.debian.net/userContent/setup.html)
- * configuration: `git://git.debian.org/git/users/holger/jenkins.debian.net.git`
-- [Tor's jobs](https://gitweb.torproject.org/project/jenkins/jobs.git/blob/HEAD:/jobs.yaml)
-- [Ubuntu QA Jenkins instance](https://jenkins.qa.ubuntu.com/)
-- grml's Michael Prokop talks about autotesting in KVM during his
- [talk at DebConf
- 10](http://penta.debconf.org/dc10_schedule/events/547.en.html);
- they use Jenkins:
- * [Jenkins instance](http://jenkins.grml.org/)
- * [unittests](https://github.com/grml/grml-unittests)
- * [debian-glue Jenkins plugin](https://github.com/mika/jenkins-debian-glue)
- * [kantan](https://github.com/mika/kantan): simple test suite for
- autotesting using Grml and KVM
- * [Jenkins server setup documentation](https://github.com/grml/grml-server-setup/blob/master/jenkins.asciidoc)
-- [jenkinstool](http://git.gitano.org.uk/personal/liw/jenkinstool.git/)
- has the tools Lars Wirzenius uses to manage his CI (Python projects
- test suite, Debian packages, importing into reprepro, VM setup of
- all needed stuff); the whole thing is very ad-hoc but many bits
- could be used as inspiration sources.
-
-Jenkins for Perl projects
--------------------------
-
-* [a collection of links](https://wiki.jenkins-ci.org/display/JENKINS/Perl+Projects)
- on the Jenkins wiki
-* an overview of the available tools: [[!cpan Task::Jenkins]]
-* [a tutorial](https://logiclab.jira.com/wiki/display/OPEN/Continuous+Integration)
-* [another tutorial](http://alexandre-masselot.blogspot.com/2011/12/perl-hudson-continuous-testing.html)
-* use [[!cpan TAP::Formatter::JUnit]] (in Wheezy) rather than the Jenkins TAP plugin
-* use `prove --timer` to know how long each test takes
+Generating jobs
+===============
+
+We use code that lives in three different Git repositories to
+automatically generate the list of Jenkins jobs for the branches that
+are active in the Tails main Git repo.
+
+The first brick is the Tails
+[[!tails_gitweb_repo pythonlib]], which extracts the list of
+active branches and outputs the needed information. This list is parsed
+by the `generate_tails_iso_jobs` script, which is run by a cronjob and
+deployed by our [[!tails_gitweb_repo puppet-tails]]
+`tails::jenkins::iso_jobs_generator` manifest.
+
+This script outputs YAML files compatible with
+[jenkins-job-builder](http://docs.openstack.org/infra/jenkins-job-builder).
+It creates one `project` for each active branch, which in turn uses
+three JJB `job templates` to create the three jobs for that branch: the
+ISO build job, the wrapper job, and the ISO test job that the wrapper
+starts (see the sketch below).
+
+These changes are pushed to our [[!tails_gitweb_repo jenkins-jobs]] Git
+repo by the cronjob and, thanks to their automatic deployment by our
+`tails::jenkins::master` and `tails::gitolite::hooks::jenkins_jobs`
+manifests in our [[!tails_gitweb_repo puppet-tails]] repo, these new
+changes are applied automatically to our Jenkins instance.
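
To make the shape of the generated configuration concrete, here is a minimal, hypothetical sketch of the kind of output such a generator could produce; the job template names and the output file are made up, not the ones `generate_tails_iso_jobs` actually uses.

```python
#!/usr/bin/env python3
# Hypothetical sketch of the generation step: turn the list of active branches
# (as reported by the Tails pythonlib) into jenkins-job-builder "project"
# entries that instantiate three job templates per branch.
import yaml

JOB_TEMPLATES = [
    'build_Tails_ISO_{branch}',      # ISO build job
    'wrap_test_Tails_ISO_{branch}',  # wrapper job that reserves an isotester
    'test_Tails_ISO_{branch}',       # ISO test job started by the wrapper
]

def jjb_projects(active_branches):
    """Return one JJB 'project' per active branch."""
    return [
        {'project': {
            'name': branch,
            'branch': branch,
            'jobs': list(JOB_TEMPLATES),
        }}
        for branch in active_branches
    ]

if __name__ == '__main__':
    # In reality the branch list comes from the pythonlib; hard-coded here.
    branches = ['stable', 'devel', 'feature-foo']
    with open('projects.yaml', 'w') as output:
        yaml.safe_dump(jjb_projects(branches), output, default_flow_style=False)
```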
Restarting slave VMs between jobs
----------------------------------
+=================================
This question is tracked in [[!tails_ticket 9486]].
-When we tackle [[!tails_ticket 5288]], if the test suite doesn't
+For [[!tails_ticket 5288]] to be robust enough, if the test suite doesn't
_always_ clean up after itself properly (e.g. when tests simply hang
and time out), we might want to restart `isotesterN.lizard` between
each ISO testing job.
@@ -164,44 +53,35 @@ This was discussed at least there:
* <http://jenkins-ci.361315.n4.nabble.com/How-to-reboot-a-slave-during-a-build-td4628820.html>
* <https://stackoverflow.com/questions/5543413/reconfigure-and-reboot-a-hudson-jenkins-slave-as-part-of-a-build>
-That would maybe be the way to go, with 3 chained jobs:
+We achieve this VM reboot by using 3 chained jobs:
* The first one is a wrapper and triggers 2 other jobs. It is executed on the
isotester the test job is supposed to be assigned to. It puts the
isotester in offline mode and starts the second job, blocking while
waiting for it to complete. This way this isotester is left reserved
- for the second job, and the isotester name can be passed as a build
+ while the second job runs, and the isotester name can be passed as a build
parameter to the second job. This job is low prio, so it waits for
other jobs of the second and third types to be completed before starting
its own.
-* The second job is executed on the master (which has two build
+* The second job is executed on the master (which has 4 build
executors). This job SSHes into said isotester and issues the
- reboot. It waits a bit and put the node back online again. This jobs
- is higher prio so that it is not lagging behind other wrapper jobs in
- the queue.
+ reboot. It needs to wait a reasonable amount of time for the Jenkins
+ slave to be stopped by the shutdown process, so that no job gets
+ assigned to this isotester meanwhile. Stopping this Jenkins slave
+ daemon usually takes a few seconds. During testing, 5 seconds proved
+ to be enough of a delay for that, and more would only add unnecessary
+ lag. It then puts the node back online again. This job is higher prio
+ so that it is not lagging behind other wrapper jobs in the queue (see
+ the sketch after this list).
* The third job is the test job, run on the freshly started isotester.
This one is high prio too to get executed before any other wrapper
- jobs.
-
-Using some kind of queue sorting is necessary. Unfortunately, the
-[PrioritySorter
-plugin](https://wiki.jenkins-ci.org/display/JENKINS/Priority+Sorter+Plugin)
-is not well supported by the current version of JJB in Debian. We'll
-have to push upstream a fix, and meanwhile use the `raw` option trick in
-the yaml files (which itself isn't supported by JJB in Debian yet,
-hopefully the new one will leave the NEW queue soon).
-
-Another tested but non-working option was to use the Jenkins [PostBuildScript
-plugin](https://wiki.jenkins-ci.org/display/JENKINS/PostBuildScript%20Plugin)
-to issue a `shutdown -r` command at the end of the job. There are
-indications that [people are using it like
-this](https://stackoverflow.com/questions/11160363/execute-shell-script-after-post-build-in-jenkins)
-already. It's supported by JJB.
+ jobs. These jobs are set to run concurrently, so that if one is
+ already running, a more recent one triggered by a new build can still
+ run instead of being blocked by the first one.
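
As an illustration of the second job only, here is a rough sketch of the reboot step; the host and user names, the SSH access, and the `jenkins-cli` invocation are assumptions about the setup, not its actual implementation.

```python
#!/usr/bin/env python3
# Rough sketch of what the second (reboot) job could run on the master.
# The node name comes from the wrapper job as a build parameter.
import subprocess
import sys
import time

# Assumed jenkins-cli invocation; adjust jar path and URL to the real setup.
JENKINS_CLI = ['java', '-jar', '/var/lib/jenkins/jenkins-cli.jar',
               '-s', 'http://localhost:8080/']

def reboot_isotester(node):
    # Ask the slave to reboot itself; the SSH connection will drop.
    subprocess.call(['ssh', 'jenkins@%s' % node, 'sudo', 'reboot'])
    # Give the shutdown process time to stop the Jenkins slave daemon, so
    # that no job gets assigned to this node meanwhile (5 seconds proved
    # to be enough during testing).
    time.sleep(5)
    # Put the node back online; Jenkins reconnects once the VM is up again.
    subprocess.check_call(JENKINS_CLI + ['online-node', node])

if __name__ == '__main__':
    reboot_isotester(sys.argv[1])  # e.g. isotester1.lizard
```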
<a id="chain"></a>
Chaining jobs
--------------
+=============
There are several plugins that allow chaining jobs, which we might use to
run the test suite job following a build job of a branch.
@@ -228,33 +108,11 @@ run the test suite job following a build job of a branch.
These are all supported by JJB v0.9+.
-One solution that could work and won't require more additionnal plugins
-to manage could be to make an extensive use of the EnvInject plugin in
-the same way we already use it to configure the notification. Then we
-would be able to simply use Jenkins' native way of chaining jobs:
-
- * At the beginning of the build job, a script (in our jenkins-tools
- repo) is collecting every necessary parameters defined in the
- automated test blueprin and outputing them in a file in the
- /build-artifacts/ directory.
- * This file is the one used by the build job, to setup the variables it
- needs (currently only $NOTIFY_TO).
- * At the end of the build job, this file is archived with the other
- artifacts.
- * At the beginning of the chained test job, this file is imported in
- the workspace along with the build artifacts. The EnvInject pre-build
- step uses it to setup the necessary variables.
-
-Where I'm not sure is that the Jenkins's native way can collaborate
-smoothly with the EnvInject plugin. Maybe the different steps we are
-talking about don't happen in an order that would fit this scenario.
-Might be that we'll have to use the ParameterizedTrigger plugin. Might
-also be that we don't need the EnvInject plugin in the test job, but
-just import the variables in the environment in the test suite wrapper
-script.
+As we'll have to pass some parameters, the ParameterizedTrigger plugin
+is the best candidate for us.
Passing parameters through jobs
--------------------------------
+===============================
We already specified what kind of information we want to pass from the
build job to the test job.
@@ -262,14 +120,23 @@ build job to the test job.
The ParameterizedTrigger plugin is the one usually used for that kind of
work.
-An other way that seem to be possible/used with the Jenkins native job
-chaining ability is to put the wanted parameters in a file that is
-archived with the artifacts of the upstream job. Then the downstream job
-can be configured with then EnvInject plugin we already use to set the
-necessary variables in the job environment.
+We'll use it for some basic parameter passing through jobs, but given
+that the test jobs will need to get a lot of parameters from the build
+job, we'll also use the EnvInject plugin we're already using (see the
+sketch after this list):
+
+ * In the build job, a script will collect every necessary parameter
+   defined in the automated test blueprint and output them to a file
+   in the /build-artifacts/ directory.
+ * This file is the one used by the build job to set up the variables it
+   needs (currently only $NOTIFY_TO).
+ * At the end of the build job, this file is archived with the other
+   artifacts.
+ * At the beginning of the chained test job, this file is imported into
+   the workspace along with the build artifacts. The EnvInject pre-build
+   step uses it to set up the necessary variables.
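
A minimal sketch of the build-job side of this, assuming a `build-artifacts` directory in the workspace and a hypothetical `test.env` file name; EnvInject reads plain KEY=VALUE properties files:

```python
#!/usr/bin/env python3
# Sketch only: collect the parameters the test job will need and write them
# as a KEY=VALUE file that EnvInject can load in the downstream job.
# File name, location and variable list are assumptions.
import os

ARTIFACTS_DIR = os.environ.get('ARTIFACTS_DIR', 'build-artifacts')

def export_test_parameters(params, filename='test.env'):
    path = os.path.join(ARTIFACTS_DIR, filename)
    with open(path, 'w') as properties:
        for key, value in sorted(params.items()):
            properties.write('%s=%s\n' % (key, value))
    return path

if __name__ == '__main__':
    export_test_parameters({
        'NOTIFY_TO': os.environ.get('NOTIFY_TO', 'builder@example.org'),
        'UPSTREAMJOB_BUILD_NUMBER': os.environ.get('BUILD_NUMBER', '0'),
    })
```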
Define which $OLD_ISO to test against
--------------------------------------
+=====================================
It appeared in [[!tails_ticket 10117]] that this question is not so
obvious and easy to address.
@@ -296,19 +163,19 @@ we'll have to merge the base branch before we look at that config
setting (because for some reason the base branch might itself require
old ISO = same).
-As a first baby step, we will by default use the same ISO for both
-`--old-iso` and `--iso`, except for the branches used to prepare
-releases (`devel` and `stable`), so that we
-know if the upgrades are broken long before the next release.
-
Another option that could be considered, using existing code in the repo: use the
`OLD_TAILS_ISO` flag present in `config/default.yml`: when we release we
set its value to the released ISO, and for the branches that need it we
empty this variable so that the test uses the same ISO for both
`--old-iso` and `--iso`.
+In the end, we will by default use the same ISO for both `--old-iso` and
+`--iso`, except for the branches used to prepare releases (`devel` and
+`stable`), so that we know if the upgrades are broken long before the
+next release.
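
Expressed as code, the decision boils down to something like this sketch; the function name and branch handling are illustrative only:

```python
#!/usr/bin/env python3
# Sketch of the decision above: which ISO to pass as --old-iso.
RELEASE_BRANCHES = ('stable', 'devel')

def old_iso_for(branch, new_iso, last_release_iso):
    """Return the ISO the test suite should upgrade from."""
    if branch in RELEASE_BRANCHES:
        # Branches used to prepare releases test upgrades from the last
        # released ISO, so broken upgrades are noticed early.
        return last_release_iso
    # Every other branch tests against its own freshly built ISO.
    return new_iso

if __name__ == '__main__':
    print(old_iso_for('devel', 'tails-devel.iso', 'tails-1.6.iso'))
    print(old_iso_for('feature-foo', 'tails-feature-foo.iso', 'tails-1.6.iso'))
```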
+
Retrieving the ISOs for the test
---------------------------------
+================================
We'll need a way to retrieve the different ISOs needed for the test.
@@ -323,4 +190,4 @@ For the last release ISO, we have several means:
vhost for the isotesters.
* Using the git-annex repo directly.
-The former is probably the most simple to use.
+We'll use the first one, as it's easier to implement.
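
For the record, a minimal sketch of that first option; the vhost URL is a made-up placeholder:

```python
#!/usr/bin/env python3
# Sketch only: fetch the last release ISO over HTTP from the vhost serving
# the isotesters. The URL below is a placeholder, not a real endpoint.
import os
import shutil
import urllib.request

def fetch_last_release_iso(url, dest_dir='.'):
    dest = os.path.join(dest_dir, os.path.basename(url))
    with urllib.request.urlopen(url) as response, open(dest, 'wb') as iso:
        shutil.copyfileobj(response, iso)
    return dest

if __name__ == '__main__':
    fetch_last_release_iso('http://iso-archive.example.lizard/last-release/tails-i386-1.6.iso')
```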
diff --git a/wiki/src/blueprint/automated_builds_and_tests/resources.mdwn b/wiki/src/blueprint/automated_builds_and_tests/resources.mdwn
new file mode 100644
index 0000000..0368eb6
--- /dev/null
+++ b/wiki/src/blueprint/automated_builds_and_tests/resources.mdwn
@@ -0,0 +1,140 @@
+[[!meta title="Jenkins resources"]]
+
+[[!toc levels=2]]
+
+Miscellaneous
+=============
+
+- [Jenkins Best
+ Practices](https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Best+Practices)
+- [plugins](https://wiki.jenkins-ci.org/display/JENKINS/Plugins)
+ * [Git plugin](https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin)
+ * [Copy Artifact
+ plugin](https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin)
+ can be used to run a test job against the result of a build job,
+ e.g. for Debian packages (think Lintian) or Tails ISO images; see
+ [grml's setup
+ documentation](http://jenkins-debian-glue.org/getting_started/manual/)
+ that uses it.
+- the [jenkins](http://jujucharms.com/charms/precise/jenkins) and
+ [jenkins-slave](http://jujucharms.com/charms/precise/jenkins-slave)
+ JuJu charms may be good sources of inspiration for deployment
+- [[!cpan Net-Jenkins]] (not in Debian) allows interacting with
+ a Jenkins server: create and start jobs, get information about
+ builds etc.
+
+Jobs management
+===============
+
+- [Job builder](http://ci.openstack.org/jenkins-job-builder/) provides
+ one-way (Git to Jenkins) jobs synchronization; it's in Debian sid.
+ * [configuration documentation](http://ci.openstack.org/jenkins-job-builder/configuration.html)
+ * Debian uses it in their `update_jdn.sh`: it runs `jenkins-jobs
+ update $config` after importing updated YAML job config files
+ from Git.
+ * Tor [uses
+ it](https://gitweb.torproject.org/project/jenkins/jobs.git/tree) too.
+- jenkins.debian.net uses the [SCM
+ Sync](https://wiki.jenkins-ci.org/display/JENKINS/SCM+Sync+configuration+plugin)
+ plugin, which apparently handles committing to the VCS on
+ configuration changes done in the web interface, and maybe more.
+- [jenkins-yaml](https://github.com/varnish/jenkins-yaml) might make
+ it easy to generate a large number of similar Jenkins jobs, e.g.
+ one per branch
+- [jenkins_jobs puppet module](http://tradeshift.com/blog/tstech-managing-jenkins-job-configurations-by-puppet/)
+
+Web setup
+=========
+
+### Visible read-only on the web
+
+We'd like our Jenkins instance to be visible read-only on the web.
+We'd rather not rely on Jenkins authentication / authorization to
+enforce this read-only policy. We'd rather see the frontend reverse
+proxy take care of this.
+
+The
+[`getUnprotectedRootActions()`](http://javadoc.jenkins-ci.org/jenkins/model/Jenkins.html#getUnprotectedRootActions())
+method should return the list of URL prefixes that we want to allow.
+And we could forbid anything else.
+
+The [Reverse Proxy
+Auth](https://wiki.jenkins-ci.org/display/JENKINS/Reverse+Proxy+Auth+Plugin)
+Jenkins plugin provides [an example
+usage](https://github.com/jenkinsci/reverse-proxy-auth-plugin/commit/72567a974960be2363107614ba3f705ec6e9b695)
+of this method.
+
+### Miscellaneous
+
+- [sample nginx configuration](https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+Ubuntu)
+
+Notifications
+=============
+
+- [IRC plugin](https://wiki.jenkins-ci.org/display/JENKINS/IRC+Plugin),
+ but I'm told that the jenkins email notifications are way nicer
+ than what this plugin can do, so see [a better way to do
+ it](http://jenkins.debian.net/userContent/setup.html#_installing_kgb_client)
+- [[!cpan Jenkins-NotificationListener]] is a server that listens to
+ messages from Jenkins [Notification
+ plugin](https://wiki.jenkins-ci.org/display/JENKINS/Notification+Plugin).
+
+### Notifying different people depending on what triggered the build
+
+At least the obvious candidate (the Email-ext plugin) doesn't seem able,
+out of the box, to email different recipients depending on what triggered
+the build. But apparently, one can set up two 'Script - After Build'
+email triggers in the Email-ext configuration: one emails the culprit,
+the other emails the RM. Then, each of them acts or not depending on
+a variable we set during the build, based on what triggered the build.
+This is likely the cleanest and simplest solution (a toy sketch of this
+kind of routing follows below).
+
+Otherwise, we could have Jenkins email some pipe script that will
+forward to the right person depending on 1. whether it's a base
+branch; and 2. whether the build was triggered by a push or by
+something else. This should work if we can get the email notification
+to pass the needed info in it. E.g. the full console output currently
+has "Started by timer" or "Started by an SCM change", but this is not
+part of the email notification. Could work, but a bit hackish and all
+kinds of things can go wrong.
+
+Also, I've seen lots of people documenting crazy similar things with
+some of these plugins: "Run Condition", "Conditional BuildStep",
+"Flexible Publish" and "Any Build step". But then it gets too
+complicated for me to dive into it right now.
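
A toy sketch of the kind of routing discussed above; the variable names, branch list, and addresses are invented for illustration:

```python
#!/usr/bin/env python3
# Toy sketch: pick the notification recipient from what triggered the build
# and whether the branch is a base branch. Everything here is hypothetical.
import os

BASE_BRANCHES = ('stable', 'testing', 'devel', 'experimental')

def recipient(branch, build_cause):
    if branch in BASE_BRANCHES and 'timer' in build_cause.lower():
        # Scheduled build of a base branch: notify the release manager.
        return 'release-manager@example.org'
    # Otherwise notify whoever pushed the change (the "culprit").
    return os.environ.get('NOTIFY_TO', 'culprit@example.org')

if __name__ == '__main__':
    print(recipient('devel', 'Started by timer'))
    print(recipient('feature-foo', 'Started by an SCM change'))
```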
+
+How others use Jenkins
+======================
+
+- jenkins.debian.net's:
+ * [setup documentation](http://jenkins.debian.net/userContent/setup.html)
+ * configuration: `git://git.debian.org/git/users/holger/jenkins.debian.net.git`
+- [Tor's jobs](https://gitweb.torproject.org/project/jenkins/jobs.git/blob/HEAD:/jobs.yaml)
+- [Ubuntu QA Jenkins instance](https://jenkins.qa.ubuntu.com/)
+- grml's Michael Prokop talks about autotesting in KVM during his
+ [talk at DebConf
+ 10](http://penta.debconf.org/dc10_schedule/events/547.en.html);
+ they use Jenkins:
+ * [Jenkins instance](http://jenkins.grml.org/)
+ * [unittests](https://github.com/grml/grml-unittests)
+ * [debian-glue Jenkins plugin](https://github.com/mika/jenkins-debian-glue)
+ * [kantan](https://github.com/mika/kantan): simple test suite for
+ autotesting using Grml and KVM
+ * [Jenkins server setup documentation](https://github.com/grml/grml-server-setup/blob/master/jenkins.asciidoc)
+- [jenkinstool](http://git.gitano.org.uk/personal/liw/jenkinstool.git/)
+ has the tools Lars Wirzenius uses to manage his CI (Python projects
+ test suite, Debian packages, importing into reprepro, VM setup of
+ all needed stuff); the whole thing is very ad-hoc but many bits
+ could be used as inspiration sources.
+
+Jenkins for Perl projects
+=========================
+
+* [a collection of links](https://wiki.jenkins-ci.org/display/JENKINS/Perl+Projects)
+ on the Jenkins wiki
+* an overview of the available tools: [[!cpan Task::Jenkins]]
+* [a tutorial](https://logiclab.jira.com/wiki/display/OPEN/Continuous+Integration)
+* [another tutorial](http://alexandre-masselot.blogspot.com/2011/12/perl-hudson-continuous-testing.html)
+* use [[!cpan TAP::Formatter::JUnit]] (in Wheezy) rather than the Jenkins TAP plugin
+* use `prove --timer` to know how long each test takes
+