- [Jenkins Best
* [Git plugin](https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin)
* [Copy Artifact
can be used to run a test job against the result of a build job,
e.g. for Debian packages (think Lintian) or Tails ISO images; see
that uses it.
- the [jenkins](http://jujucharms.com/charms/precise/jenkins) and
JuJu charms may be good sources of inspiration for deployment
- [[!cpan Net-Jenkins]] (not in Debian) allows one to interact with
a Jenkins server: create and start jobs, get information about
- [Job builder](http://ci.openstack.org/jenkins-job-builder/) provides
one-way (Git to Jenkins) jobs synchronization; it's in Debian sid.
* [configuration documentation](http://ci.openstack.org/jenkins-job-builder/configuration.html)
* Debian uses it in their `update_jdn.sh`: it runs `jenkins-jobs
update $config` after importing updated YAML job config files
* Tor [use
- jenkins.debian.net uses the [SCM
plugin, which apparently handles committing to the VCS on
configuration changes done in the web interface, and maybe more.
- [jenkins-yaml](https://github.com/varnish/jenkins-yaml) might make
it easy to generate a large number of similar Jenkins jobs, e.g.
one per branch
- [jenkins_jobs puppet module](http://tradeshift.com/blog/tstech-managing-jenkins-job-configurations-by-puppet/)
### Visible read-only on the web
We'd like our Jenkins instance to be visible read-only on the web.
We'd rather not rely on Jenkins authentication / authorization to
enforce this read-only policy. We'd rather see the frontend reverse
proxy take care of this.
method should return the list of URL prefixes that we want to allow.
And we could forbid anything else.
The [Reverse Proxy
Jenkins plugin can be useful to display [an example
of this method.
- [sample nginx configuration](https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+Ubuntu)
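A minimal sketch of that reverse-proxy idea, assuming nginx as the frontend; the allowed prefixes and the backend address are hypothetical and would have to match what Jenkins actually exposes:

```nginx
# Sketch only: read-only Jenkins behind nginx. Prefixes and upstream
# address are assumptions, not our actual configuration.
location / {
    # Read-only policy: reject any request that could change state.
    limit_except GET HEAD { deny all; }
    proxy_pass http://127.0.0.1:8080;
}

# Explicitly deny known state-changing or private areas, even for GET.
location ~ ^/(configure|script|manage|credentials) {
    deny all;
}
```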
- [IRC plugin](https://wiki.jenkins-ci.org/display/JENKINS/IRC+Plugin),
but I'm told that the Jenkins email notifications are way nicer
than what this plugin can do, so see [a better way to do
- [[!cpan Jenkins-NotificationListener]] is a server that listens to
messages from Jenkins [Notification
### Notifying different people depending on what triggered the build
At least the obvious candidate (Email-ext plugin) doesn't seem able to
email different recipients depending on what triggered the build
out-of-the-box. But apparently, one can set up two 'Script - After
Build' email triggers in the Email-ext configuration: one emails the
culprit, the other emails the RM. Each trigger then acts, or not,
depending on a variable we set during the build, based on what
triggered it. This is likely the cleanest and simplest solution.
Otherwise, we could have Jenkins email some pipe script that will
forward to the right person depending on 1. whether it's a base
branch; and 2. whether the build was triggered by a push or by
something else. This should work if we can get the email notification
to include the needed info. E.g. the full console output currently
has "Started by timer" or "Started by an SCM change", but this is not
part of the email notification. Could work, but a bit hackish and all
kinds of things can go wrong.
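The pipe-script idea could look roughly like this. This is only a sketch of the routing logic, not our actual setup: the addresses, the set of base branches, and the assumption that the cause strings ("Started by timer" / "Started by an SCM change") make it into the email body are all hypothetical.

```python
# Sketch of the "pipe script": pick a recipient from the branch name
# and the build cause found in the notification body. All names and
# addresses below are made-up placeholders.
BASE_BRANCHES = {"stable", "devel", "testing"}

def pick_recipient(branch: str, body: str) -> str:
    # A push (SCM change) blames whoever pushed; anything else on a
    # base branch (e.g. a timer-triggered build) goes to the RM.
    scm_triggered = "Started by an SCM change" in body
    if branch in BASE_BRANCHES and not scm_triggered:
        return "rm@example.org"       # hypothetical RM address
    return "culprit@example.org"      # hypothetical culprit address
```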
Also, I've seen lots of people documenting similar crazy setups with
some of these plugins: "Run Condition", "Conditional BuildStep",
"Flexible Publish" and "Any Build step". But then it gets too
complicated for me to dive into it right now.
### How others use Jenkins
* [setup documentation](http://jenkins.debian.net/userContent/setup.html)
* configuration: `git://git.debian.org/git/users/holger/jenkins.debian.net.git`
- [Tor's jobs](https://gitweb.torproject.org/project/jenkins/jobs.git/blob/HEAD:/jobs.yaml)
- [Ubuntu QA Jenkins instance](https://jenkins.qa.ubuntu.com/)
- grml's Michael Prokop talks about autotesting in KVM during his
[talk at DebConf
they use Jenkins:
* [Jenkins instance](http://jenkins.grml.org/)
* [debian-glue Jenkins plugin](https://github.com/mika/jenkins-debian-glue)
* [kantan](https://github.com/mika/kantan): simple test suite for
autotesting using Grml and KVM
* [Jenkins server setup documentation](https://github.com/grml/grml-server-setup/blob/master/jenkins.asciidoc)
has the tools Lars Wirzenius uses to manage his CI (Python projects
test suite, Debian packages, importing into reprepro, VM setup of
all needed stuff); the whole thing is very ad-hoc but many bits
could be used as inspiration sources.
### Jenkins for Perl projects
* [a collection of links](https://wiki.jenkins-ci.org/display/JENKINS/Perl+Projects)
on the Jenkins wiki
* an overview of the available tools: [[!cpan Task::Jenkins]]
* [a tutorial](https://logiclab.jira.com/wiki/display/OPEN/Continuous+Integration)
* [another tutorial](http://alexandre-masselot.blogspot.com/2011/12/perl-hudson-continuous-testing.html)
* use [[!cpan TAP::Formatter::JUnit]] (in Wheezy) rather than the Jenkins TAP plugin
* use `prove --timer` to know how long each test takes
### Restarting slave VMs between jobs
This question is tracked in [[!tails_ticket 9486]].
When we tackle [[!tails_ticket 5288]], if the test suite doesn't
_always_ clean up after itself properly (e.g. when tests simply hang
and time out), we might want to restart `isotesterN.lizard` between
each ISO testing job.
If such VMs are Jenkins slaves, then we can't do it as part of the job
itself, but workarounds are possible, such as having a job that
restarts and waits for the VM, then triggers another job that actually
starts the tests. Or, instead of running `jenkins-slave` on those VMs, running
one instance thereof somewhere else (in a Docker container on
`jenkins.lizard`?) and then have "restart the testing VM and wait for
it to come up" be part of the testing job.
This was discussed at least there:
That would maybe be the way to go, with 3 chained jobs:

* The first one is a wrapper that triggers the two other jobs. It is
  executed on the isotester the test job is supposed to be assigned
  to. It puts the isotester in offline mode and starts the second job,
  blocking while waiting for it to complete. This way the isotester is
  left reserved for the second job, and the isotester name can be
  passed as a build parameter to the second job. This job has low
  priority, so it waits for jobs of the second and third types to be
  completed before starting its own.
* The second job is executed on the master (which has two build
  executors). This job SSHes into said isotester and issues the
  reboot. It waits a bit and puts the node back online again. This job
  has higher priority so that it does not lag behind other wrapper
  jobs in the queue.
* The third job is the test job, run on the freshly started isotester.
  This one has high priority too, to get executed before any other
  wrapper job.
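The chaining described above could be sketched in JJB YAML roughly as follows. All job, node, and parameter names are hypothetical, and the priority settings are omitted since they depend on a queue-sorting plugin:

```yaml
# Hypothetical JJB sketch of the wrapper and reboot jobs.
- job:
    name: wrap_test_isotester1
    node: isotester1          # reserves this isotester
    builders:
      - trigger-builds:
          - project: reboot_node
            block: true       # keep the isotester reserved meanwhile
            predefined-parameters: NODE_NAME=isotester1

- job:
    name: reboot_node
    node: master
    parameters:
      - string:
          name: NODE_NAME
          default: ''
    builders:
      - shell: |
          # Sketch: reboot the isotester and wait for it to come back.
          ssh "$NODE_NAME" sudo reboot
          sleep 120
```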
Using some kind of queue sorting is necessary. Unfortunately, that is
not well supported by the current version of JJB in Debian. We'll have
to push a fix upstream, and meanwhile use the `raw` option trick in
the YAML files (which itself isn't supported by JJB in Debian yet;
hopefully the new version will leave the NEW queue soon).
Another tested but non-working option was to use the Jenkins [PostBuildScript
to issue a `shutdown -r` command at the end of the job. There are
indications that [people are using it like
already. It's supported by JJB.
There are several plugins that allow chaining jobs, which we might use
to run the test suite job after the build job of a branch.
* Jenkins native way: it's very simple, but cannot take arguments.
That's what weasel
for his Tor CI stuff.
* [BuildPipeline plugin](https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin):
More of a visualization tool; it uses the native Jenkins way of
triggering a downstream job if one wants this trigger to be automatic.
* [ParameterizedTrigger plugin](https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin):
a more complete solution than the Jenkins Native way. Can pass
arguments from one job to another, using parameters to the call of
the downstream job, or taking them from a file from the upstream job.
The downstream job can also be manually triggered, and in this case
the parameters are entered through a form in the Web interface.
Note that the latest release as of 2015-09-01 (2.28) requires
Jenkins 1.580.1 ([[!tails_ticket 10068]])
* [MultiJob plugin](https://wiki.jenkins-ci.org/display/JENKINS/Multijob+Plugin):
Seems to be a complete solution too, built on the ParameterizedTrigger
plugin and the EnvInject one. Seems a bit less deployed than the
These are all supported by JJB v0.9+.
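With the ParameterizedTrigger plugin, the JJB configuration could look along these lines (job and file names are made up): the build job archives a parameter file and the downstream test job receives its contents as build parameters.

```yaml
# Hypothetical JJB sketch: pass parameters to the test job via a
# properties file archived by the build job.
- job:
    name: build_Tails_ISO_devel
    publishers:
      - archive:
          artifacts: 'build.env,*.iso'
      - trigger-parameterized-builds:
          - project: test_Tails_ISO_devel
            condition: SUCCESS
            property-file: build.env
```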
One solution that could work without requiring additional plugins
would be to make extensive use of the EnvInject plugin, in the same
way we already use it to configure the notifications. Then we would be
able to simply use Jenkins' native way of chaining jobs:
* At the beginning of the build job, a script (in our jenkins-tools
  repo) collects every necessary parameter defined in the automated
  test blueprint and outputs them to a file in the workspace.
* This file is the one used by the build job to set up the variables
  it needs (currently only $NOTIFY_TO).
* At the end of the build job, this file is archived with the other
  artifacts.
* At the beginning of the chained test job, this file is imported in
the workspace along with the build artifacts. The EnvInject pre-build
step uses it to setup the necessary variables.
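The parameter-file round-trip in the steps above can be sketched as follows. EnvInject reads Java-properties-style `KEY=value` files; the file name and keys here are assumptions.

```python
# Sketch: write the parameter file in the build job, read it back in
# the test job (or its wrapper script). "build.env" and the keys are
# hypothetical.
def write_env_file(path, params):
    with open(path, "w") as f:
        for key, value in params.items():
            f.write(f"{key}={value}\n")

def read_env_file(path):
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and comments, as EnvInject does.
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                env[key] = value
    return env
```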
What I'm not sure about is whether Jenkins' native way can collaborate
smoothly with the EnvInject plugin. Maybe the different steps we are
talking about don't happen in an order that would fit this scenario.
It might be that we'll have to use the ParameterizedTrigger plugin. It
might also be that we don't need the EnvInject plugin in the test job,
but can just import the variables into the environment in the test
suite wrapper script.
### Passing parameters through jobs
We already specified what kind of information we want to pass from the
build job to the test job.

The ParameterizedTrigger plugin is the one usually used for that kind
of task.

Another way that seems to be possible/used with the Jenkins native job
chaining ability is to put the wanted parameters in a file that is
archived with the artifacts of the upstream job. Then the downstream
job can be configured with the EnvInject plugin we already use, to set
the necessary variables in the job environment.
### Define which $OLD_ISO to test against
It appeared in [[!tails_ticket 10117]] that this question is not so
obvious and easy to address.
The most obvious answer would be to use the previous release for all the
branches **but** feature/jessie, which would use the previously built
ISO of the same branch.
But on some occasions, an ISO can't be tested this way, because it
contains changes that affect the "set up an old Tails" part, like
changes in the Persistence Assistant, the Greeter, the Tails Installer or in
So we may need a way to encode in the Git repo that a given branch
needs to use the same value as $ISO, rather than the last release, as
$OLD_ISO.
We could use the same kind of trick as for the APT_overlay feature:
having a file in `config/ci.d/` whose presence indicates that this is
the case. OTOH, we may need something a bit more complex than a simple
boolean flag, so we may rather want to check the content of a file.
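The simple boolean-flag variant could look like this. The flag file name and the function are hypothetical; per the caveat below, this check would have to run after merging the base branch.

```python
# Sketch: decide which ISO to use as --old-iso. The flag file path
# "config/ci.d/use_same_iso" is a made-up name for illustration.
import os

def old_iso_for(checkout_dir, last_release_iso, current_iso):
    flag = os.path.join(checkout_dir, "config", "ci.d", "use_same_iso")
    # Presence of the flag means: test upgrades from the freshly
    # built ISO itself, not from the last release.
    if os.path.exists(flag):
        return current_iso
    return last_release_iso
```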
But this brings concerns about the merge of the base branch in the
feature branch and how to handle conflicts. Note that at testing time,
we'll have to merge the base branch before we look at that config
setting (because for some reason the base branch might itself require
old ISO = same).
As a first baby step, we will by default use the same ISO for both
`--old-iso` and `--iso`, except for the branches used to prepare
releases (`devel` and `stable`), so that we
know if the upgrades are broken long before the next release.
Another option that could be considered, using existing code in the
repo: use the `OLD_TAILS_ISO` flag present in `config/default.yml`.
When we release, we set its value to the released ISO, and for the
branches that need it we empty this variable, so that the tests use
the same ISO for both `--old-iso` and `--iso`.
### Retrieving the ISOs for the test
We'll need a way to retrieve the different ISOs needed for the test.

For the ISO related to the upstream build job, this shouldn't be a
problem with #9597. We can get it with either wget, or a Python script
using python-jenkins. That was the point of this ticket.
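With python-jenkins, `get_build_info()` returns a dict whose `artifacts` entries carry a `relativePath`, and Jenkins serves artifacts under `/job/<name>/<number>/artifact/<relativePath>`. A sketch of turning that into download URLs (the Jenkins URL and job name are hypothetical, and the actual fetch is left to wget or similar):

```python
# Sketch: build artifact download URLs from a python-jenkins
# get_build_info() result. Only keeps ISO files.
def artifact_urls(jenkins_url, job_name, build_info):
    number = build_info["number"]
    return [
        f"{jenkins_url}/job/{job_name}/{number}/artifact/{a['relativePath']}"
        for a in build_info.get("artifacts", [])
        if a["relativePath"].endswith(".iso")
    ]
```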
For the last release ISO, we have several means:
* Using wget to get them from http://iso-history.tails.boum.org. This
website is password protected, but we could set up another private
vhost for the isotesters.
* Using the git-annex repo directly.
The former is probably the simplest to use.