Diffstat (limited to 'wiki/src/blueprint/automated_builds_and_tests/jenkins.mdwn')
1 files changed, 102 insertions, 196 deletions
diff --git a/wiki/src/blueprint/automated_builds_and_tests/jenkins.mdwn b/wiki/src/blueprint/automated_builds_and_tests/jenkins.mdwn
index c53d05b..0e9902c 100644
@@ -1,152 +1,41 @@
+[[!meta title="Automated tests implementation details"]]
+For Jenkins resources, see [[blueprint/automated_builds_and_tests/resources]].
-- [Jenkins Best Practices](https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Best+Practices)
- * [Git plugin](https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin)
- * [Copy Artifact plugin](https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin)
-   can be used to run a test job against the result of a build job,
-   e.g. for Debian packages (think Lintian) or Tails ISO images; see
-   grml's setup that uses it.
-- the [jenkins](http://jujucharms.com/charms/precise/jenkins) and
- related JuJu charms may be good sources of inspiration for deployment.
-- [[!cpan Net-Jenkins]] (not in Debian) allows interacting with
- a Jenkins server: create and start jobs, get information about
- builds, etc.
-- [Job builder](http://ci.openstack.org/jenkins-job-builder/) provides
- one-way (Git to Jenkins) jobs synchronization; it's in Debian sid.
- * [configuration documentation](http://ci.openstack.org/jenkins-job-builder/configuration.html)
- * Debian uses it in their `update_jdn.sh`: it runs `jenkins-jobs
- update $config` after importing updated YAML job config files
- from Git.
- * Tor [uses
- it](https://gitweb.torproject.org/project/jenkins/jobs.git/tree) too.
-- jenkins.debian.net uses the [SCM Sync configuration plugin](https://wiki.jenkins-ci.org/display/JENKINS/SCM+Sync+configuration+plugin),
- which apparently handles committing configuration changes done in the
- web interface to the VCS, and maybe more.
-- [jenkins-yaml](https://github.com/varnish/jenkins-yaml) might make
- it easy to generate a large number of similar Jenkins jobs, e.g.
- one per branch.
-- [jenkins_jobs puppet module](http://tradeshift.com/blog/tstech-managing-jenkins-job-configurations-by-puppet/)
-### Visible read-only on the web
-We'd like our Jenkins instance to be visible read-only on the web.
-We'd rather not rely on Jenkins authentication / authorization to
-enforce this read-only policy. We'd rather see the frontend reverse
-proxy take care of this.
-Some method should return the list of URL prefixes that we want to
-allow, and we could forbid anything else.
-The Reverse Proxy Auth Jenkins plugin can be useful as an example of
-this method.
-- [sample nginx configuration](https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+Ubuntu)
-- [IRC plugin](https://wiki.jenkins-ci.org/display/JENKINS/IRC+Plugin),
- but I'm told that the Jenkins email notifications are way nicer
- than what this plugin can do, so see below for a better way to do it.
-- [[!cpan Jenkins-NotificationListener]] is a server that listens to
- messages from the Jenkins Notification plugin.
-### Notifying different people depending on what triggered the build
-At least the obvious candidate (the Email-ext plugin) doesn't seem able
-to email different recipients depending on what triggered the build
-out-of-the-box. But apparently, one can set up two 'Script - After
-Build' email triggers in the Email-ext configuration: one emails the
-culprit, the other emails the RM. Each trigger then fires or not
-depending on a variable we set during the build, based on what
-triggered it. This is likely the cleanest and simplest solution.
-Otherwise, we could have Jenkins email some pipe script that would
-forward to the right person depending on 1. whether it's a base
-branch; and 2. whether the build was triggered by a push or by
-something else. This should work if we can get the email notification
-to include the needed info. E.g. the full console output currently
-has "Started by timer" or "Started by an SCM change", but this is not
-part of the email notification. It could work, but it's a bit hackish,
-and all kinds of things can go wrong.
-Also, I've seen lots of people documenting similarly crazy setups with
-some of these plugins: "Run Condition", "Conditional BuildStep",
-"Flexible Publish" and "Any Build step". But it gets too complicated
-for me to dive into right now.
-### How others use Jenkins
- * [setup documentation](http://jenkins.debian.net/userContent/setup.html)
- * configuration: `git://git.debian.org/git/users/holger/jenkins.debian.net.git`
-- [Tor's jobs](https://gitweb.torproject.org/project/jenkins/jobs.git/blob/HEAD:/jobs.yaml)
-- [Ubuntu QA Jenkins instance](https://jenkins.qa.ubuntu.com/)
-- grml's Michael Prokop talks about autotesting in KVM during his
- talk at DebConf; they use Jenkins:
- * [Jenkins instance](http://jenkins.grml.org/)
- * [unittests](https://github.com/grml/grml-unittests)
- * [debian-glue Jenkins plugin](https://github.com/mika/jenkins-debian-glue)
- * [kantan](https://github.com/mika/kantan): simple test suite for
- autotesting using Grml and KVM
- * [Jenkins server setup documentation](https://github.com/grml/grml-server-setup/blob/master/jenkins.asciidoc)
-- Lars Wirzenius's setup has the tools he uses to manage his CI
- (Python projects test suite, Debian packages, importing into
- reprepro, VM setup of all needed stuff); the whole thing is very
- ad-hoc but many bits could be used as sources of inspiration.
-### Jenkins for Perl projects
-* [a collection of links](https://wiki.jenkins-ci.org/display/JENKINS/Perl+Projects)
- on the Jenkins wiki
-* an overview of the available tools: [[!cpan Task::Jenkins]]
-* [a tutorial](https://logiclab.jira.com/wiki/display/OPEN/Continuous+Integration)
-* [another tutorial](http://alexandre-masselot.blogspot.com/2011/12/perl-hudson-continuous-testing.html)
-* use [[!cpan TAP::Formatter::JUnit]] (in Wheezy) rather than the Jenkins TAP plugin
-* use `prove --timer` to know how long each test takes
+We use code that lives in three different Git repositories to
+automatically generate the list of Jenkins jobs for branches that are
+active in the Tails main Git repo.
+The first brick is the Tails
+[[!tails_gitweb_repo pythonlib]], which extracts the list of active
+branches and outputs the needed information. This list is parsed by
+the `generate_tails_iso_jobs` script, run by a cronjob and deployed by
+our [[!tails_gitweb_repo puppet-tails]] repo.
+This script outputs YAML files compatible with Jenkins Job Builder
+(JJB). It creates one `project` for each active branch, which in turn
+uses three JJB `job templates` to create the three jobs for each
+branch: the ISO build job, the ISO test job, and a wrapper job that is
+used to start the ISO test job.
+These changes are pushed to our [[!tails_gitweb_repo jenkins-jobs]]
+Git repo by the cronjob and, thanks to their automatic deployment by
+the `tails::jenkins::master` and `tails::gitolite::hooks::jenkins_jobs`
+manifests in our [[!tails_gitweb_repo puppet-tails]] repo, these new
+changes are automatically applied to our Jenkins instance.
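+To illustrate, here is a minimal sketch of the kind of JJB YAML this
+pipeline could emit for one active branch; the project, job and
+template names are illustrative, not necessarily the exact ones our
+script generates:
+
+    - project:
+        name: devel
+        jobs:
+          - 'build_Tails_ISO_{name}'
+          - 'wrap_test_Tails_ISO_{name}'
+          - 'test_Tails_ISO_{name}'
+
+    - job-template:
+        name: 'build_Tails_ISO_{name}'
+        # Each template expands once per project, substituting {name};
+        # real builders, publishers etc. are omitted here for brevity.
+        builders:
+          - shell: 'echo "building ISO for {name}"'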
### Restarting slave VMs between jobs
This question is tracked in [[!tails_ticket 9486]].
-When we tackle [[!tails_ticket 5288]], if the test suite doesn't
+For [[!tails_ticket 5288]] to be robust enough, if the test suite doesn't
_always_ clean up after itself properly (e.g. when tests simply hang
and time out), we might want to restart `isotesterN.lizard` between
each ISO testing job.
@@ -164,44 +53,35 @@ This was discussed at least there:
-That would maybe be the way to go, with 3 chained jobs:
+We achieve this VM reboot by using 3 chained jobs (sketched below):
 * The first one is a wrapper and triggers 2 other jobs. It is executed
 on the isotester the test job is supposed to be assigned to. It puts the
 isotester in offline mode and starts the second job, blocking while
 waiting for it to complete. This way this isotester is left reserved
- for the second job, and the isotester name can be passed as a build
+ while the second job runs, and the isotester name can be passed as a build
 parameter to the second job. This job is low priority so it waits for
 jobs of the second and third types to be completed before starting its
 own run.
-* The second job is executed on the master (which has two build
+* The second job is executed on the master (which has 4 build
 executors). This job SSHes into that isotester and issues the
- reboot. It waits a bit and put the node back online again. This jobs
- is higher prio so that it is not lagging behind other wrapper jobs in
- the queue.
+ reboot. It needs to wait a reasonable amount of time for the Jenkins
+ slave to be stopped by the shutdown process, so that no job gets
+ assigned to this isotester meanwhile. Stopping this Jenkins slave
+ daemon usually takes a few seconds. During testing, 5 seconds proved
+ to be enough of a delay for that, and more would add unnecessary lag.
+ It then puts the node back online again. This job is higher priority
+ so that it does not lag behind other wrapper jobs in the queue.
* The third job is the test job, run on the freshly started isotester.
 This one is high priority too, so that it gets executed before any other wrapper
-Using some kind of queue sorting is necessary. Unfortunately, the
-priority sorting option is not well supported by the current version
-of JJB in Debian. We'll have to push upstream a fix, and meanwhile use
-the `raw` option trick in the YAML files (which itself isn't supported
-by JJB in Debian yet; hopefully the new version will leave the NEW
-queue soon).
-Another tested but non-working option was to use the Jenkins
-PostBuildScript plugin to issue a `shutdown -r` command at the end of
-the job. There are indications that people are using it like this
-already. It's supported by JJB.
+ jobs. These jobs are set to run concurrently, so that if one is
+ already running, a more recent one triggered by a new build can still
+ run instead of being blocked by the first.
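+A rough JJB sketch of the second (reboot) job, assuming it runs on the
+master and reaches the isotester over SSH; the job name, parameter
+name, priority value and CLI invocation are all made up for
+illustration:
+
+    - job:
+        name: reboot_isotester
+        node: master
+        properties:
+          # Assumes the Priority Sorter plugin: higher priority, so
+          # this job does not lag behind wrapper jobs in the queue.
+          - priority-sorter:
+              priority: 150
+        parameters:
+          - string:
+              name: ISOTESTER
+              description: 'Name of the isotester to reboot'
+        builders:
+          - shell: |
+              # Reboot the isotester, then wait for its Jenkins slave
+              # daemon to stop so no job gets assigned to it meanwhile
+              # (5 seconds proved to be enough during testing).
+              ssh "$ISOTESTER" sudo reboot
+              sleep 5
+              # Put the node back online again.
+              java -jar jenkins-cli.jar -s http://localhost:8080/ \
+                online-node "$ISOTESTER"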
There are several plugins that allow chaining jobs, which we might use
to run the test suite job after the build job of a branch.
@@ -228,33 +108,11 @@ run the test suite job following a build job of a branch.
These are all supported by JJB v0.9+.
-One solution that could work and wouldn't require additional plugins
-to manage could be to make extensive use of the EnvInject plugin, in
-the same way we already use it to configure the notification. Then we
-would be able to simply use Jenkins' native way of chaining jobs:
- * At the beginning of the build job, a script (in our jenkins-tools
-   repo) collects every necessary parameter defined in the automated
-   test blueprint and outputs them in a file in the /build-artifacts/
-   directory.
- * This file is the one used by the build job to set up the variables
-   it needs (currently only $NOTIFY_TO).
- * At the end of the build job, this file is archived with the other
-   artifacts.
- * At the beginning of the chained test job, this file is imported in
-   the workspace along with the build artifacts. The EnvInject
-   pre-build step uses it to set up the necessary variables.
-What I'm not sure about is whether Jenkins' native way can collaborate
-smoothly with the EnvInject plugin. Maybe the different steps we are
-talking about don't happen in an order that would fit this scenario.
-Might be that we'll have to use the ParameterizedTrigger plugin. Might
-also be that we don't need the EnvInject plugin in the test job, but
-could just import the variables into the environment in the test suite
-wrapper.
+As we'll have to pass some parameters, the ParameterizedTrigger plugin
+is the best candidate for us.
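+A hedged sketch of what this chaining could look like in JJB, using
+the `trigger-parameterized-builds` publisher (the job names and the
+parameter are illustrative):
+
+    - job-template:
+        name: 'build_Tails_ISO_{name}'
+        publishers:
+          # On success, trigger the wrapper job for the same branch and
+          # hand it enough context to find this build's artifacts.
+          - trigger-parameterized-builds:
+              - project: 'wrap_test_Tails_ISO_{name}'
+                condition: SUCCESS
+                predefined-parameters: UPSTREAMJOB_BUILD_NUMBER=$BUILD_NUMBER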
### Passing parameters through jobs
We already specified what kind of information we want to pass from the
build job to the test job.
@@ -262,14 +120,62 @@ build job to the test job.
The ParameterizedTriggerPlugin is the one usually used for that kind of task.
-Another way that seems to be possible/used with the Jenkins native job
-chaining ability is to put the wanted parameters in a file that is
-archived with the artifacts of the upstream job. Then the downstream
-job can be configured with the EnvInject plugin we already use, to set
-the necessary variables in the job environment.
+We'll use it for some basic parameter passing through jobs, but given
+that the test jobs will need to get a lot of parameters from the build
+job, we'll also use the EnvInject plugin we're already using, as
+sketched after this list:
+ * In the build job, a script collects every necessary parameter
+   defined in the automated test blueprint and outputs them in a file
+   in the /build-artifacts/ directory.
+ * This file is the one used by the build job to set up the variables
+   it needs (currently only $NOTIFY_TO).
+ * At the end of the build job, this file is archived with the other
+   artifacts.
+ * At the beginning of the chained test job, this file is imported in
+   the workspace along with the build artifacts. The EnvInject
+   pre-build step uses it to set up the necessary variables.
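+A sketch of the test job side, assuming the properties file is named
+`tails-build-env.list` (that name, and the exact copyartifact options,
+are assumptions):
+
+    - job-template:
+        name: 'test_Tails_ISO_{name}'
+        builders:
+          # Import the build artifacts, including the properties file
+          # written by the build job.
+          - copyartifact:
+              project: 'build_Tails_ISO_{name}'
+              which-build: specific-build
+              build-number: '$UPSTREAMJOB_BUILD_NUMBER'
+          # Load the variables defined in that file into the job
+          # environment (EnvInject).
+          - inject:
+              properties-file: tails-build-env.list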
+### Define which $OLD_ISO to test against
+It appeared in [[!tails_ticket 10117]] that this question is not as
+obvious and easy to address as it may seem.
+The most obvious answer would be to use the previous release for all the
+branches **but** feature/jessie, which would use the previously built
+ISO of the same branch.
+But on some occasions, an ISO can't be tested this way, because it
+contains changes that affect the "set up an old Tails" step, like
+changes in the Persistence Assistant, the Greeter, or the Tails
+Installer.
+So we may need a way to encode in the Git repo that a given branch
+needs to use the same value as $ISO rather than the last release as
+$OLD_ISO.
+We could use the same kind of trick as for the APT_overlay feature:
+having a file in `config/ci.d/` whose presence indicates that this is
+the case. OTOH, we may need something a bit more complex than a simple
+boolean flag, so we may rather want to check the content of a file.
+But this brings concerns about the merge of the base branch into the
+feature branch and how to handle conflicts. Note that at testing time,
+we'll have to merge the base branch before we look at that config
+setting (because for some reason the base branch might itself require
+old ISO = same).
+Another option that could be considered, using existing code in the
+repo: use the `OLD_TAILS_ISO` flag present in `config/default.yml`.
+When we release, we set its value to the released ISO, and for
+branches that need it, we empty this variable so that the test suite
+uses the same ISO for both `--old-iso` and `--iso`.
+In the end, we will by default use the same ISO for both `--old-iso`
+and `--iso`, except for the branches used to prepare releases (`devel`
+and `stable`), so that we know if the upgrades are broken long before
+the next release.
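+For illustration, here is how the `OLD_TAILS_ISO` setting in
+`config/default.yml` could be used under that scheme (the values shown
+are made up):
+
+    # On most branches: empty, so the test suite uses the freshly
+    # built ISO for both --iso and --old-iso.
+    OLD_TAILS_ISO: ""
+
+    # On `devel` and `stable`: set at release time to the last
+    # released ISO, so broken upgrades show up long before the next
+    # release.
+    # OLD_TAILS_ISO: "http://example.org/tails/tails-i386-1.7.iso"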
### Retrieving the ISOs for the test
We'll need a way to retrieve the different ISOs needed for the test.
@@ -284,4 +190,4 @@ For the last release ISO, we have several means:
 * Using an HTTP vhost for the isotesters.
* Using the git-annex repo directly.
-The former is probably the most simple to use.
+We'll use the first one, as it's easier to implement.
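+A minimal sketch of how a test job could fetch the last release ISO
+from such a vhost, as a JJB builder macro (the macro name, hostname
+and path are invented for the example):
+
+    - builder:
+        name: fetch-last-release-iso
+        builders:
+          - shell: |
+              # Download the last release ISO from the internal vhost,
+              # skipping the download if we already have it.
+              wget --no-clobber \
+                "http://example.lizard/last-release/tails.iso"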