[[!meta title="Jenkins"]]

[[!toc levels=2]]

Resources
=========

Miscellaneous
-------------

- [Jenkins Best
  Practices](https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Best+Practices)
- [plugins](https://wiki.jenkins-ci.org/display/JENKINS/Plugins)
  * [Git plugin](https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin)
  * [Copy Artifact
    plugin](https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin)
    can be used to run a test job against the result of a build job,
    e.g. for Debian packages (think Lintian) or Tails ISO images; see
    [grml's setup
    documentation](http://jenkins-debian-glue.org/getting_started/manual/)
    that uses it.
- the [jenkins](http://jujucharms.com/charms/precise/jenkins) and
  [jenkins-slave](http://jujucharms.com/charms/precise/jenkins-slave)
  JuJu charms may be good sources of inspiration for deployment
- [[!cpan Net-Jenkins]] (not in Debian) allows one to interact with
  a Jenkins server: create and start jobs, get information about
  builds, etc.

Jobs management
---------------

- [Job builder](http://ci.openstack.org/jenkins-job-builder/) provides
  one-way (Git to Jenkins) jobs synchronization; it's in Debian sid.
  * [configuration documentation](http://ci.openstack.org/jenkins-job-builder/configuration.html)
  * Debian uses it in their `update_jdn.sh`: it runs `jenkins-jobs
    update $config` after importing updated YAML job config files
    from Git.
  * Tor [uses
    it](https://gitweb.torproject.org/project/jenkins/jobs.git/tree) too.
- jenkins.debian.net uses the [SCM
  Sync](https://wiki.jenkins-ci.org/display/JENKINS/SCM+Sync+configuration+plugin)
  plugin, which apparently commits configuration changes made in the
  web interface to the VCS, and maybe more.
- [jenkins-yaml](https://github.com/varnish/jenkins-yaml) might make
  it easy to generate a large number of similar Jenkins jobs, e.g.
  one per branch.
- [jenkins_jobs puppet module](http://tradeshift.com/blog/tstech-managing-jenkins-job-configurations-by-puppet/)

Web setup
---------

### Visible read-only on the web

We'd like our Jenkins instance to be visible read-only on the web.
We'd rather not rely on Jenkins authentication / authorization to
enforce this read-only policy. We'd rather see the frontend reverse
proxy take care of this.

The
[`getUnprotectedRootActions()`](http://javadoc.jenkins-ci.org/jenkins/model/Jenkins.html#getUnprotectedRootActions())
method should return the list of URL prefixes that we want to allow.
And we could forbid anything else.

The [Reverse Proxy
Auth](https://wiki.jenkins-ci.org/display/JENKINS/Reverse+Proxy+Auth+Plugin)
Jenkins plugin provides [an example
usage](https://github.com/jenkinsci/reverse-proxy-auth-plugin/commit/72567a974960be2363107614ba3f705ec6e9b695)
of this method.
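
Whatever list `getUnprotectedRootActions()` ends up returning, the
frontend's decision boils down to a prefix match. A minimal sketch of
that policy check, in Python for illustration (the prefixes listed are
hypothetical placeholders, not Jenkins' actual list):

```python
def is_request_allowed(method, path, unprotected_prefixes):
    """Sketch of the reverse proxy policy: read-only requests pass,
    and so do the URL prefixes Jenkins declares as unprotected;
    everything else is forbidden."""
    if method in ("GET", "HEAD"):
        return True
    return any(
        path == "/" + prefix or path.startswith("/" + prefix + "/")
        for prefix in unprotected_prefixes
    )

# Hypothetical prefixes; the real list must be read from Jenkins.
PREFIXES = ["jnlpJars", "whoAmI"]
```

The actual enforcement would live in the reverse proxy configuration;
this only illustrates the decision logic we'd encode there.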

### Miscellaneous

- [sample nginx configuration](https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+Ubuntu)

Notifications
-------------

- [IRC plugin](https://wiki.jenkins-ci.org/display/JENKINS/IRC+Plugin),
  but I'm told that the Jenkins email notifications are way nicer
  than what this plugin can do, so see [a better way to do
  it](http://jenkins.debian.net/userContent/setup.html#_installing_kgb_client)
- [[!cpan Jenkins-NotificationListener]] is a server that listens to
  messages from Jenkins [Notification
  plugin](https://wiki.jenkins-ci.org/display/JENKINS/Notification+Plugin).

### Notifying different people depending on what triggered the build

The obvious candidate (the Email-ext plugin) doesn't seem able, out of
the box, to email different recipients depending on what triggered the
build. But apparently one can set up two 'Script - After Build' email
triggers in the Email-ext configuration: one emails the culprit, the
other emails the RM. Each of them then acts, or not, depending on
a variable we set during the build based on what triggered it. This is
likely the cleanest and simplest solution.

Otherwise, we could have Jenkins email some pipe script that forwards
to the right person depending on 1. whether it's a base branch; and
2. whether the build was triggered by a push or by something else.
This should work if we can get the email notification to include the
needed information. E.g. the full console output currently has
"Started by timer" or "Started by an SCM change", but this is not part
of the email notification. Could work, but it's a bit hackish and all
kinds of things can go wrong.
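
The pipe script's routing reduces to a small function over the two
criteria above. A hedged sketch (the function and parameter names are
made up for illustration; the cause strings are the ones seen in the
console output):

```python
def pick_recipients(build_cause, is_base_branch, culprit, release_managers):
    """Forward the notification to the committer for push-triggered
    builds, to the RMs for other triggers (e.g. timer) on base
    branches, and to the committer in the remaining cases."""
    if build_cause == "Started by an SCM change":
        return [culprit]
    if is_base_branch:
        return list(release_managers)
    return [culprit]
```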

Also, I've seen lots of people documenting similarly crazy setups with
some of these plugins: "Run Condition", "Conditional BuildStep",
"Flexible Publish" and "Any Build step". But then it gets too
complicated for me to dive into right now.

How others use Jenkins
----------------------

-  jenkins.debian.net's:
   * [setup documentation](http://jenkins.debian.net/userContent/setup.html)
   * configuration: `git://git.debian.org/git/users/holger/jenkins.debian.net.git`
- [Tor's jobs](https://gitweb.torproject.org/project/jenkins/jobs.git/blob/HEAD:/jobs.yaml)
- [Ubuntu QA Jenkins instance](https://jenkins.qa.ubuntu.com/)
- grml's Michael Prokop talks about autotesting in KVM during his
  [talk at DebConf
  10](http://penta.debconf.org/dc10_schedule/events/547.en.html);
  they use Jenkins:
  * [Jenkins instance](http://jenkins.grml.org/)
  * [unittests](https://github.com/grml/grml-unittests)
  * [debian-glue Jenkins plugin](https://github.com/mika/jenkins-debian-glue)
  * [kantan](https://github.com/mika/kantan): simple test suite for
    autotesting using Grml and KVM
  * [Jenkins server setup documentation](https://github.com/grml/grml-server-setup/blob/master/jenkins.asciidoc)
- [jenkinstool](http://git.gitano.org.uk/personal/liw/jenkinstool.git/)
  has the tools Lars Wirzenius uses to manage his CI (Python projects'
  test suites, Debian packages, importing into reprepro, VM setup of
  everything needed); the whole thing is very ad hoc, but many bits
  could serve as sources of inspiration.

Jenkins for Perl projects
-------------------------

* [a collection of links](https://wiki.jenkins-ci.org/display/JENKINS/Perl+Projects)
  on the Jenkins wiki
* an overview of the available tools: [[!cpan Task::Jenkins]]
* [a tutorial](https://logiclab.jira.com/wiki/display/OPEN/Continuous+Integration)
* [another tutorial](http://alexandre-masselot.blogspot.com/2011/12/perl-hudson-continuous-testing.html)
* use [[!cpan TAP::Formatter::JUnit]] (in Wheezy) rather than the Jenkins TAP plugin
* use `prove --timer` to know how long each test takes

Restarting slave VMs between jobs
---------------------------------

This question is tracked in [[!tails_ticket 9486]].

When we tackle [[!tails_ticket 5288]], if the test suite doesn't
_always_ clean up after itself properly (e.g. when tests simply hang
and time out), we might want to restart `isotesterN.lizard` between
ISO testing jobs.

If such VMs are Jenkins slaves, then we can't do it as part of the job
itself, but workarounds are possible, such as having one job restart
the VM and wait for it, then trigger another job that actually runs
the tests. Or, instead of running `jenkins-slave` on those VMs, we
could run an instance thereof somewhere else (in a Docker container on
`jenkins.lizard`?) and make "restart the testing VM and wait for it to
come up" part of the testing job.

This was discussed at least there:

* <http://jenkins-ci.361315.n4.nabble.com/How-to-reboot-a-slave-during-a-build-td4628820.html>
* <https://stackoverflow.com/questions/5543413/reconfigure-and-reboot-a-hudson-jenkins-slave-as-part-of-a-build>

That would maybe be the way to go, with 3 chained jobs:

* The first one is a wrapper that triggers the two other jobs. It is
  executed on the isotester that the test job is supposed to be
  assigned to. It puts the isotester in offline mode and starts the
  second job, blocking while waiting for it to complete. This way the
  isotester is left reserved for the second job, and the isotester
  name can be passed to it as a build parameter. This job has low
  priority, so it waits for jobs of the second and third kind to
  complete before starting its own work.
* The second job is executed on the master (which has two build
  executors). It SSHes into said isotester, issues the reboot, waits
  a bit, and puts the node back online. This job has higher priority,
  so that it does not lag behind other wrapper jobs in the queue.
* The third job is the test job, run on the freshly started isotester.
  It has high priority too, so that it is executed before any other
  wrapper job.
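
The "waits a bit, and puts the node back online" step of the second
job needs some polling logic rather than a fixed sleep. A generic
sketch, with the probe injected so the same helper works whether we
check via SSH, ping, or the Jenkins API (all names are illustrative):

```python
import time

def wait_for_node(is_up, timeout=600, interval=10,
                  sleep=time.sleep, clock=time.monotonic):
    """Poll is_up() until it returns True or the timeout expires.
    Returns True if the node came back, False otherwise."""
    deadline = clock() + timeout
    while True:
        if is_up():
            return True
        if clock() + interval > deadline:
            return False
        sleep(interval)
```

Only once this returns True would the second job flip the node back
online and let the third job start.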

Using some kind of queue sorting is necessary. Unfortunately, the
[PrioritySorter
plugin](https://wiki.jenkins-ci.org/display/JENKINS/Priority+Sorter+Plugin)
is not well supported by the current version of JJB in Debian. We'll
have to push a fix upstream, and meanwhile use the `raw` option trick
in the YAML files (which itself isn't supported by the JJB in Debian
yet; hopefully the new version will leave the NEW queue soon).

Another tested but non-working option was to use the Jenkins [PostBuildScript
plugin](https://wiki.jenkins-ci.org/display/JENKINS/PostBuildScript%20Plugin)
to issue a `shutdown -r` command at the end of the job. There are
indications that [people are using it like
this](https://stackoverflow.com/questions/11160363/execute-shell-script-after-post-build-in-jenkins)
already. It's supported by JJB.

<a id="chain"></a>

Chaining jobs
-------------

Several plugins allow chaining jobs; we might use one of them to run
the test suite job after the build job of a branch.

 * Jenkins native way: it's very simple, and cannot take arguments.
   That's what weasel
   [uses](https://gitweb.torproject.org/project/jenkins/jobs.git/tree/jobs.yaml)
   for his Tor CI stuff.
 * [BuildPipeline plugin](https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin):
   More a visualization tool, uses the native Jenkins way of triggering
   a downstream job if one wants this trigger to be automatic.
 * [ParameterizedTrigger plugin](https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin):
   a more complete solution than the native Jenkins way. It can pass
   arguments from one job to another, either as parameters in the call
   to the downstream job, or taken from a file written by the upstream
   job. The downstream job can also be triggered manually; in that
   case the parameters are entered through a form in the web
   interface. Note that the latest release as of 2015-09-01 (2.28)
   requires Jenkins 1.580.1 ([[!tails_ticket 10068]]).
 * [MultiJob plugin](https://wiki.jenkins-ci.org/display/JENKINS/Multijob+Plugin):
   Seems to be a complete solution too, built on the
   ParameterizedTrigger and EnvInject plugins. Seems a bit less widely
   deployed than the ParameterizedTrigger plugin.

These are all supported by JJB v0.9+.

One solution that could work, and wouldn't require managing additional
plugins, would be to make extensive use of the EnvInject plugin, the
same way we already use it to configure notifications. Then we could
simply use Jenkins' native way of chaining jobs:

 * At the beginning of the build job, a script (in our jenkins-tools
   repo) collects every necessary parameter defined in the automated
   test blueprint and outputs them to a file in the /build-artifacts/
   directory.
 * This file is the one used by the build job to set up the variables
   it needs (currently only $NOTIFY_TO).
 * At the end of the build job, this file is archived with the other
   artifacts.
 * At the beginning of the chained test job, this file is imported
   into the workspace along with the build artifacts. The EnvInject
   pre-build step uses it to set up the necessary variables.
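
The parameter file from the first step could be a plain KEY=value
properties file, the format EnvInject's "Properties File Path" option
reads. A minimal writer/reader sketch (file and variable names are
just examples, not the actual jenkins-tools script):

```python
def write_env_file(path, variables):
    """Write one KEY=value pair per line, properties-file style."""
    with open(path, "w") as f:
        for key, value in sorted(variables.items()):
            f.write("%s=%s\n" % (key, value))

def read_env_file(path):
    """Read the file back into a dict, skipping blanks and comments."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                env[key] = value
    return env
```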

What I'm not sure about is whether Jenkins' native way can collaborate
smoothly with the EnvInject plugin: maybe the different steps we are
talking about don't happen in an order that fits this scenario. We
might have to use the ParameterizedTrigger plugin. It might also be
that we don't need the EnvInject plugin in the test job, and can just
import the variables into the environment in the test suite wrapper
script.

Passing parameters through jobs
-------------------------------

We already specified what kind of information we want to pass from the
build job to the test job.

The ParameterizedTrigger plugin is the one usually used for this kind
of work.

Another way, which seems possible with Jenkins' native job chaining,
is to put the wanted parameters in a file that is archived with the
artifacts of the upstream job. The downstream job can then be
configured with the EnvInject plugin, which we already use, to set the
necessary variables in the job environment.

Define which $OLD_ISO to test against
-------------------------------------

It appeared in [[!tails_ticket 10117]] that this question is not so
obvious and easy to address.

The most obvious answer would be to use the previous release for all the
branches **but** feature/jessie, which would use the previously built
ISO of the same branch.

But on some occasions an ISO can't be tested this way, because it
contains changes that affect the "set up an old Tails" step, like
changes in the Persistence Assistant, the Greeter, the Tails Installer
or in syslinux.

So we may need a way to encode in the Git repo that a given branch
needs to use the same value as $ISO, rather than the last release, as
$OLD_ISO. We could use the same kind of trick as for the APT_overlay
feature: having a file in `config/ci.d/` whose presence indicates that
this is the case. OTOH, we may need something a bit more complex than
a simple boolean flag, so we may rather want to check the content of
a file.

But this raises concerns about merging the base branch into the
feature branch and handling conflicts. Note that at testing time,
we'll have to merge the base branch before we look at that config
setting (because for some reason the base branch might itself require
old ISO = same).

As a first baby step, we will by default use the same ISO for both
`--old-iso` and `--iso`, except for the branches used to prepare
releases (`devel` and `stable`), so that we
know if the upgrades are broken long before the next release.
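
The baby-step policy above boils down to a branch check. A sketch
(function name and ISO arguments are illustrative; the branch list is
the one from the paragraph above):

```python
RELEASE_BRANCHES = ("stable", "devel")

def old_iso_for(branch, current_iso, last_release_iso):
    """Default to testing upgrades from the ISO under test itself,
    except on release-preparation branches, which upgrade from the
    last release so we notice broken upgrades early."""
    if branch in RELEASE_BRANCHES:
        return last_release_iso
    return current_iso
```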

Another option that could be considered, using existing code in the
repo: use the `OLD_TAILS_ISO` flag present in `config/default.yml`:
when we release, we set its value to the released ISO, and for
branches that need it we empty this variable, so that the test uses
the same ISO for both `--old-iso` and `--iso`.

Retrieving the ISOs for the test
--------------------------------

We'll need a way to retrieve the different ISOs needed for the test.

For the ISO built by the upstream build job, this shouldn't be
a problem once [[!tails_ticket 9597]] is done: we can get it with
either wget, or a Python script using python-jenkins. That was the
point of this ticket.

For the last release ISO, we have several means:

* Using wget to get them from http://iso-history.tails.boum.org. This
  website is password-protected, but we could set up another private
  vhost for the isotesters.
* Using the git-annex repo directly.

The former is probably the simplest to use.
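
For the wget variant against the upstream build job, the archived ISO
is reachable at Jenkins' standard artifact URL layout. A small sketch
that builds such a URL (the job name and artifact path in the test are
hypothetical):

```python
def artifact_url(jenkins_url, job_name, build_number, artifact_path):
    """Build the URL of an archived artifact, following Jenkins'
    standard layout: <root>/job/<name>/<number>/artifact/<path>."""
    return "%s/job/%s/%s/artifact/%s" % (
        jenkins_url.rstrip("/"), job_name, build_number, artifact_path)
```

The resulting URL can then be fed to wget, or the same information can
be obtained through python-jenkins instead.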