This is about [[!tails_ticket 11680]] and related tickets.

[[!toc levels=3]]

# Rationale

In 2016 we gave our main server some more RAM, as a temporary solution
to cope with our workload, and as a way to learn about how to scale
it. See [[blueprint/hardware_for_automated_tests_take2]] for our
reasoning, lots of benchmark results, and conclusions.

It's working relatively well so far, but we may need to upgrade again
soonish, and improvements are always welcome in the contributor UX
area:

 * When Jenkins has built an ISO from one of our main branches or from
   a branch that is _Ready for QA_, since October 2017 we rebuild it in
   a slightly different build environment to ensure it can be
   [[rebuilt reproducibly|blueprint/reproducible_builds]].
   This has substantially increased the number of ISO images we build,
   which sometimes creates congestion in our CI pipeline (see below
   for details).
 * On our current setup, large numbers of automated test cases are
   brittle and had to be disabled on Jenkins (`@fragile`), which has
   quite a few problematic consequences: it decreases the value our
   CI system brings to our development process, it is demotivating for
   test suite developers, it decreases the confidence developers have
   in our test suite, and it forces developers to run the full test
   suite elsewhere when they really want to validate a branch.
   Interestingly, we don't see this much brittleness anywhere else,
   not even on a replica of our Jenkins setup that uses nested
   virtualization too.
 * As we add more automated tests, and re-enable tests previously
   flagged as fragile, a full test run takes longer and longer.
   We're now up to 230-255 minutes per run (depending on how many
   concurrent jobs are running) without fragile tests.
   We can no longer make it faster by adding RAM or by adding CPUs to
   the ISO testers; only faster CPU cores would help. With a fast
   (Intel E-2134) CPU and a poor Internet connection, the same test
   suite (without fragile tests) only takes:
    - 138 minutes for 1 concurrent test suite run in a VM,
      i.e. using nested virtualization
    - 150-153 minutes for 2 concurrent test suite runs in VMs
 * Building our website takes a long while (11-15 minutes on our ISO
   builders on lizard, i.e. 20% of the entire ISO build time, which is
   54-70 minutes on lizard depending on how many concurrent jobs are running),
   which makes ISO builds take longer than they could. This will get worse as new
   languages are added to our website. This is a single-threaded task,
   so adding more CPU cores or RAM would not help: only faster CPU
   cores would fix that. For example, with a fast (Intel E-2134) CPU
   and a poor Internet connection, the ISO build only takes:
    - 25 minutes (including 5 minutes for building the website) on bare metal
    - 25 minutes for 1 concurrent build in a VM, i.e. using nested virtualization
    - 33 minutes (including ~5 minutes for building the website)
      for 2 concurrent builds in VMs, in the worst case situation (builds
      started exactly at the same time ⇒ they both need all their vCPUs
      at the same time)
 * Waiting time in queue for ISO build and test jobs is acceptable
   most of the time, but too high during peak load periods:

   - between 2017-06-17 and 2017-12-17:
     - 4% of the test jobs had to wait for more than 1 hour.
     - 1% of the test jobs had to wait for more than 2 hours.
     - 2% of the ISO build jobs had to wait more than 1 hour.

   - between 2018-05-01 and 2018-11-30:
     - We've run 3342 ISO test jobs; median duration: 195 minutes.
     - 7% of the test jobs had to wait for more than 1 hour.
     - 3% of the test jobs had to wait for more than 2 hours.
     - We've run 3355 successful ISO build jobs; median duration: 60 minutes.
     - 7.2% of the ISO build jobs had to wait more than 15 minutes.
     - 2% of the ISO build jobs had to wait more than 1 hour.
     - We've run 3355 `reproducibly_build_*` jobs; median duration: 70 minutes.
     - 10% of the `reproducibly_build_*` jobs had to wait more than 15 minutes.
     - 3.2% of the `reproducibly_build_*` jobs had to wait more than 1 hour.

   That's not many jobs of course, but this congestion happens
   precisely when we need results from our CI infrastructure ASAP, be
   it because there's intense ongoing development or because we're
   reviewing and merging lots of branches close to a code freeze, so
   these delays hurt our development and release process. (For one way
   to compute such queue statistics, see the sketch after this list.)

 * Our current server was purchased at the end of 2014. The hardware
   can last quite a few more years, but we should plan (at least
   budget-wise) for replacing it when it is 5 years old, at the end of
   2019 at the latest.
 * The Tails community keeps needing new services; some of them need
   to be hosted on hardware we control for security/privacy reasons
   (which is not the case for our CI system):
   - Added already:
     - self-host our website
     - Schleuder
   - Will be added soon:
     - [[!tails_ticket 15919 desc="Redmine"]]
   - Under consideration:
     - [[!tails_ticket 14601 desc="Matomo"]] will require huge amounts
       of resources and put quite some load on the system where we run
       it; it needs to be hosted on hardware we control.
     - [[!tails_ticket 9960 desc="Request tracker for help desk"]]:
       unknown resource requirements; needs to be hosted on hardware
       we control.
   - WIP and will need more resources once they reach production
     status or are used more often:
     - Weblate is a serious CPU and I/O consumer; besides, part of its
       job is to rebuild the website, which is affected by the
       single-threaded task performance limitations described above.
     - survey platform
 * We're allowing more and more people to use our CI infrastructure.
   This may eventually increase its resource needs.
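
As a rough illustration of how queue statistics like the ones quoted
above can be computed from exported Jenkins job data, here is
a minimal Python sketch; the CSV file name, column names and job name
prefix are hypothetical and would have to be adapted to however the
data is actually exported from Jenkins.

    #!/usr/bin/python3
    # Minimal sketch: compute queue wait-time statistics from exported
    # Jenkins job data. The CSV layout (columns "job", "wait_seconds",
    # "duration_seconds") is hypothetical and must be adapted to the
    # actual export format.
    import csv
    import statistics

    def wait_stats(csv_path, job_prefix):
        waits, durations = [], []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                if not row["job"].startswith(job_prefix):
                    continue
                waits.append(float(row["wait_seconds"]))
                durations.append(float(row["duration_seconds"]))
        total = len(waits)
        return {
            "jobs": total,
            "median_duration_minutes": statistics.median(durations) / 60,
            "waited_more_than_1h_%": 100 * sum(w > 3600 for w in waits) / total,
            "waited_more_than_2h_%": 100 * sum(w > 7200 for w in waits) / total,
        }

    if __name__ == "__main__":
        print(wait_stats("jenkins_jobs.csv", "test_Tails_ISO_"))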

# Options

## Bare metal server dedicated to CI

Hard to tell whether this would fix our test suite fragility problems,
and hard to specify what hardware we need. If we get it wrong, we will
likely have to wait another 5 years before we try again ⇒ we need to
rent essentially the exact hardware we're looking at so we can
benchmark it before buying.

Pros:

 * No initial development nor skills to learn: we can run our test
   suite in exactly the same way as we currently do.
 * Can provide hardware redundancy in case lizard suddenly dies.
 * We control the hardware and have a good relationship with
   a friendly colocation provider.

Cons:

 * High initial money investment… unless we get it sponsored or get
   a big discount from a vendor.
 * On-going cost for hosting a second server.

Extra options:

 * If we want to drop nested virtualization to get more performance,
   then we have non-negligible development costs and hard sysadmin
   problems to solve ([[!tails_ticket 9486]]):
   - We currently _reboot_ isotesters between test suite runs ⇒ if we
     go this way we need to learn how to clean up after various kinds
     of test suite failure.
   - Our test suite currently assumes only one instance is running on
     a given system ⇒ if we go this way we have to remove
     this limitation.

Specs:

 * For simplicity's sake, the following assumes Jenkins workers that
   are each able to run all the kinds of jobs we have; the resulting
   totals are summarized in the sketch after this list.
 * CPU: assuming 2 cores (4 hyperthreads) per Jenkins worker, for
   8 workers, we need 16 cores at 3.5 GHz base frequency, which is
   roughly equivalent to 4 × the Intel NUC mentioned below.
   Our options are:
    - 4 × [quad-core CPU](https://ark.intel.com/Search/FeatureFilter?productType=processors&CoreCountMin=4&ClockSpeedMhzMin=3500&StatusCodeId=4&MarketSegment=Server&ExtendedPageTables=true&VTD=true)
      (on 2018-12-01, one result: [Xeon Gold 5122](https://ark.intel.com/products/120475/Intel-Xeon-Gold-5122-Processor-16-5M-Cache-3-60-GHz-) → 4×105 W = 420 W)
    - 2 × [octo-core CPUs](https://ark.intel.com/Search/FeatureFilter?productType=processors&CoreCountMin=8&ClockSpeedMhzMin=3500&StatusCodeId=4&MarketSegment=Server&ExtendedPageTables=true&VTD=true)
      (on 2018-12-01, one result: [Xeon Gold 6144](https://ark.intel.com/products/124943/Intel-Xeon-Gold-6144-Processor-24-75M-Cache-3-50-GHz-) → 2×150 W = 300 W)
    - 1 × [16-core CPU](https://ark.intel.com/Search/FeatureFilter?productType=processors&CoreCountMin=16&ClockSpeedMhzMin=3500&StatusCodeId=4&MarketSegment=Server&ExtendedPageTables=true&VTD=true)
      (2018-12-01: no result)
    - Higher-density systems, with 2+ servers in a chassis e.g.
      Supermicro Twin solutions, might allow using cheaper CPUs that
      don't support multi-processor setups.
 * RAM:  
   + 208 GB = 26 GB × 8 Jenkins workers  
   + Jenkins VM + host system + a few accessory VMs  
   = round to 256 GB; 192 GB might be feasible with super fast storage
   (at least 4 × NVMe × 2 for RAID-1) if that's cheaper
 * storage:  
   + 480 GB = 60 GB × 8 Jenkins workers  
   + 600 GB for the Jenkins artifacts store  
   +  70 GB for APT cacher (make it cache ISO history too)  
   + Jenkins VM + host system + a few accessory VMs  
   = round to 1.5 TB × 2 (RAID-1)
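
To double-check the totals above, a purely illustrative sketch using
the per-worker figures from this list (the overhead of the Jenkins VM,
host system and accessory VMs is not quantified here, hence the
rounding up in the list above):

    #!/usr/bin/python3
    # Back-of-the-envelope sizing check for a dedicated bare metal CI
    # server, using the per-worker figures listed above.
    WORKERS = 8

    cpu_cores = 2 * WORKERS                # 16 cores at >= 3.5 GHz base frequency
    ram_gb = 26 * WORKERS                  # 208 GB for the workers alone -> round to 256 GB
    storage_gb = 60 * WORKERS + 600 + 70   # workers + artifacts + APT cacher = 1150 GB
                                           # -> round to 1.5 TB per RAID-1 member

    print(f"{cpu_cores} cores, >= {ram_gb} GB RAM, >= {storage_gb} GB storage")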

<a id="hacker-option"></a>

## Custom-built cluster of consumer-grade hardware dedicated to CI, aka. the hacker option

For example, we could stuff 4-6 × Intel NUC or similar
together in a custom case, with whatever cooling, PoE and network boot
system this high-density cluster would need. Each of these nodes
should be able to run 2 Jenkins workers.

### Pros and cons

Pros:

 * Potentially scalable: if there's room left we can add more nodes in
   the future.
 * As fast as, or even faster than, server-grade hardware.
 * We already have one such node to play with, as well as benchmark
   results for it.

Cons:

 * Lots of initial research and development: casing, cooling, hosting,
   Power over Ethernet, network boot, remote administration.
 * High initial money investment.
 * Hosting this is a hard sell for colocation providers.
 * On-going cost for hosting this cluster.

### Availability

 * Intel: as of 2018-12-01, none of the eighth generation NUC8i7 models
   support vPro, so the fastest models with vPro remain those that
   share an
   [i7-8650U](https://ark.intel.com/products/124968/Intel-Core-i7-8650U-Processor-8M-Cache-up-to-4-20-GHz-)
   CPU, the UCFF (4"x4") form factor, and a 12-24 V DC input voltage:
   - kit: [NUC7i7DNHE](https://ark.intel.com/products/130393/Intel-NUC-Kit-NUC7i7DNHE),
     [NUC7i7DNKE](https://ark.intel.com/products/130392/Intel-NUC-Kit-NUC7i7DNKE).
   - board: [NUC7i7DNBE](https://ark.intel.com/products/130394/Intel-NUC-Board-NUC7i7DNBE)
 * Other vendors have started selling UCFF boards/kits with fast CPUs.
 * Similar smallish [[!wikipedia Computer_form_factor desc="form
   factors"]] would be worth investigating, e.g. there are plenty of
   [[!wikipedia Mini-ITX]] options on the market that could give us
   the high density we need.
   - [Supermicro X11SCL-IF](https://www.supermicro.com/products/motherboard/X11/X11SCL-iF.cfm), $200
      + a [Xeon E-series CPU](https://www.intel.com/content/www/us/en/products/processors/xeon/e-processors.html?Intel+Technology=900393), 300€
     + 2 × 16GB RAM Unbuffered ECC UDIMM, DDR4-2666MHz (e.g. CT16G4WFD8266) $400
     + CPU heatsink SNK-P0049A4 $50
     + mini-ITX case with power supply (e.g. Antec ISK310-150) 95€
     + SSD M.2 PCI-E 3.0 x4 250GB 90€
   - ASRock mini-ITX motherboards with IPMI that support suitable CPUs:
     E3C226D2I, E3C236D4I-44E85, EPC612D4I
    - Supermicro X10SDV series boards are affordable but only support
      CPUs that are too slow for our needs
    - nicer Supermicro options, e.g. X11SSV-M4F, are too expensive
      (1000€ for the motherboard alone)
 * Supermicro SuperServer E300-9D is very tempting (IPMI) but more
   expensive than a NUC.

### Remote management

 * AMT (vPro) can be a pain as it shares the Ethernet interface
   with the OS. IPMI would be a big plus as it integrates well
   into the colo's existing setup.

<a id="hacker-option-benchmark"></a>

### Benchmarking results

Summary:

- Twice as fast as lizard with 1 Jenkins executor on the node, which is
  able to run one build or test job at a time, without nested
  virtualization.

- With 2 Jenkins executors on the node, each in its own VM that's
  able to run one build or test job at a time, with nested virtualization:
  - light load, i.e. only one concurrent build/test: each build + test takes
    43% less time than on lizard
  - heavy load, i.e. two concurrent builds/tests: each build + test takes
    44% less time than on lizard

For detailed numbers, see above on this page: look for "E-2134".

So if we had, say, 4 such boxes in a case, each with 2 Jenkins
workers, in 24h they would build, reproduce and test ~55 branches,
while during the same period the 9 VMs on lizard build, reproduce
and test ~40 branches (see the sketch below for the arithmetic).
Even adding only 2 such boxes would increase the maximum throughput
of our CI by 69% and immensely lower latency during heavy load times.
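
One plausible way to arrive at figures of this order, using durations
quoted elsewhere on this page (this is an illustration, not
necessarily the exact calculation behind the numbers above; the
reproducibility build on the NUC-class hardware is assumed to take as
long as the first build):

    #!/usr/bin/python3
    # Rough CI throughput illustration: minutes per branch = ISO build
    # + reproducibility build + full test run, using durations quoted
    # elsewhere on this page.
    MINUTES_PER_DAY = 24 * 60

    def branches_per_day(workers, build_min, reproduce_min, test_min):
        return workers * MINUTES_PER_DAY / (build_min + reproduce_min + test_min)

    # lizard: 9 builder/tester VMs, median durations from the 2018 statistics
    print(branches_per_day(9, 60, 70, 195))        # ~40 branches/day

    # 4 NUC-class boxes with 2 Jenkins workers each, E-2134 benchmark
    # durations under heavy load (2 concurrent jobs per box)
    print(branches_per_day(4 * 2, 33, 33, 152))    # ~53 branches/day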

This is assuming perfect load distribution, which requires VMs that
can run both build and test jobs. It works fine in limited local
testing, but this is not what we have set up on lizard at the moment:
there's a risk that a failed test job leaves the system in bad shape
and breaks the following build job, as we don't reboot before build
jobs. We might need to either make test jobs more robust on failure,
or start rebooting VMs before build jobs as well.

## Run builds and/or tests in the cloud

<div class="caution">
Some of the following numbers are outdated in the sense that they don't
take into account that we'll build and test USB images as well soon,
which impacts mostly the amount of data that need to be 1. transferred
between the Jenkins master and workers; 2. stored on the Jenkins master.
</div>

EC2 does not support running KVM inside their VMs yet. Both Azure and
Google Cloud support it. OpenStack supports it too as long as the
cloud is run on KVM (e.g.
[OVH Public Cloud](https://www.openstack.org/marketplace/public-clouds/ovh-group/ovh-public-cloud/),
[OSU Open Source Lab](https://osuosl.org/services/hosting/details/)).
ProfitBricks would likely work too as their cloud is based on KVM.

There's a Jenkins plugin for every major cloud provider that allows
starting instances on demand when needed, up to pre-defined limits,
and shuts them down after a configurable idle time.

After building an ISO, we copy artifacts from the ISO builder to the
Jenkins master (and thus to nightly.t.b.o), and then from the Jenkins
master to another ISO builder (if the branch is _Ready for QA_ or one
of our main branches) and to one ISO tester (in any case), which run
downstream jobs. These copies are blocking operations in our feedback
loop. So:

- If the network connection between pieces of our CI system was
  too slow, the performance benefits of building and testing
  faster may vanish.

  Assuming a 1.2 GB ISO, 3.5 minutes should be enough for a copy
  (based on benchmarking a download of a Debian ISO image from lizard)
  ⇒ 2 or 3 × 3.5 = 7 or 10.5 minutes for an ISO build; compared to 1.5
  minutes to the Jenkins master + 20 seconds to the 2nd ISO builder +
  15 seconds to the ISO tester =~ 2 minutes on lizard currently.
  In the worst case (Jenkins master on our infrastructure, Jenkins
  workers in the cloud) it adds 5 or 8.5 minutes to the feedback loop,
  which is certainly not negligible but is not a deal breaker either
  (see the sketch after this list).

- If data transfers between pieces of our CI system cost money, we
  would need to estimate how much these copies would cost. On OVH
  Public Cloud, data transfers to/from the Internet are included in
  the price of the instance, so let's ignore this.
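
To make the copy-time arithmetic above explicit, a minimal sketch (the
3.5 minutes per copy is the estimate quoted above, derived from the
Debian ISO download benchmark; the bandwidth figure it implies is only
a rough derived value):

    #!/usr/bin/python3
    # Illustration of the artifact-copy overhead estimated above: one
    # copy of a 1.2 GB ISO is assumed to take ~3.5 minutes.
    ISO_GB = 1.2
    MINUTES_PER_COPY = 3.5
    LIZARD_COPY_MINUTES = 2    # ~2 minutes total on lizard currently

    implied_mbit_s = ISO_GB * 8 * 1000 / (MINUTES_PER_COPY * 60)
    print(f"implied effective bandwidth: ~{implied_mbit_s:.0f} Mbit/s")

    # 2 copies (Jenkins master + ISO tester) or 3 copies (plus the 2nd
    # ISO builder when the branch also gets a reproducibility build)
    for copies in (2, 3):
        total = copies * MINUTES_PER_COPY
        print(f"{copies} copies: {total:.1f} min, "
              f"i.e. ~{total - LIZARD_COPY_MINUTES:.1f} min added to the feedback loop")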

One way to avoid this problem entirely is to run our Jenkins master
and nightly.t.b.o in the cloud as well.

Pros:

 * Scalable as much as we can afford, both to react to varying
   workloads in the short term (some days we build and test tons of
   ISO images, some days a lot fewer), and to adjust to changing needs
   in the long term.
 * No initial money investment.
 * No hardware failures we have to deal with.
 * We can try various instance types until we find the right one, as
   opposed to bare metal, which requires careful planning and
   somewhat-informed guesses (mistakes in this area can only be fixed
   years later: for example, choosing low-voltage CPUs, which are
   suboptimal for our workload).
 * Frees lots of resources on our current virtualization host, which
   can be reused for other purposes. And if we don't need these
   resources, then our next bare metal server can be much cheaper,
   both in terms of initial investment and on-going costs (it will
   suck less power).

Cons:

 * We need to learn how to manage systems in the cloud, how to deal
   with billing, and how to control these systems from Jenkins.

 * On-going cost: renting resources costs money every month.

    Very rough estimate, assuming we run all ISO builds and tests on
    dynamic OVH `C2-15` instances (4 vCores at 3.1 GHz, 15 GB RAM, 100
    GB SSD), assuming they perform exactly like my local Jenkins (4
    i7-6770HQ vCores at 2.60GHz, 15 GB RAM), and assuming that no VAT
    applies (these estimates are consolidated in the sketch after this
    list):

    - builds & tests:
      (30 minutes/build * 450 builds/month + 105 minutes/test * 350 tests/month)
      / 60 * 0.173€ = 145€/month
    - second build for reproducibility ([[!tails_ticket 13436]]):
      30 minutes / 60 * 250 builds/month * 0.173€ =  22€/month
    - total = 167€/month

   Now, to be more accurate:

    - Likely these instances will be faster than my local Jenkins,
      thanks to higher CPU clock rate, which should lower the actual
      costs; but only actual testing will give us more
      precise numbers.

    - Running a well chosen number of static instances would probably
      lower these costs thanks to the discount when paying per month.
      Also, booting a dynamic instance and configuring it takes some
      additional time, which costs money and decreases performance.

      We need to evaluate how many static instances (kept running at
      all times and paid per month) we run and how many dynamic
      instances (spawned on demand and paid per hour) we allow. E.g.
      on OVH public cloud, a dynamic C2-15 instance costs more than
      a static one once it runs more than 50% of the time. Thanks to
      the _Cluster Statistics_ Jenkins plugin, once we run this in the
      cloud we'll have the data we need to optimize this; it should be
      easy to script it so we can update these settings from time
      to time.

    - We need to add the cost of hosting our Jenkins master and
      nightly.t.b.o in the same cloud, or the cost of transferring
      build artifacts between that cloud and lizard.

      Our Jenkins master is currently allocated 2.6 GB of RAM and
      2 vcpus. An OVH S1-8 (13€/month) or B2-7 (22€/month) static
      instance should be enough. We estimated that 300 GB of storage
      would be enough at least until the end of 2018. Our metrics say
      that the storage volume that hosts Jenkins artifacts often makes
      good use of 1000-2000 IOPS, so an OVH "High Speed Volume"
      (0.08€/month/GB) would be better suited even though in practice
      only a small part of these 300 GB needs to be that fast, and
      possibly a slower "Classic Volume" might perform well enough.
      So:

       * worst case: 22 + 300×0.08 = 46€/month
       * best case: 13 + 300×0.04 = 25€/month

    - In theory we could keep running _some_ of our builds on our own
      infra instead of in the cloud: one option is that the cloud
      would only be used during peak load times for builds (but always
      used for tests in order to fix our test suite brittleness
      problems, hopefully). But if we do that, we don't improve ISO
      build performance in most cases, and the build artifacts copy
      problems surface, which costs performance and some development
      time to optimize things a bit:

       * If we run the Jenkins master on our own infra: only artifacts
         of ISO builds run in the cloud during peak load times need to
         be downloaded to lizard; we could force the 2nd ISO build to
         run locally so we avoid having to upload these artifacts to
         the cloud, or we could optimize the 2nd ISO build job to
         retrieve the 1st ISO only when the 2 ISOs differ and we need
         to run diffoscope on them (in which case we also need to
         download the 2nd ISO to archive it on the Jenkins master).
         But all ISO build artifacts must be uploaded to the nodes
         that run the test suite in the cloud.

       * If we run the Jenkins master in the cloud: most of the time
         we need to upload the ISOs built on lizard there; then we
         need to download them again for the 2nd build on lizard as
         well (unless we do something clever to keep them around for
         the 2nd build, or force the 2nd build to run in the cloud
         too); but at this point they're already available out there
         in the cloud for the test suite downstream job. During peak
         times, the difference is that ISOs built in the cloud are
         already there for everything else that follows (assuming we
         force the 2nd build to run in the cloud too).

 * We need to trust a third-party somewhat.

 * To make the whole thing more flexible and easier to manage, it
   would be good to have the same nodes able to run both builds and
   tests. Not sure what it would take and what the consequences
   would be.
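
The cost estimates above can be consolidated as follows; all prices,
durations and job counts are the ones quoted above (still assuming no
VAT applies), and the best/worst cases only differ in the instance
type and storage volume chosen for the Jenkins master:

    #!/usr/bin/python3
    # Rough consolidation of the monthly cloud cost estimate above,
    # using dynamic OVH C2-15 instances at 0.173 EUR/hour and the job
    # counts and durations quoted above.
    HOURLY_EUR = 0.173

    def monthly_cost(minutes_per_job, jobs_per_month):
        return minutes_per_job / 60 * jobs_per_month * HOURLY_EUR

    builds = monthly_cost(30, 450)            # ~39 EUR/month
    tests = monthly_cost(105, 350)            # ~106 EUR/month
    reproducibility = monthly_cost(30, 250)   # ~22 EUR/month
    workers = builds + tests + reproducibility

    # Jenkins master + nightly.t.b.o in the same cloud:
    # instance + 300 GB of block storage
    best_case = 13 + 300 * 0.04     # S1-8 + "Classic Volume": 25 EUR/month
    worst_case = 22 + 300 * 0.08    # B2-7 + "High Speed Volume": 46 EUR/month

    print(f"workers: ~{workers:.0f} EUR/month")
    print(f"total: ~{workers + best_case:.0f} to ~{workers + worst_case:.0f} EUR/month")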

We could request a grant from the cloud provider to experiment with
this approach.
See
[Arturo's report about how OONI took advantage of the AWS grant program](https://lists.torproject.org/pipermail/tor-project/2017-August/001391.html).

## Dismissed options

### Replace lizard

Dismissed: our CI workload has very specific needs that are better
served by dedicated hardware; trying to host everything on a single
box leads to crazy hardware specs that are hard to match.

Pros:

 * No initial development nor skills to learn: we can run our test
   suite in exactly the same way as we currently do.
 * On-going cost increases only slightly (we probably won't get
   low-voltage CPUs this time).
 * We can sell the current hardware while it's still current, and get
   some of our bucks back.

Cons:

 * High initial money investment.
 * Hard to tell whether this would fix our test suite fragility
   problems, and we'll only know after we've spent lots of money.
 * Hard to specify what hardware we need. If we get it wrong, we will
   likely have to wait another 5 years before we try again.

<a id="plan"></a>

# The plan

1. <strike>Check how the sysadmins team feels about the cloud option: are
   there any blockers, for example wrt. ethics, security, privacy,
   anything else?</strike> → we're no big fans of using other people's
   computers but if that's the best option we can do it

2. Keep gathering data about our needs while going through the next steps:
   - upcoming services [intrigeri] (last updated: 2018-12-01)

3. Describe our needs for each option:
   - 2nd bare metal server-grade machine:
     * <strike>hardware specs [intrigeri]</strike> [DONE]
     * hosting needs (high power consumption) [intrigeri]
   - cloud (nested KVM, management tools in Debian), iff. the new
     sysadmin hired in 2019 is excited to learn and use
     such technologies.
   - the hacker option: hosting

4. Ask potential hardware/VM donors (e.g. HPE, ProfitBricks) if they
   would happily satisfy our needs for free or with a big discount.
   If they can do that, then let's do it. Otherwise, keep reading.
   [intrigeri]

5. Benchmark how our workload would be handled by the options we have
   no data about yet:
   - Rent a bare metal server for a short time, run CI jobs in
     a realistic way, measure. [intrigeri]
   - Rent cloud VMs for a short time, run CI jobs in a realistic way,
     measure.

6. Decide what we do and look into the details, e.g.:
   - bare metal options: where to host it?
   - cloud: refresh list of suitable providers, e.g. check if more
     providers offer nested KVM, if the OSU Open Source Lab offering
     is ready, and ask friendly potential cloud providers such as
     universities, HPC cluster admins, etc.