author    intrigeri <>  2019-08-24 10:36:06 +0000
committer intrigeri <>  2019-08-24 10:44:15 +0000
commit    c448a322b7784b7b0fa428197a5cdc218753b75c (patch)
tree      dd0cc2b2f36f55ef4ca411a4fdde64cb5ed0a94c /vagrant
parent    cdf270b4769b8d3910c6f01fb211c0ff06e92857 (diff)
Lower VM_MEMORY_BASE to 1536M.
The previous setting is a bit tough for machines that have 16GB of RAM or a bit less. For example, one of the machines in my local Jenkins setup, where I'm prototyping our next-generation CI hardware, has 32GB of RAM and runs 2 Jenkins worker VMs, each with ~15GB of RAM. For a few weeks now I have regularly seen builds failing because there is not enough free memory to start the Vagrant build VM.

Back in November 2018, the chroot for our Buster-based image was much bigger than when building Tails 3.x. With the "ram" build option it therefore used more memory in the build tmpfs, and with the "noram" build option mksquashfs quite possibly used more RAM as well. This probably explains why we had to bump VM_MEMORY_BASE to 2048MB.

Since then, we've trimmed down our Buster-based image a lot, to the point where it's now 55 MiB smaller than 3.15. So there's a chance we can revert to lower RAM requirements for the build VM. Let's try.

This partly reverts commit 09233368a326ea739c3a1b6927c6cb59c4b10c5e.
Diffstat (limited to 'vagrant')
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/vagrant/lib/tails_build_settings.rb b/vagrant/lib/tails_build_settings.rb
index 333cf16..4c09ec5 100644
--- a/vagrant/lib/tails_build_settings.rb
+++ b/vagrant/lib/tails_build_settings.rb
@@ -7,7 +7,7 @@ VIRTUAL_MACHINE_HOSTNAME = 'vagrant-stretch'
# Approximate amount of RAM needed to run the builder's base system
# and perform a build
-VM_MEMORY_BASE = 2*1024
+VM_MEMORY_BASE = 1.5*1024
# Approximate amount of extra space needed for builds
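To illustrate the setting being changed: in Ruby, `1.5*1024` evaluates to the Float `1536.0` rather than an Integer, which a consumer may want to truncate. The sketch below is hypothetical and not the actual Tails build code; the constant `VM_MEMORY_EXTRA_FOR_RAM_BUILDS` and the helper `vm_memory_mb` are invented names for illustration, showing how a base figure like `VM_MEMORY_BASE` could be combined with an extra allowance when the "ram" build option keeps the build tree in a tmpfs.

```ruby
# Hypothetical sketch only -- not the real tails_build_settings.rb logic.

# 1.5*1024 is a Float (1536.0) in Ruby; truncate to get a whole MiB count.
VM_MEMORY_BASE = (1.5 * 1024).to_i # MiB

# Invented constant: extra RAM assumed for tmpfs-backed ("ram") builds.
VM_MEMORY_EXTRA_FOR_RAM_BUILDS = 11 * 1024 # MiB

# Return the memory to give the build VM, depending on the build option.
def vm_memory_mb(ram_build:)
  if ram_build
    VM_MEMORY_BASE + VM_MEMORY_EXTRA_FOR_RAM_BUILDS
  else
    VM_MEMORY_BASE
  end
end

puts vm_memory_mb(ram_build: false) # => 1536
```

Under this sketch, lowering `VM_MEMORY_BASE` from `2*1024` to `1.5*1024` shaves 512 MiB off every build VM, which is what makes the difference on hosts with ~16GB free per worker.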