[Qa-jenkins-scm] [Git][qa/jenkins.debian.net][master] reproducible Debian: retire machines with only 2GB of ram.
Vagrant Cascadian (@vagrant)
gitlab@salsa.debian.org
Wed May 26 20:51:34 BST 2021
Vagrant Cascadian pushed to branch master at Debian QA / jenkins.debian.net
Commits:
dbc29d39 by Vagrant Cascadian at 2021-05-26T12:50:59-07:00
reproducible Debian: retire machines with only 2GB of ram.
- - - - -
3 changed files:
- README
- THANKS.head
- bin/reproducible_build_service.sh
Changes:
=====================================
README
=====================================
@@ -118,7 +118,7 @@ Installation tests inside chroot environments.
* The (current) purpose of https://tests.reproducible-builds.org is to show the potential of reproducible builds for Debian - and six other projects currently. This is research, showing what could (and should) be done... check https://wiki.debian.org/ReproducibleBuilds for the real status of the project for Debian!
-* For Debian, five suites, 'stretch', 'buster', 'bullseye', 'unstable' and 'experimental', are tested on four architectures: 'amd64', 'i386', 'arm64' and 'armhf'. The tests are done using 'pbuilder' through several concurrent workers: 40 for 'amd64', 24 for 'i386', 32 for 'arm64' and 63 for 'armhf', which are each constantly testing packages and saving the results of these tests. There's a single link:https://salsa.debian.org/qa/jenkins.debian.net/blob/master/bin/reproducible_build_service.sh[systemd service] starting all of these link:https://salsa.debian.org/qa/jenkins.debian.net/blob/master/bin/reproducible_worker.sh[workers] which in turn launch the actual link:https://salsa.debian.org/qa/jenkins.debian.net/blob/master/bin/reproducible_build.sh[build script]. (So the actual builds and tests are happening outside the jenkins service.)
+* For Debian, five suites, 'stretch', 'buster', 'bullseye', 'unstable' and 'experimental', are tested on four architectures: 'amd64', 'i386', 'arm64' and 'armhf'. The tests are done using 'pbuilder' through several concurrent workers: 40 for 'amd64', 24 for 'i386', 32 for 'arm64' and 36 for 'armhf', which are each constantly testing packages and saving the results of these tests. There's a single link:https://salsa.debian.org/qa/jenkins.debian.net/blob/master/bin/reproducible_build_service.sh[systemd service] starting all of these link:https://salsa.debian.org/qa/jenkins.debian.net/blob/master/bin/reproducible_worker.sh[workers] which in turn launch the actual link:https://salsa.debian.org/qa/jenkins.debian.net/blob/master/bin/reproducible_build.sh[build script]. (So the actual builds and tests are happening outside the jenkins service.)
** To shutdown all the workers use: `sudo systemctl stop reproducible_build@startup.service ; /srv/jenkins/bin/reproducible_cleanup_nodes.sh`
** To start all the workers use: `sudo systemctl start reproducible_build@startup.service`
@@ -126,14 +126,11 @@ Installation tests inside chroot environments.
** for 'amd64' we are using four virtual machines, ionos(1+5+11+15)-amd64, which have 15 or 16 cores and 48gb ram each. These nodes are sponsored by link:https://jenkins.debian.net/userContent/thanks.html[IONOS].
** for 'i386' we are also using four virtual machines, ionos(2+6+12+16)-i386, which have 10 or 9 cores and 36gb ram each. ionos2+12 run emulated AMD Opteron CPUs and ionos6+16 Intel Xeon CPUs. These nodes are also sponsored by link:https://jenkins.debian.net/userContent/thanks.html[IONOS].
** for 'arm64' we are using eight "moonshot" sleds, codethink9-15-arm64, which have 8 cores and 64gb ram each. These nodes are sponsored by link:https://jenkins.debian.net/userContent/thanks.html[Codethink].
-** To test 'armhf' we are using 28 small boards hosted by vagrant@reproducible-builds.org:
+** To test 'armhf' we are using 13 small boards hosted by vagrant@reproducible-builds.org:
*** six quad-cores (cbxi4a, cbxi4b, ff4a, jtx1a, jtx1b, jtx1c) with 4gb ram,
*** one hexa-core (ff64a) with 4gb ram,
*** two quad-core (virt32a, virt64a) with 15gb ram,
*** four quad-core (virt32b, virt32c, virt64b, virt64c) with 7gb ram,
-*** two octo-cores (odxu4a, odxu4b) with 2gb ram,
-*** eleven quad-cores (wbq0, cbxi4pro0, ff2a, ff2b, odu3a, opi2a, opi2c, jtk1a, jtk1b, p64b and p64c) with 2gb ram, and
-*** two dual-core (bbx15 and cb3a) with 2gb ram each.
* We would love to have more or more powerful ARM hardware in the future, if you can help, please talk to us!
* Packages to be built are scheduled in the database via a scheduler job, which runs every hour and, if the queue is below a certain threshold, schedules four types of packages:
=====================================
THANKS.head
=====================================
@@ -22,12 +22,8 @@ link:https://jenkins.debian.net/["jenkins.debian.net"] would not be possible wit
** 5 cores and 10 GB memory for ionos9-amd64.debian.net used for rebootstrap jobs
** 4 cores and 12 GB memory for ionos10-amd64.debian.net used for chroot-installation jobs
** 9 cores and 19 GB memory for freebsd-jenkins.debian.net (also running on IONOS virtual hardware), used for building FreeBSD for t.r-b.o
- * link:https://qa.debian.org/developer.php?login=vagrant%40debian.org[Vagrant] provides and hosts 27 'armhf' systems, used for building armhf Debian packages for t.r-b.o:
+ * link:https://qa.debian.org/developer.php?login=vagrant%40debian.org[Vagrant] provides and hosts 13 'armhf' systems, used for building armhf Debian packages for t.r-b.o:
** four quad-cores with 4 GB RAM each,
- ** three octo-cores with 2 GB RAM each,
- ** one hexa-core with 2 GB RAM,
- ** twelve quad-cores with 2 GB RAM each,
- ** two dual-core with 2 GB RAM,
* servers provided by linaro for 'armhf':
** one octo-core with 32GB of ram (divided into two virtual machines running at ~15GB ram each)
** two octo-core with 16GB of ram (divided into four virtual machines running at ~7GB ram each)
=====================================
bin/reproducible_build_service.sh
=====================================
@@ -121,80 +121,49 @@ choose_nodes() {
arm64_31) NODE1=codethink16-arm64 NODE2=codethink13-arm64 ;;
arm64_32) NODE1=codethink16-arm64 NODE2=codethink15-arm64 ;;
# to choose new armhf jobs:
- # for i in cb3a bbx15 cbxi4pro0 ff2a ff2b ff64a jtk1a jtk1b odxu4a odxu4b odu3a opi2a opi2c p64b p64c wbq0 cbxi4a cbxi4b ff4a jtx1a jtx1b jtx1c virt32a virt32b virt32c virt64a virt64b virt64c; do echo "$i: " ; grep NODE1 bin/reproducible_build_service.sh|grep armhf|grep $i-armhf ; done
+ # for i in ff64a cbxi4a cbxi4b ff4a jtx1a jtx1b jtx1c virt32a virt32b virt32c virt64a virt64b virt64c; do echo "$i: " ; grep NODE1 bin/reproducible_build_service.sh|grep armhf|grep $i-armhf ; done
# 6-8 jobs for quad-cores with 15 gb ram
# 6-7 jobs for quad-cores with 7 gb ram
# 6 jobs for quad-cores with 4 gb ram
- # 4 jobs for octo-cores with 2 gb ram
- # 4 jobs for hexa-cores with 2 gb ram
- # 4 jobs for quad-cores with 2 gb ram
- # 4 jobs for dual-cores with 2 gb ram
#
# Don't forget to update README with the number of builders…!
#
- armhf_1) NODE1=bbx15-armhf-rb NODE2=jtx1a-armhf-rb ;;
- armhf_2) NODE1=bbx15-armhf-rb NODE2=virt64c-armhf-rb ;;
- armhf_3) NODE1=cb3a-armhf-rb NODE2=jtx1a-armhf-rb ;;
- armhf_4) NODE1=cb3a-armhf-rb NODE2=jtx1c-armhf-rb ;;
- armhf_5) NODE1=cbxi4a-armhf-rb NODE2=p64c-armhf-rb ;;
- armhf_6) NODE1=jtx1a-armhf-rb NODE2=ff4a-armhf-rb ;;
- armhf_7) NODE1=virt64a-armhf-rb NODE2=cbxi4b-armhf-rb ;;
- armhf_8) NODE1=p64b-armhf-rb NODE2=virt32b-armhf-rb ;;
- armhf_9) NODE1=virt64b-armhf-rb NODE2=cbxi4a-armhf-rb ;;
- armhf_10) NODE1=virt64a-armhf-rb NODE2=ff4a-armhf-rb ;;
- armhf_11) NODE1=virt32a-armhf-rb NODE2=jtx1a-armhf-rb ;;
- armhf_12) NODE1=ff2a-armhf-rb NODE2=wbq0-armhf-rb ;; # 32-bit
- armhf_13) NODE1=ff2a-armhf-rb NODE2=p64c-armhf-rb ;;
- armhf_14) NODE1=ff2b-armhf-rb NODE2=p64b-armhf-rb ;;
- armhf_15) NODE1=ff2b-armhf-rb NODE2=virt64a-armhf-rb ;;
- armhf_16) NODE1=jtx1b-armhf-rb NODE2=virt32a-armhf-rb ;;
- armhf_17) NODE1=jtx1b-armhf-rb NODE2=jtk1b-armhf-rb ;;
- armhf_18) NODE1=jtk1b-armhf-rb NODE2=virt64a-armhf-rb ;;
- armhf_19) NODE1=virt32b-armhf-rb NODE2=jtx1c-armhf-rb ;;
- armhf_20) NODE1=jtk1b-armhf-rb NODE2=virt64b-armhf-rb ;;
- armhf_21) NODE1=odu3a-armhf-rb NODE2=virt64b-armhf-rb ;;
- armhf_22) NODE1=virt64a-armhf-rb NODE2=odu3a-armhf-rb ;;
- armhf_23) NODE1=ff64a-armhf-rb NODE2=virt32b-armhf-rb ;;
- armhf_24) NODE1=virt32a-armhf-rb NODE2=ff64a-armhf-rb ;;
- armhf_25) NODE1=virt64b-armhf-rb NODE2=opi2a-armhf-rb ;;
- armhf_26) NODE1=virt64b-armhf-rb NODE2=virt32b-armhf-rb ;;
- armhf_27) NODE1=virt32b-armhf-rb NODE2=jtx1b-armhf-rb ;;
- armhf_28) NODE1=opi2a-armhf-rb NODE2=jtx1b-armhf-rb ;;
- armhf_29) NODE1=opi2a-armhf-rb NODE2=cbxi4b-armhf-rb ;;
- armhf_30) NODE1=virt32b-armhf-rb NODE2=virt64b-armhf-rb ;;
- armhf_33) NODE1=virt64a-armhf-rb NODE2=ff2a-armhf-rb ;;
- armhf_34) NODE1=ff64a-armhf-rb NODE2=virt32a-armhf-rb ;;
- armhf_35) NODE1=p64b-armhf-rb NODE2=ff2a-armhf-rb ;;
- armhf_36) NODE1=p64c-armhf-rb NODE2=ff2b-armhf-rb ;;
- armhf_38) NODE1=wbq0-armhf-rb NODE2=ff2b-armhf-rb ;; # 32-bit
- armhf_39) NODE1=virt64c-armhf-rb NODE2=cbxi4a-armhf-rb ;;
- armhf_40) NODE1=cbxi4a-armhf-rb NODE2=jtx1b-armhf-rb ;;
- armhf_41) NODE1=cbxi4a-armhf-rb NODE2=cb3a-armhf-rb ;; # 32-bit
- armhf_42) NODE1=cbxi4b-armhf-rb NODE2=bbx15-armhf-rb ;; # 32-bit
- armhf_43) NODE1=cbxi4b-armhf-rb NODE2=cb3a-armhf-rb ;; # 32-bit
- armhf_44) NODE1=cbxi4b-armhf-rb NODE2=ff64a-armhf-rb ;;
- armhf_45) NODE1=ff4a-armhf-rb NODE2=virt64b-armhf-rb ;;
- armhf_46) NODE1=virt32c-armhf-rb NODE2=jtx1c-armhf-rb ;;
- armhf_47) NODE1=jtx1a-armhf-rb NODE2=cbxi4b-armhf-rb ;;
- armhf_48) NODE1=jtx1a-armhf-rb NODE2=virt32a-armhf-rb ;;
- armhf_49) NODE1=jtx1b-armhf-rb NODE2=bbx15-armhf-rb ;;
- armhf_50) NODE1=jtx1c-armhf-rb NODE2=jtk1a-armhf-rb ;;
- armhf_51) NODE1=jtx1c-armhf-rb NODE2=cbxi4a-armhf-rb ;;
- armhf_52) NODE1=jtx1c-armhf-rb NODE2=odu3a-armhf-rb ;;
- armhf_53) NODE1=jtk1a-armhf-rb NODE2=wbq0-armhf-rb ;; # 32-bit
- armhf_54) NODE1=jtk1a-armhf-rb NODE2=ff64a-armhf-rb ;;
- armhf_55) NODE1=virt32a-armhf-rb NODE2=virt64a-armhf-rb ;;
- armhf_56) NODE1=ff64a-armhf-rb NODE2=virt32c-armhf-rb ;;
- armhf_57) NODE1=p64c-armhf-rb NODE2=opi2a-armhf-rb ;;
- armhf_58) NODE1=odu3a-armhf-rb NODE2=p64b-armhf-rb ;;
- armhf_59) NODE1=virt64c-armhf-rb NODE2=ff4a-armhf-rb ;;
- armhf_60) NODE1=virt64c-armhf-rb NODE2=virt32c-armhf-rb ;;
- armhf_61) NODE1=wbq0-armhf-rb NODE2=virt64c-armhf-rb ;;
- armhf_62) NODE1=ff4a-armhf-rb NODE2=virt64c-armhf-rb ;;
- armhf_63) NODE1=virt32c-armhf-rb NODE2=virt64a-armhf-rb ;;
- armhf_64) NODE1=virt64c-armhf-rb NODE2=jtk1a-armhf-rb ;;
- armhf_65) NODE1=virt32c-armhf-rb NODE2=jtk1b-armhf-rb ;; # 32-bit lpae
- armhf_66) NODE1=ff4a-armhf-rb NODE2=virt32c-armhf-rb ;; # 32-bit lpae
+ armhf_1) NODE1=cbxi4a-armhf-rb NODE2=jtx1a-armhf-rb ;;
+ armhf_2) NODE1=cbxi4b-armhf-rb NODE2=virt64c-armhf-rb ;;
+ armhf_3) NODE1=ff4a-armhf-rb NODE2=jtx1a-armhf-rb ;;
+ armhf_4) NODE1=jtx1a-armhf-rb NODE2=ff4a-armhf-rb ;;
+ armhf_5) NODE1=virt64a-armhf-rb NODE2=cbxi4b-armhf-rb ;;
+ armhf_6) NODE1=virt64b-armhf-rb NODE2=virt32b-armhf-rb ;;
+ armhf_7) NODE1=virt64b-armhf-rb NODE2=cbxi4a-armhf-rb ;;
+ armhf_8) NODE1=virt64a-armhf-rb NODE2=ff4a-armhf-rb ;;
+ armhf_9) NODE1=virt32a-armhf-rb NODE2=jtx1a-armhf-rb ;;
+ armhf_10) NODE1=jtx1b-armhf-rb NODE2=virt32a-armhf-rb ;;
+ armhf_11) NODE1=virt32b-armhf-rb NODE2=jtx1c-armhf-rb ;;
+ armhf_12) NODE1=virt64a-armhf-rb NODE2=cbxi4b-armhf-rb ;;
+ armhf_13) NODE1=ff64a-armhf-rb NODE2=virt32b-armhf-rb ;;
+ armhf_14) NODE1=virt32a-armhf-rb NODE2=ff64a-armhf-rb ;;
+ armhf_15) NODE1=virt64b-armhf-rb NODE2=virt32b-armhf-rb ;;
+ armhf_16) NODE1=virt32b-armhf-rb NODE2=jtx1b-armhf-rb ;;
+ armhf_17) NODE1=virt32b-armhf-rb NODE2=virt64b-armhf-rb ;;
+ armhf_18) NODE1=ff64a-armhf-rb NODE2=virt32a-armhf-rb ;;
+ armhf_19) NODE1=virt64c-armhf-rb NODE2=cbxi4a-armhf-rb ;;
+ armhf_20) NODE1=cbxi4a-armhf-rb NODE2=jtx1b-armhf-rb ;;
+ armhf_21) NODE1=cbxi4a-armhf-rb NODE2=ff64a-armhf-rb ;;
+ armhf_22) NODE1=cbxi4b-armhf-rb NODE2=virt64c-armhf-rb ;;
+ armhf_23) NODE1=cbxi4b-armhf-rb NODE2=ff64a-armhf-rb ;;
+ armhf_24) NODE1=ff4a-armhf-rb NODE2=virt64b-armhf-rb ;;
+ armhf_25) NODE1=virt32c-armhf-rb NODE2=jtx1c-armhf-rb ;;
+ armhf_26) NODE1=jtx1a-armhf-rb NODE2=cbxi4b-armhf-rb ;;
+ armhf_27) NODE1=jtx1a-armhf-rb NODE2=virt32a-armhf-rb ;;
+ armhf_28) NODE1=jtx1c-armhf-rb NODE2=cbxi4a-armhf-rb ;;
+ armhf_29) NODE1=virt32a-armhf-rb NODE2=virt64a-armhf-rb ;;
+ armhf_30) NODE1=ff64a-armhf-rb NODE2=virt32c-armhf-rb ;;
+ armhf_31) NODE1=virt64c-armhf-rb NODE2=ff4a-armhf-rb ;;
+ armhf_32) NODE1=virt64c-armhf-rb NODE2=virt32c-armhf-rb ;;
+ armhf_33) NODE1=ff4a-armhf-rb NODE2=virt64c-armhf-rb ;;
+ armhf_34) NODE1=virt32c-armhf-rb NODE2=virt64a-armhf-rb ;;
+ armhf_35) NODE1=jtx1c-armhf-rb NODE2=virt32c-armhf-rb ;;
+ armhf_36) NODE1=jtx1b-armhf-rb NODE2=virt32c-armhf-rb ;;
*) NODE1=undefined
;;
esac
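The `case` statement above maps each `armhf_N` worker to a pair of build nodes, and its comment warns to keep the README's worker count in sync with the number of case arms. A minimal sketch of that consistency check, using made-up sample lines rather than the real script (against the actual repository you would grep `bin/reproducible_build_service.sh` instead):

```shell
# Count the armhf_N) case arms; after this commit the README should say 36.
# 'sample' is hypothetical stand-in data, not the real script contents.
sample='armhf_1) NODE1=cbxi4a-armhf-rb NODE2=jtx1a-armhf-rb ;;
armhf_2) NODE1=cbxi4b-armhf-rb NODE2=virt64c-armhf-rb ;;
armhf_3) NODE1=ff4a-armhf-rb NODE2=jtx1a-armhf-rb ;;'
printf '%s\n' "$sample" | grep -c '^armhf_[0-9]*)'   # prints 3
```

The same one-liner, pointed at the real file, is an easy way to catch the "Don't forget to update README" reminder before pushing.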
View it on GitLab: https://salsa.debian.org/qa/jenkins.debian.net/-/commit/dbc29d395582e3ee67769545f4713d3a49a0ad4f
--
You're receiving this email because of your account on salsa.debian.org.