[Git][qa/jenkins.debian.net][master] r.d.n.: infom07+08 have the same disk layouts now

Holger Levsen (@holger) gitlab at salsa.debian.org
Thu Apr 17 12:19:43 BST 2025



Holger Levsen pushed to branch master at Debian QA / jenkins.debian.net


Commits:
e6fdb608 by Holger Levsen at 2025-04-17T13:19:27+02:00
r.d.n.: infom07+08 have the same disk layouts now

Signed-off-by: Holger Levsen <holger at layer-acht.org>

- - - - -


5 changed files:

- README.infrastructure
- TODO.infrastructure
- TODO.r.d.n
- bin/debrebuild_cache_limiter.sh
- update_jdn.sh


Changes:

=====================================
README.infrastructure
=====================================
@@ -58,7 +58,7 @@ The nodes are used for these jobs:
 
 * infom01-amd64, doing Debian r-b CI builds
 * infom02-amd64, doing Debian r-b CI builds, running in the future
-* infom07-i386, running rebuilderd at https://i386.reproduce.debian.net
+* infom07-i386, running rebuilderd-worker for https://i386.reproduce.debian.net
 * infom08-i386, running rebuilderd-worker for https://i386.reproduce.debian.net
 
 
@@ -76,21 +76,29 @@ $ openstack server list
 # accessing the console via web browser
 #
 $ openstack console url show 3f0aa9aa-14e3-4ff6-9616-a4d4ee7024e7
+# or
+$ openstack console url show infom09
+
+#
+# checking the console
+#
+$ openstack console log show infom09
 
 #
 # hard reboot
 #
-$ openstack server reboot --hard 3f0aa9aa-14e3-4ff6-9616-a4d4ee7024e7
+$ openstack server reboot --hard infom09
+
 
 #
 # booting a rescue image
 #
 $ openstack image list | grep -i rescue
-$ openstack server rescue --image 'Infomaniak Rescue Image' infom02
+$ openstack server rescue --image 'Infomaniak Rescue Image' infom09
 # or:
-$ openstack image rescue --image 01b7ce6e-5f6a-47c3-ad56-be0385698e40 infom02
+$ openstack server rescue --image 01b7ce6e-5f6a-47c3-ad56-be0385698e40 infom09
 # then, afterwards:
-$ openstack server unrescue infom02
+$ openstack server unrescue infom09
 
 more information: https://docs.infomaniak.cloud/ or openstack help or openstack server help
 to pre-calculate monthly costs: https://www.infomaniak.com/en/hosting/public-cloud/calculator
@@ -98,6 +106,11 @@ to pre-calculate monthly costs: https://www.infomaniak.com/en/hosting/public-clo
 https://api.pub1.infomaniak.cloud/horizon/auth/login/?next=/horizon/project/instances/
 username is the username, password is the password for that.
 
+#
+# setting a password for the rescue system
+#
+$ openstack server set --property rescue_pass=supersecretpw1234 infom09
+
 ===== resources usage
 
 $ openstack rating summary get
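The rescue-password example above embeds a literal password on the command line. A throwaway one can be generated instead; a minimal sketch, assuming openssl is installed (the server name infom09 is taken from the examples above, and the final openstack call mirrors the documented one):

```shell
# Generate a random throwaway password for the rescue system instead of
# typing a literal one into shell history. Assumes openssl is available.
RESCUE_PW="$(openssl rand -base64 12)"
echo "$RESCUE_PW"

# then (mirrors the command documented above):
# openstack server set --property rescue_pass="$RESCUE_PW" infom09
```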


=====================================
TODO.infrastructure
=====================================
@@ -19,9 +19,6 @@ ordered todo
 ionos*i386: increase ram to 16gb but don't increase workers yet
 infom08
 	remove debian (uid 1000) user (also on 01+02)
-	document kernel (non 32bit) variation (but bpo variation)
-	document amd64 userland (and pbuilder i386 userland)
-mix ionos+infom builders for workers?
 djm:
 	codethink nodes: make powercycle via djm work
 	infomaniak nodes: make powercycle via djm work


=====================================
TODO.r.d.n
=====================================
@@ -1,3 +1,11 @@
+fix riscv64 nodes:
+	- worker use special worker/%n directory, use that for all?
+	- keep connections on ssh tunnels, also for armhf
+	- riscv64-01 and -02 have the same ssh host keys...!?
+add more diskspace to i17
+	mv i1 + i11 ssd to hdds and use those two free hdds there
+	-> give more diskspace to i17
+
 RFH running r.d.n
 	- index.html:
 	  - should show $arch+all combined stats (MR189)
@@ -21,11 +29,6 @@ file *important* bugs about arch:all issues just needing no source change upload
 	we will NMU
 	please join the fun!
 	sudo apt install debian-repro-status ; debian-repro-status
-fix riscv64 nodes:
-	- worker use special worker/%n directory, use that for all?
-	- keep connections on ssh tunnels, also for armhf
-	- riscv64-01 and -02 have the same ssh host keys...!?
-	- bullseye chdist on riscv64? (see maintenance jobs)
 jenkins tests for desktops etc: run debian-repro-status and graph the results
 cleanup setup.html 
 https://r.d.n/ improvements
@@ -34,7 +37,6 @@ https://r.d.n/ improvements
 	add thanks to osuosl for o5 and ionos for jenkins.d.n too
 until debrebuild does it by itself:
 	file wishlist bug for --max-cache-size option alongside with --cache
-drop i7 extra partition to save infomaniak credits
 check rebuilderd uid+gid everywhere
 update README and THANKS
 	ppc64el


=====================================
bin/debrebuild_cache_limiter.sh
=====================================
@@ -15,8 +15,7 @@ case $HOSTNAME in
 	codethink*)		LIMIT=12  ;;
 	osuosl*-amd64)		LIMIT=333 ;;
 	osuosl*-ppc64el)	LIMIT=100 ;;
-	infom07*)		LIMIT=150 ;; # FIXME: drop extra partition again
-	infom08*)		LIMIT=50 ;;
+	infom07*|infom08*)	LIMIT=100 ;;
 	riscv64*)		LIMIT=180 ;;
 	*)		echo "Limit for $HOSTNAME not defined."
 			exit 1 ;;
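The consolidated `infom07*|infom08*` pattern above can be exercised on its own. A minimal sketch (the `limit_for` helper is hypothetical; the limits are the ones from the script after this commit) showing that both infom hosts now resolve to the same limit:

```shell
# Hypothetical helper mirroring the case statement in
# bin/debrebuild_cache_limiter.sh after this commit.
limit_for() {
    case "$1" in
        codethink*)        echo 12  ;;
        osuosl*-amd64)     echo 333 ;;
        osuosl*-ppc64el)   echo 100 ;;
        infom07*|infom08*) echo 100 ;;
        riscv64*)          echo 180 ;;
        *)                 echo "Limit for $1 not defined." >&2
                           return 1 ;;
    esac
}

limit_for infom07-i386   # → 100
limit_for infom08-i386   # → 100
```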


=====================================
update_jdn.sh
=====================================
@@ -972,8 +972,6 @@ deploy_rebuilderd_services() {
 case $HOSTNAME in
 	codethink01*|codethink02*)		deploy_rebuilderd_services worker 4
 						;;
-	infom07*)				deploy_rebuilderd_services worker 5
-						;;
 	infom08*)				deploy_rebuilderd_services worker 3
 						;;
 	ionos17*)				deploy_rebuilderd_services worker 8
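deploy_rebuilderd_services (defined earlier in update_jdn.sh) takes a role and an instance count. A simplified, hypothetical sketch of what "worker N" amounts to — the rebuilderd-worker@ unit name and the loop are assumptions for illustration, not the script's actual code:

```shell
# Hypothetical sketch: the real deploy_rebuilderd_services in update_jdn.sh
# is more involved; this only illustrates fanning out N numbered workers.
deploy_workers() {
    n="$1"
    i=1
    while [ "$i" -le "$n" ]; do
        # assumed instantiated-unit naming; not verified against the script
        echo "would enable rebuilderd-worker@$i.service"
        i=$((i + 1))
    done
}

deploy_workers 3
```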



View it on GitLab: https://salsa.debian.org/qa/jenkins.debian.net/-/commit/e6fdb60834cf5dae73f4bcec7de1c4032809b232





More information about the Qa-jenkins-scm mailing list