[Reproducible-builds] Performance of armhf boards

Vagrant Cascadian vagrant at debian.org
Mon Apr 18 02:55:23 UTC 2016


On 2016-04-17, Steven Chamberlain wrote:
> I was wondering what is the performance of various armhf boards, for
> package building.  Of course, the reproducible-builds team have a lot of
> stats already.  Below I'm sharing the query I used and the results in
> case anyone else is interested in this.

Thanks for crunching some numbers and sharing!

Somewhat similar numbers are calculated daily:

  https://jenkins.debian.net/view/reproducible/job/reproducible_nodes_info/lastBuild/console


> Figures will vary based on which packages were assigned to which
> node, as some are easier to build than others, but I hope over 21
> days that variance is smoothed out.

I wonder if 21 days is long enough to average things out; some builds
take 12 hours or more, while others take only a few minutes.


> Assuming the nodes had no downtime, we can compare pkgs_built over
> the 21-day period to assess performance.


There has definitely been downtime... particularly on odxu4c, ff4a and
bbx15. And many of the systems occasionally get rebooted for testing
and upgrading u-boot or the linux kernel.

FWIW, 15 of 18 nodes are running kernels from the official debian.org
linux packages in jessie, jessie-backports, sid or experimental! (only
9/18 for u-boot)


> Also avg_duration is meaningful, but will increase where the
> reproducible-builds network scheduled more concurrent build jobs on a
> node.  (Low avg_duration does not always mean high package throughput,
> it may just be doing fewer jobs in parallel.)
>
> Finally, the nodes' performance will depend on other factors such as
> storage device used, kernel, etc.

I've often wondered what the impacts are if "fast" nodes are mostly
paired with "slow" nodes for the 1st or 2nd builds, since each build job
is specifically tied to two machines. This was one of the factors that
led me toward building pools based on load (a rough sketch of the idea
follows below)... but I haven't had the time to implement it.
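
For illustration, here's a minimal sketch of what load-based pool
assignment might look like, in Python. The node names and throughput
figures here are hypothetical; a real scheduler would pull load or
builds/day stats from the nodes themselves:

  # Hypothetical sketch: split build nodes into "fast" and "slow" pools
  # based on measured throughput, so that first and second builds can be
  # paired within a pool instead of across fixed node pairs.

  nodes = {  # illustrative builds/day figures only
      "wbq0": 195.0,
      "cbxi4a": 188.2,
      "bpi0": 90.0,
      "wbd0": 60.0,
  }

  def build_pools(nodes):
      """Split nodes into two pools around the median throughput."""
      rates = sorted(nodes.values())
      threshold = rates[len(rates) // 2]  # median as a simple cutoff
      fast = sorted(n for n, r in nodes.items() if r >= threshold)
      slow = sorted(n for n, r in nodes.items() if r < threshold)
      return fast, slow

  fast_pool, slow_pool = build_pools(nodes)
  print("fast pool:", fast_pool)
  print("slow pool:", slow_pool)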


> I don't know whether to believe these figures yet!
>
>   * wbq0 is impossibly fast for just 4x1GHz cores, 2GB RAM...

My guess is that it is one of the most stable nodes; it only tends to
be rebooted for updates.


>   * odxu4 looks slightly faster than the other two.

That's tricky to track down: odxu4c has had stability issues, odxu4 has
had them to a lesser extent, and odxu4b has been relatively stable.

Many of the machines have different brand/model SSDs, so I was thinking
of comparing that against build stats on all the nodes to see if there's
a pattern. They're all pretty cheap SSDs, so I wouldn't be surprised if
there was significant variation in performance.


>   * cbxi4a/b seem no faster than cbxi4pro0 despite twice the RAM?

That is definitely surprising (although technically they only have
access to 3.8GB, but still!). They seem to be doing better according to
the daily average stats:

195.1 builds/day (13075/67) on cbxi4b-armhf-rb.debian.net
188.2 builds/day (14121/75) on cbxi4a-armhf-rb.debian.net
172.9 builds/day (22658/131) on cbxi4pro0-armhf-rb.debian.net
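
(For anyone wanting to reproduce these figures: assuming the numbers in
parentheses are total builds over days of data, the rate is just their
quotient, truncated to one decimal place. The truncation rather than
rounding is an assumption on my part, since plain rounding would give
188.3 for cbxi4a:)

  # Reproduce the builds/day figures above, assuming the numbers in
  # parentheses are (total builds, days of data) for each node.
  stats = {
      "cbxi4b": (13075, 67),
      "cbxi4a": (14121, 75),
      "cbxi4pro0": (22658, 131),
  }

  for node, (builds, days) in stats.items():
      rate = builds * 10 // days / 10  # truncate to one decimal place
      print(f"{rate:.1f} builds/day ({builds}/{days}) on {node}-armhf-rb.debian.net")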


>   * ff2a/b show USB3 SSD to be no faster than USB2?

All of the Firefly boards are USB2, I think. ff2a was running with only
512MB of RAM for a few weeks due to a u-boot issue I didn't notice
until recently.


>   * bbx15 may be able to handle more build jobs (low avg_duration).

That's really impressive, because it sometimes runs 6 concurrent builds
and only has two cores. It is a higher-performance Cortex-A15.


>   * bpi0 may be overloaded (high avg_duration).

That's curious. Not sure what to make of it.


>   * ff4a maybe had downtime, and seems to be under-utilised.

Yeah, it's regularly had multi-hour stretches of downtime, partly due
to stability issues and partly due to kernel/u-boot testing.


>   * rpi2b maybe had downtime, or has a slower disk than rpi2c.

Those numbers look surprising, especially since rpi2c has been rebooted
more often.

I'm also not sure if the rpi2 processors are running at full speed
since I switched to using the debian.org-provided kernels, which don't
have cpufreq support.
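
A quick way to check is to read the standard Linux cpufreq attributes
from sysfs, which simply won't exist if the kernel lacks cpufreq
support (a sketch in Python, to be run on the node itself):

  # Report the current frequency of each CPU, if cpufreq is exposed.
  from pathlib import Path

  for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
      cur = cpu / "cpufreq" / "scaling_cur_freq"
      if cur.exists():
          khz = int(cur.read_text())
          print(f"{cpu.name}: {khz / 1000:.0f} MHz")
      else:
          print(f"{cpu.name}: no cpufreq support exposed")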


>   * wbd0 slowness is likely due to the magnetic hard drive.

The disk was upgraded to an SSD at some point, although I suspect
performance issues due to wear-leveling, as it's a smaller SSD and TRIM
isn't supported over any of the USB-SATA adapters I've found.
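
Whether the kernel actually sees discard (TRIM) support through a given
adapter can be checked from sysfs (a sketch; "sda" is a hypothetical
device name):

  # Check whether the kernel reports discard (TRIM) support for a disk;
  # over most USB-SATA bridges this reads 0 even for an SSD.
  from pathlib import Path

  dev = "sda"  # hypothetical; adjust for the node in question
  limit = int(Path(f"/sys/block/{dev}/queue/discard_max_bytes").read_text())
  print(f"{dev}: discard supported" if limit > 0 else f"{dev}: no discard/TRIM")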


> Many thanks to Vagrant for hosting all these armhf nodes!

Thanks for taking a fresh look at it and suggesting some things to
look into!


live well,
  vagrant