[Nut-upsdev] nut_clock_* unit test ideas
Arnaud Quette
aquette.dev at gmail.com
Mon Oct 22 08:45:39 UTC 2012
2012/10/16 <VaclavKrpec at eaton.com>
> Hello Arnaud, Charles,
>
Hi Vasek,
> regarding the nut_clock_* iface unit testing,
> I came to the following conclusion:
>
> 1/ It is IMO generally impossible to do deterministic
> unit test; the problem in question is inherently
> non-deterministic (I mean non-det. by nature).
> Justification: the main reason is that we can't
> perfectly predict the CPU time spent on computation,
> mainly because we have multi-tasking; there might
> be any number of context switches between almost
> any states, so if we measure time with us/ns precisions,
> we can't expect fitting times at that scale.
>
Charles has already answered, and this is also my stance:
we are not in a hard real-time context.
We just want to ensure that a 5-second timer/sleep/whatever will actually
fire within 10 %, rounded out to the nearest whole-second boundaries,
i.e. 5 seconds +/- 10 % => between 4 and 6 seconds actually.
> 2/ This means that there will always be a possibility
> of false-positive unit test case failure; it is therefore
> inevitable to try to
> i) decrease the case possibility to minimum
> ii) think about whether we want to invalidate a build if
> such a UT fails
>
As told above, 10 % is fine.
Up to 20 % (below that, we are still within the rounded boundaries), we are
more in a warning area.
Beyond that, the deviation is an error.
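These thresholds could be captured in a small verdict function; a sketch, with an illustrative enum that is not part of any existing NUT code:

```c
#include <math.h>

/* Hypothetical verdict for one timing measurement, following the
 * thresholds above: <= 10 % deviation is OK, <= 20 % is a warning,
 * anything beyond is an error. */
typedef enum { UT_OK, UT_WARNING, UT_ERROR } ut_verdict;

static ut_verdict classify_deviation(double expected, double measured)
{
    double dev = fabs(measured - expected) / expected;

    if (dev <= 0.10)
        return UT_OK;
    if (dev <= 0.20)
        return UT_WARNING;
    return UT_ERROR;
}
```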
> ad i) The only thing I came up with is usage of statistic
> techniques: we should randomise the unit test and/or
> run it multiple times. The meaningful deviation from
> expected result shall be the avg/mean deviation (with all
> the standard statistic tricks applied --- like extremes
> cuts etc). I also suggest that if there's a failure,
> we could re-run the very same test case another time to see
> whether the next run fails, again (thus eliminating possibility
> of a temporary influence etc).
>
The multi-run is indeed a good thing.
I'm not sure about the usefulness of the randomisation, though.
For example, 5 runs with 4 under 10 % and 1 above 20 % would be considered
good...
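One way to implement the averaging with "extremes cuts" is a trimmed mean over the per-run deviations; a sketch, with illustrative (non-NUT) helper names:

```c
#include <stdlib.h>

/* Comparison callback for qsort() on doubles. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;

    return (x > y) - (x < y);
}

/* Apply the "extremes cuts": sort the per-run deviations, drop the
 * single lowest and highest value, and average the rest (needs
 * n >= 3).  Note the caveat above still applies: one wild run can be
 * hidden by the trimming, so the raw maximum may be worth checking
 * separately as well. */
static double trimmed_mean(double *dev, size_t n)
{
    double sum = 0.0;
    size_t i;

    qsort(dev, n, sizeof(double), cmp_double);
    for (i = 1; i < n - 1; i++)
        sum += dev[i];

    return sum / (double)(n - 2);
}
```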
> ad ii) I think the above should be quite enough to eliminate
> false positive UT failure reports. However, it's still not
> certain (and can never be). Therefore, we should decide whether
> we want to be pessimistic or optimistic by default, i.e. whether
> to invalidate the build or to just send a warning or something...
>
> 3/ The methods I'd use are actually quite simple; for the timestamps
> difference (i.e. typical usage of the monotonic clock) validity
> checking, I'd simply use select with no descriptors and a timeout.
> I'd take a timestamp before, after and compare the diff. with
> the timeout.
> In pseudo-code:
>
> nut_clock_timestamp(&ts);
>
> select(0, NULL, NULL, NULL, &to);
>
> diff = nut_clock_sec_since(&ts);
>
> ut_pass = fabs((to.tv_sec + to.tv_usec / 1000000.0) - diff) < sigma;
>
> (Note that this pseudo-code doesn't include application of the statistic
> methods discussed above.)
>
works for me.
> I believe that circa 10 us precision is actually by far the best we can
> hope for (actually, I think that we could accept even worse precision regarding
> our practical needs). This doesn't apply to the case of using the
> time_t fallback, of course.
>
> As for the RTC unit testing, I think that we might be OK with one-second
> precision; if so, we can simply exec "date +%s" to make an "external
> authority" to give us seconds since the Epoch and compare this with
> what our iface provides. Note that there is generally the problem
> of "what if the UT is executed exactly at the time of clock change...?"
> Yes, that's a bugger (sorry for my Greek), but the repeated execution
> should filter such problems out (see 2/i).
>
>
> ... and that's it, actually.
> I'll be more than glad to know if you have better ideas;
> I know that's not much, but right now, it's all I've got
> (I mean all what can be simply and safely done automatically upon
> every build.)
>
that is already a good base for unit tests.
> Note that the statistic methods make the UT quite time consuming;
> also note that it would actually be great if we could automate
> tests like "shift the system clock and check that the diff.
> of 2 timestamps is still OK" --- i.e. an automatic UT of the monotonic
> clock. However, I wouldn't want to actually mess with the buildslave
> system clock... It could be done on a virtual machine though;
> so we would need a VM/QEMU and do make test, there...
> That could be possible, but also quite complicated.
>
Yup, that's still a point on which I don't have a conclusion yet:
QRT (QA regression testing) should have a test that sets the time back (and/or
forth), and checks that this has no incidence on NUT timers.
The thing is that executing that from the buildslaves may potentially break
the system, or generate issues.
This needs to be checked, but spawning a VM to execute this may solve the
point.
> And that's indeed all I have.
>
> As always, any comments, suggestions etc welcome,
>
>
done... with a bit of lag.
cheers,
Arnaud
--
Linux / Unix / Opensource Engineering Expert - Eaton -
http://opensource.eaton.com
Network UPS Tools (NUT) Project Leader - http://www.networkupstools.org
Debian Developer - http://www.debian.org
Free Software Developer - http://arnaud.quette.fr