[Nut-upsdev] Changes to upscli_connect (and general discovery)

Arnaud Quette aquette.dev at gmail.com
Thu Jun 30 20:52:07 UTC 2011


2011/6/29 Arjen de Korte <nut+devel at de-korte.org>

> Citeren Arnaud Quette <aquette.dev at gmail.com>:
>
>
>  I guess I see the scanning code as a stopgap way to contact "legacy"
>>> servers (or what would be legacy after some discovery protocol like mDNS
>>> is
>>> set up), and either timeouts or non-blocking is just a kludge to make
>>> that
>>> work a little better. And isn't opening a non-blocking socket just a way
>>> to
>>> split socket connection and protocol initialization?
>>>
>> indeed.
>>
>
> This isn't needed. Probing ranges of hosts listening on an arbitrary port
> doesn't require the upsclient library. For the hits, connect through the
> usual upscli_connect method, which by then you know will not block.


Well, though I agree with your point of view (a simple connect to check for
possible NUT presence, with timeout management), there is value in providing
the target audience (i.e. developers here) with a tryconnect() method backed
by common code in the library.

Using a simple connect, you are also still open to false positives, even
though we are using an IANA-registered port.
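
For instance, a scanner could still ask the remote end to identify itself
before counting a successful connect as a hit; IIRC upsd answers the
protocol's VER command with a one-line version banner, so something along
these lines (a hypothetical helper, not upsclient code) would weed out most
false positives:

/* Hypothetical check, not part of upsclient: ask the peer for its
 * version and see whether it answers the way upsd would.  'fd' is an
 * already-connected socket; a real implementation would also want a
 * read timeout here. */
#include <string.h>
#include <unistd.h>

static int looks_like_upsd(int fd)
{
	char buf[128];
	ssize_t n;

	if (write(fd, "VER\n", 4) != 4)
		return 0;

	n = read(fd, buf, sizeof(buf) - 1);
	if (n <= 0)
		return 0;

	buf[n] = '\0';

	/* upsd replies with a one-line banner, while an unrelated service
	 * listening on the same port will typically stay silent or answer
	 * with something else entirely */
	return strncmp(buf, "ERR", 3) != 0;
}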

So I don't see why adding roughly 40 lines out of 1000, as a new feature
that keeps the current behavior intact, would pose a problem. We are still
within the NUT source code, subject to the same policy against code
duplication and the same drive to keep maintenance low.
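
To make the idea more concrete, here is a minimal sketch of what such a
helper could look like (the name, signature and timeout parameter are only
assumptions, not a proposed API): a non-blocking connect() bounded by a
timeout, so a scanner probing many addresses never hangs on a dead host.

#include <errno.h>
#include <fcntl.h>
#include <netdb.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Connect to host:port without blocking longer than 'timeout'.
 * Returns a connected socket descriptor, or -1 on failure/timeout. */
int try_connect(const char *host, const char *port, struct timeval *timeout)
{
	struct addrinfo hints, *res;
	int fd, flags, err = 0;
	socklen_t len = sizeof(err);
	fd_set wfds;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;
	hints.ai_socktype = SOCK_STREAM;

	if (getaddrinfo(host, port, &hints, &res) != 0)
		return -1;

	fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
	if (fd < 0) {
		freeaddrinfo(res);
		return -1;
	}

	/* switch to non-blocking mode so connect() returns immediately */
	flags = fcntl(fd, F_GETFL, 0);
	fcntl(fd, F_SETFL, flags | O_NONBLOCK);

	if (connect(fd, res->ai_addr, res->ai_addrlen) < 0 && errno != EINPROGRESS) {
		close(fd);
		freeaddrinfo(res);
		return -1;
	}
	freeaddrinfo(res);

	/* wait, at most 'timeout', for the connection attempt to complete */
	FD_ZERO(&wfds);
	FD_SET(fd, &wfds);
	if (select(fd + 1, NULL, &wfds, NULL, timeout) <= 0) {
		close(fd);
		return -1;	/* timed out, or select() failed */
	}

	/* check whether the asynchronous connect actually succeeded */
	if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0 || err != 0) {
		close(fd);
		return -1;
	}

	/* restore blocking mode for whoever uses the socket afterwards */
	fcntl(fd, F_SETFL, flags);
	return fd;
}

The existing upscli_connect() could then be kept as a thin wrapper calling
such a helper with no (i.e. infinite) timeout, so current callers would not
see any behavior change.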


>  as for the compat, IIRC Fred proposed a new _tryconnect() method, upon
>> which the standard _connect() would be mapped, with no behavior change.
>>
>
> It is still unclear to me what we're trying to accomplish here, since
> auto-configuration is never going to work in environments where it is most
> needed (multi-UPS).
>

Charles already partially answered that: it's about making configuration
easier, but not automagic!

Now, I have to provide more details, since this point was not much
highlighted in https://wiki.ubuntu.com/ServerOneiricInfraPower.

All of this is mostly related to the cloud; at least, that's the point of
focus for a first run.

1) The first part is about deployment, with an "infrastructure server" (IS)
that does provisioning and acts as a PXE server.

There, nut-scanner on the IS scans the network and presents the user with a
list of UPSs, PDUs and IPMI-managed servers' PSUs. Nodes deployed through
PXE get:
- NUT installed automatically, if needed (i.e. provisioned)
- (pre)configuration pushed from the IS through puppet / mcollective
- (post)configuration appended, in case there are also (or only) local
devices, by running a post-install script that also uses nut-scanner (see
the example below)
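
For illustration only (the section name, driver and description are
placeholders, not what nut-scanner would actually emit), that appended
configuration could be an ordinary ups.conf section for a locally attached
USB unit:

[local-ups]
	driver = usbhid-ups
	port = auto
	desc = "local UPS found by the post-install scan"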

2) The second part is about allowing third-party supervision tools and NMS
to discover power devices and NUT instances, and to track the whole
PowerChain, from the software down to the mains.

In other words, this is just a quicker and lazier way to reach large-scale
network monitoring, while waiting for a Power Device MIB that could suit our
present and future needs.
(If somebody is still reading this mail and is interested in helping there,
ping me back.)

cheers,
Arnaud

