<div dir="ltr"><div>Thanks for the details, they do help to make the picture clearer :)</div><div><br></div><div>Regarding `delay “2” until the end of “1”`, my first shot would likely be scripting, as suggested in my first reply. Since I've mostly dealt with shell scripts (and SSH enabled on ESXi) in the past, I can only elaborate on that :) There are probably equivalents in the Remote API; I've just never used them.<br></div><div><br></div><div>As <a href="https://kb.vmware.com/s/article/1038043">https://kb.vmware.com/s/article/1038043</a> concisely suggests, you can use `<span style="font-family:Courier New">vim-cmd vmsvc/getallvms | grep &lt;vm name&gt;</span>` to learn which VMs are currently registered on a host, and IIRC that also reports the "volume" that each VM's disks are stored on. The volume is a UUID and may be mapped to a human-readable name; either way it is a persistent string that you can "hardcode" in a script's config. Whether from `<span style="font-family:Courier New">vim-cmd vmsvc/power.getstate &lt;vmid&gt;</span>` or maybe directly from that original query, you can see which VMs are running (replies to the former are quite parsable, like "Power state is &lt;on|off&gt;" IIRC).</div><div><br></div><div>So, there are many ways to skin a cat, but eventually your file servers can have a way to know whether any VMs are still alive and using their resources, and hold their own shutdown until none are. 
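To illustrate, a rough sketch only (not tested on a live ESXi host): the `datastore1` name and the helper function names are made up for the example, and the exact `getallvms` column layout and `power.getstate` wording may vary between ESXi versions, so adjust the awk/grep patterns to what your host actually prints.

```shell
#!/bin/sh
# Sketch: run in an ESXi shell (SSH enabled). "datastore1" is a placeholder;
# getallvms column layout may differ between ESXi versions.

# vmids of registered VMs whose files live on the given volume.
vmids_on_volume() {
    vim-cmd vmsvc/getallvms 2>/dev/null | grep -F "[$1]" | awk '{print $1}'
}

# True (exit 0) if the VM is still powered on.
# (Recent ESXi prints "Powered on"/"Powered off"; adjust the pattern if needed.)
vm_is_on() {
    vim-cmd vmsvc/power.getstate "$1" 2>/dev/null | grep -q 'Powered on'
}

# Block until no VM stored on the volume is running (poll every 5 s).
wait_for_volume_idle() {
    while :; do
        busy=0
        for id in $(vmids_on_volume "$1"); do
            vm_is_on "$id" && busy=1
        done
        [ "$busy" -eq 0 ] && return 0
        sleep 5
    done
}

# Usage: wait_for_volume_idle datastore1 && poweroff
```

The same loop could run over SSH from a file server's shutdown path, so that your step "2" only proceeds once step "1" has emptied the volume.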
Notably, this is probably something they should consider (or enforce) during "ordinary" shutdowns too, such as non-emergency maintenance, system updates, etc.</div><div><br></div><div>Jim<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jun 13, 2021 at 5:29 PM Arnaldo Viegas de Lima &lt;<a href="mailto:arnaldo@viegasdelima.com">arnaldo@viegasdelima.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;"><div>Hi and thanks for the reply!</div><div><br></div>The “real” scenario is a bit more complex. There is more than one file server, and they must be shut down only after the VMs are down. And there are 2 VMWare servers...<div><br></div><div>The UPS can be connected using a USB cable or SNMP. All involved machines and network switches are connected to the same UPS. A simplified scenario is:</div><div><br></div><div>1-Server “A” runs VMWare and has a USB connection to the UPS (3KVA APC)</div><div>2-Server “B” also runs VMWare</div><div>3-Servers “C” and “D” are file servers (Linux based) providing iSCSI disks to the VMWare servers</div><div>4-All machines are connected to one switch that is also connected to the UPS</div><div>5-All VMs running on servers “A” and “B” have their disks coming from servers “C” and “D”, except for the NUT VM</div><div>6-The VM running NUT runs from the local filesystem on server “A” (or “B”). This allows the file servers to be shut down independently. </div><div><br></div><div>The requirement is that before servers “C” and “D” can shut down, all VMs running on servers “A” and “B” must be properly shut down.</div><div>I have a script that shuts down (or powers down) all VMs on both VMWare servers, except for the NUT VM. 
The shutdown is also performed in a particular sequence, to allow for dependencies, for multiple machines at the same time (accounting for disk and CPU bottlenecks), and for VMs without VMWare Tools/open-vm-tools. The script uses the VMWare remote API (much like their Web based management). This part is great and works very nicely. </div><div><br></div><div>I also have a script, fired from the NUT VM, that commands a delayed shutdown of the host VMWare system (which should have no VMs running except the NUT one) and then shuts itself down (ahead of the VMWare host). It will also command the UPS off.</div><div> This is the shutdown command for the master upsmon. </div><div><br></div><div>Those 2 pieces work fine if tested independently.</div><div><br></div><div>My problem is that after shutting down all VMs, and before shutting down the NUT VM and the associated VMWare host, I must shut down the file servers “C” and “D”. I can do that by running upsmon in slave mode. But I must ensure that it does not happen before all VMs are properly down. So there are 3 synchronous steps, after OB+LB:</div><div><br></div><div>1-Shut down all VMs (except NUT’s)</div><div>2-Shut down the file servers</div><div>3-Command VMWare and the UPS to a delayed off, and shut down the NUT VM.</div><div><br></div><div>Since 3 is a script, I can have it wait for 2 to complete (by time, or using a network-based test). My main problem is to delay “2” until the end of “1”. </div><div><br></div><div>Thanks again</div><div><br></div><div>Arnaldo.</div><div><div><br><blockquote type="cite"><div>On Jun 13, 2021, at 10:38 AM, Jim Klimov &lt;<a href="mailto:jimklimov@gmail.com" target="_blank">jimklimov@gmail.com</a>&gt; wrote:</div><br><div><div dir="ltr"><div>Hello,</div><div><br></div><div>Just to clarify: you have one VM acting as a NUT server and talking to the UPS somehow (networked? pass-through USB/serial media?), another machine (physical? VM?) acting as the file server for all VMs - does that include the VM with NUT? And there's also a VMWare server itself? 
Are the VMs also NUT clients?</div><div><br></div><div>If the file server is physical, is it connected to VMWare directly or through another powered device like a switch (and is that UPS-protected too)? Similarly for a possible networked connection to the UPS.<br></div><div><br></div><div>Is there a particular reason not to make your file server the NUT server as well (more so if it is a separate physical machine)?</div><div><br></div><div>When the NUT ecosystem shuts down due to a critical power situation (roughly: too few power sources remain alive/online, and the others are on battery and low battery), there is a relatively short timeframe in which upsmon or similar (upssched, etc.) clients on secondary ("slave") systems shut down first, and when they are all gone or a configured time limit elapses, the client on the primary ("master") system begins its shutdown. If the setup involves a UPS smart enough, it is also told to power off and wait for "wall power" to appear and then come back up (maybe only after charging to a sufficiently safe level first).</div><div><br></div><div>So in the setup you propose,</div><div>1) power goes critical</div><div>2) your VMs (except the NUT server VM) begin to shut down - either as NUT clients, or via vim-cmd, esxcli or similar scripting... and probably requiring open-vm-tools or similar to process the shutdown request gracefully - not sure there is a "virtual ACPI power button" in VMWare.</div><div>3) all VMs are down, and a time-sensitive end-game occurs:</div><div>* the NUT server VM tells itself to shut down (and tells the UPS to power off, hopefully with a sufficiently large timeout if it has a way to set that),</div><div>* the file server is told to go down (and yank the disk from the NUT server VM?)</div><div>* the VMWare server is told to go down (and yank CPU/RAM/... 
from the NUT server VM)</div><div><br></div><div>At least with SSH allowed on the VMWare server and some vim-cmd scripting, or possibly with VMware PowerCLI (never used that myself), you should be able to find which VMs rely on the datastore or "volume" served by the file server, and which of those VMs are running. If you script that into the shutdown routine of the file server, and if the shutdown timeframe is not dictated by its OS (e.g. disable, or make very long, the timeouts in systemd), you can block the file server shutdown from proceeding until no VMs served from its disks are running. Assuming that the networked connection survives long enough, it can detect that your NUT VM went down, and so proceed with its own shutdown, which began when power went critical for every secondary client.</div><div><br></div><div>Maybe similar scripting is feasible in the NUT client that runs on the VMWare server itself, to drive its shutdown after all VMs have gone down. Otherwise the file server, as it goes down, if it has a session to check VM states anyway, could tell the VMWare server to initiate its shutdown.</div><div><br></div><div>But it all looks like a lot of ropes hanging around and waiting for stuff to go wrong - and during an outage, with I/O stress to flush the disks, delays from orderly service-stack shutdowns (DB users first, databases next, ...) and so on, things are very likely to go wrong :)</div><div><br></div><div>It feels that the physical machine, likely your file server, is better positioned to be the NUT server and so shut down last, after all its clients have gone down or the timeout expired or the battery die&lt;snip&gt;.... 
At least, in such a setup it relies only on connectivity between upsd and the upsmon clients, and on the number of clients still alive (last heartbeat recent, connection not terminated via the protocol), and possible loss of connectivity during a known power outage is something NUT already has logic for.<br></div><div><br></div><div>Jim<br></div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Jun 12, 2021 at 11:15 PM Arnaldo Viegas de Lima &lt;<a href="mailto:arnaldo@viegasdelima.com" target="_blank">arnaldo@viegasdelima.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
I’m setting up NUT on a VMWare server (running in a VM) to shut down the server as well as the companion file server (running Linux), which will be running upsmon in slave mode. All VMs’ disks come from the file server, so all VMs must properly terminate ahead of the file server.<br>
<br>
The order of the tasks needed for a proper shutdown is:<br>
<br>
1-Shut down all running VMs (except the one controlling the UPS) and confirm they are down<br>
2-When all VMs are terminated, signal slaves (FSD)<br>
3-When the file server is down, properly terminate VMWare.<br>
<br>
Any ideas on how to sync these events?<br>
<br>
Thanks in advance,<br>
<br>
Arnaldo.<br>
_______________________________________________<br>
Nut-upsuser mailing list<br>
<a href="mailto:Nut-upsuser@alioth-lists.debian.net" target="_blank">Nut-upsuser@alioth-lists.debian.net</a><br>
<a href="https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/nut-upsuser" rel="noreferrer" target="_blank">https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/nut-upsuser</a><br>
</blockquote></div>
</div></blockquote></div><br></div></div>
</blockquote></div>