BRRABill's Field Report With XenServer
-
I think this is because you are trying to update while VMs are running. If it did that, they would go down, and if they are not set to autostart they would not come back up at all. XenServer doesn't want to induce an unexpected outage.
-
Yeah I am wondering why it would assume
a) I wanted to migrate and
b) I had another server to migrate to
-
If you have the autostart flag set on your VMs, it makes the assumption that you're attempting a failover setup where the VMs can get migrated to another host.
Autostart was actually removed from XC as a default option; it must be enabled via the CLI.
Disable autostart on these VMs and try again.
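If it helps, this is roughly what that looks like from the CLI; the UUIDs and VM name below are placeholders, so check them against your own pool:
```
# Find the VM's UUID
xe vm-list name-label=<your-vm-name>

# Turn autostart off for that VM
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=false

# Optionally turn it off at the pool level as well
xe pool-param-set uuid=<pool-uuid> other-config:auto_poweron=false
```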
-
@DustinB3403 said:
If you have the autostart flag set on your VMs, it makes the assumption that you're attempting a failover setup where the VMs can get migrated to another host.
Damn awesome answer! Much better than mine: "If you're kicking off a hypervisor upgrade while a VM is running, wouldn't you assume you want it migrated?"
-
@BRRABill said:
Yeah I am wondering why it would assume
a) I wanted to migrate and
See my previous reply
b) I had another server to migrate to
It's OK to assume this; it wasn't a hard failure. If you had another host, it would migrate; if not, it would log it and move on.
-
You would, but the autostart flag is for after a power outage etc., not a migration, as the VM never gets "powered off."
Upgrading the host attempts to migrate the VM to any other host in the pool, as it's assuming you want to keep it running, since the autostart flag is enabled.
Otherwise it'll say "you need to shut down these VMs."
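If you wanted to do what the upgrade is attempting by hand, it would be something along these lines (the VM and host names are just placeholders):
```
# Live-migrate a running VM to another host in the same pool
xe vm-migrate vm=<vm-name-or-uuid> host=<target-host-name> live=true
```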
-
And it always has to shut down the VMs when it does an upgrade?
I know Windows requires a lot of reboots, but they generally run the updates while everything is running.
This is more informational knowledge. I could have just suspended the VMs as it asked. Just curious as to how the sausage is made. (Boy I am full of cliches today.)
-
I guess I don't understand the mechanics of upgrading here. I mean, you suspend the VM, take the 60 seconds to upgrade, and then restart it.
To migrate some VMs would take hours. Is that more in the scenario (which I am not in, even remotely) of VMs that can never be down?
-
@BRRABill Xen would simply put the VMs in a suspended state, unless there is a known need to reboot the host, in which case it tells you, "hey, move these VMs or shut them down."
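That suspend/resume is the same thing you could do yourself from the CLI if you ever wanted to, roughly like this (the VM name is a placeholder):
```
# Suspend the VM to disk, do the host work, then resume it
xe vm-suspend vm=<vm-name-or-uuid>
xe vm-resume vm=<vm-name-or-uuid>
```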
-
@BRRABill said:
I guess I don't understand the mechanics of upgrading here. I mean, you suspend the VM, take the 60 seconds to upgrade, and then restart it.
To migrate some VMs would take hours. Is that more in the scenario (which I am not in, even remotely) of VMs that can never be down?
The mechanic is that it looks at what you have and tries to determine what you are attempting to do, i.e. HA. If you don't have HA, you need to power off the VMs and disable autostart so the installation can complete.
-
@DustinB3403 said:
@BRRABill Xen would simply put the VMs in a suspended state, unless there is a known need to reboot the host, in which case it tells you, "hey, move these VMs or shut them down."
Got it.
And I am assuming XS works like other hypervisors on suspend? Conceptually at least?
-
@BRRABill I believe the job is failing because a VM that has autostart set can't fail over to another host. If it did, the original host, once it rebooted, would try to "autostart" all the VMs again, leaving you with the same VM active on two different hosts.
-
@BRRABill said:
@DustinB3403 said:
@BRRABill Xen would simply put the VMs in a suspended state, unless there is a known need to reboot the host, in which case it tells you, "hey, move these VMs or shut them down."
Got it.
And I am assuming XS works like other hypervisors on suspend? Conceptually at least?
Yes
-
I was discussing a bit offline with @DustinB3403 about switching my XS install over to boot off of USB.
I know it is the ML recommendation, but as always, I am questioning the thinking.
Perhaps it is just my scenario, but I'm not understanding the advantage of doing it.
In a server where all the data is stored on one array, what is the disadvantage of booting off this array as well? I understand that if the array goes down you could continue to boot off the USB, but if the array goes down, you have bigger issues to deal with anyway. As @scottalanmiller always says, XS is very easy to install. Set up the new array, install XS, and restore your VMs.
How does booting it off USB save any work in restoring the VMs? Maybe the 5 minutes it takes to install XS.
Now, if you are hosting hundreds of VMs and have to set them all back up, I could see. But it still would seem to be a substantial task if that array were to go down.
I understand there is a small storage hit, but XS is so small, I don't see the advantage there, either.
So, as another thread this week said, I'm not accepting, but questioning.
-
Simple answer is... if your array fails you want as much power as possible to repair it. If your XS install is on the failed array, you have a lot more work to do at a time when that's the last thing that you want. Losing your array AND the tools necessary to recover the array all at once really, really sucks. Considering that the fix is incredibly trivial, why give up so much power?
Also, if you need to roll back a failed patch or upgrade to XS and you are installed to the local storage, how do you do it? This is trivial with USB/SD storage.
-
@BRRABill As discussed offline, and as @scottalanmiller mentioned:
If you lose the array, which is also hosting the hypervisor installation, your ability to recover in a timely manner is greatly decreased. The array is down, and it hosts XS, so how do you repair it?
On top of that, your VMs are down and unusable unless you can migrate them.
By putting the XS installation on a USB/SD card you're not risking the array, and you keep the option to roll back a system update or swap out a dead USB/SD card.
Let the array act as block storage, not as the boot device.
-
@scottalanmiller said:
Simple answer is... if your array fails you want as much power as possible to repair it. If your XS install is on the failed array, you have a lot more work to do at a time when that's the last thing that you want. Losing your array AND the tools necessary to recover the array all at once really, really sucks. Considering that the fix is incredibly trivial, why give up so much power?
But how is it a lot more work? You've said many times XS is a breeze to install. Fix the array (which probably means recreating it from scratch if the drives have failed), reinstall XS, restore the VMs. I don't see how the USB saves you time here.
And what tools do you mean? Unless you are talking software RAID, which I was not considering in my argument, but is a valid point.
Unless this whole thing is about software problems on the array. That is something I was not thinking about: that you could inadvertently hose your array without any of its hardware ever failing.
Is that what you mean?
-
@BRRABill XS is super simple to install, but if your XS installation is fried (and built on the array) you have to reinstall XS and import from your backups.
But if you fry a USB drive, you just shut down the host and plug in your cloned XS bootable USB.
The VMs are intact, and you recover in the time it takes to shut down the server and connect a working USB (one that has your customizations, SR configuration, hardware settings and everything already in place).
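Keeping that clone on hand is nothing fancy, either; from any Linux box it's roughly this (the device and file names are examples, so triple-check which disk is which before running dd):
```
# Image the current XS boot USB to a file...
dd if=/dev/sdX of=xs-boot-backup.img bs=4M status=progress

# ...then write that image to a spare stick to keep as the clone
dd if=xs-boot-backup.img of=/dev/sdY bs=4M status=progress
```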
-
@BRRABill said in BRRABill's Field Report With XenServer:
@scottalanmiller said:
But how is it a lot more work? You've said many times XS is a breeze to install. Fix the array (which probably means recreating it from scratch if the drives have failed), reinstall XS, restore the VMs. I don't see how the USB saves you time here.
That XS is easy to install isn't the issue; that's beside the point. Dealing with the array, which is not easy to recreate, is the issue.
Installing XS after a failure is silly; have it ready to go before a failure.
Restoring an array when you've lost the array controller is a big deal; I can't understand why you'd even consider opting to have this in your process. This could easily be what kills you and causes data loss. It's time-consuming, complex, and a lot of risk. For what purpose?
Basically, you are looking at going against the advice of every hypervisor vendor and the industry, which recommend this for a reason, to do something only nominally advantageous (saving what, $10?). But... why? You are trying to downplay the advantages, but you are failing to explain why "just a little worse" isn't still "worse."
-
Do a Pros and Cons list. Pros to using SD / USB are solid. Maybe not epic, but they are there. Cons are... what? What factors are driving you to want to question a nearly universal industry standard from both the IT and the vendor sides?
Not that questioning is bad, but industry-accepted best practices normally exist for extremely strong reasons. Reinventing the wheel or approaching things from an "I must be a special case" angle is basically always wrong.