Reconsidering ProxMox
-
@VoIP_n00b said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
We do. Shouldn't, but we do because customers don't want to pay for better backups.
I was referring to this ^ Snapshots are not a backup, just like RAID is not a backup.
Except that's not what you said. That's correct and good thinking, but you said this in response to me discussing snapshots as a mechanism for backups, which is how Proxmox (and everyone else) uses them. That's the primary reason you take snapshots.
No one was talking about snapshots AS a backup, but snapshot capability being required for Proxmox to make backups (or other tools to do so.)
Snapshots are not a backup. I didn't say the customers didn't have backups, I said that they used snapshots to make their backups... as that's what almost everyone does, and most people swear by it. Hypervisor-level backups are always snapshot based, and almost all non-hypervisor, agent-based backups are as well, but not 100%. Windows Backup is, however.
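To make that concrete, this is roughly what the snapshot-driven flow looks like with Proxmox's vzdump; the VM ID and storage name below are placeholders:

```
# Snapshot-mode backup: vzdump takes a temporary snapshot so the VM keeps
# running while the backup is written out. VM ID 100 and the storage name
# "backupstore" are hypothetical.
vzdump 100 --mode snapshot --storage backupstore --compress zstd
```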
-
@VoIP_n00b said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
Yeah, just for some reason in that table it shows that you can't. My guess is that it is a typo, but I've not tested it so I can't confirm.
It's definitely not a typo. I am a member of a Proxmox user group, and someone just did an install with LVM and everything related to snapshots was grayed out. Then we figured out it was LVM and not LVM-thin. So regardless of whether it's "all the same", it matters to Proxmox.
Well that sucks. But why were they deploying LVM rather than LVM-thin?
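For reference, the distinction shows up directly in /etc/pve/storage.cfg. A sketch with hypothetical volume group and pool names; the plain lvm storage type gets no snapshot support, lvmthin does:

```
# /etc/pve/storage.cfg (storage and VG names are hypothetical)
# Plain LVM: fat provisioned, snapshots grayed out in the GUI.
lvm: fat-lvm
        vgname vg0
        content images,rootdir

# LVM-thin: thin provisioned, snapshots available.
lvmthin: thin-lvm
        vgname vg0
        thinpool data
        content images,rootdir
```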
-
@scottalanmiller said in Reconsidering ProxMox:
@biggen said in Reconsidering ProxMox:
@scottalanmiller What’s your storage configuration like?
I’ve been playing with it on ZFS Raid 1 mirror. Proxmox OS and VMs all on same mirror. Performance is “OK”. Not as good as MD with same setup though.
Wonder if it’s better to create separate Raid 1 ZFS pools. One for the Proxmox OS and one for the VMs.
We don't use ZFS - slow and we don't want its features (few actually do.) LVM is what we use. What is making you want to look at ZFS? It's not meant for speed and has little general-purpose use these days. It's not bad, but mostly it's deployed by accident when people aren't sure what it is. Then people swear by "features" that everything has, thinking they are unique to ZFS.
ZFS is a great system, with niche applicability.
I wanted to just mirror an SSD pair but thought the only way to "officially" do that with Proxmox was ZFS since they don't support MD.
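For what it's worth, the ZFS side of a plain mirrored pair is a one-liner when done by hand; the pool name and device paths here are hypothetical:

```
# Two-disk ZFS mirror; ashift=12 aligns to 4K sectors, which suits most SSDs.
# Pool name "tank" and the device paths are hypothetical.
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B
```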
-
@VoIP_n00b said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
If I snapshot a VM and copy the snapshot off somewhere else it's a backup.
Sure, agreed. But that's not what @scottalanmiller said.
It's exactly what I said, because I was discussing backups. I shouldn't have to explain to someone what a backup is every time. Your logic seems to be that because I said "backup" but didn't explain it to you like you were totally clueless, I must not have meant backup, because I should have assumed that you didn't know what backups were?
That makes no sense.
-
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
-
@biggen said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@biggen said in Reconsidering ProxMox:
@scottalanmiller What’s your storage configuration like?
I’ve been playing with it on ZFS Raid 1 mirror. Proxmox OS and VMs all on same mirror. Performance is “OK”. Not as good as MD with same setup though.
Wonder if it’s better to create separate Raid 1 ZFS pools. One for the Proxmox OS and one for the VMs.
We don't use ZFS - slow and we don't want its features (few actually do.) LVM is what we use. What is making you want to look at ZFS? It's not meant for speed and has little general-purpose use these days. It's not bad, but mostly it's deployed by accident when people aren't sure what it is. Then people swear by "features" that everything has, thinking they are unique to ZFS.
ZFS is a great system, with niche applicability.
I wanted to just mirror an SSD pair but thought the only way to "officially" do that with Proxmox was ZFS since they don't support MD.
Ah okay. That makes sense. In that case, you are free to disable all the ZFS features that cause problems and just treat it like any normal system. Then it won't use all that RAM.
But that's really good to note that MD isn't officially supported by the Proxmox layer. Not a big deal, as it is just Debian under there and it is officially supported by Debian. So you can use it and Proxmox doesn't care. But if you are paying for support, you'd definitely want to stick to ZFS RAID so that you have a throat to choke under contract.
I think either way is fine, just be aware that ZFS requires a lot more manual work from you to get running well. But if you are going to set up MD manually, it might still be easier to let Proxmox manage ZFS and just tune it by hand, rather than installing and setting up MD by hand.
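The biggest single tuning item is the ARC, which otherwise grows to roughly half of RAM by default. A minimal sketch; the 4 GiB cap is an arbitrary example value:

```
# Cap the ZFS ARC so it stops consuming host RAM (4 GiB here is an
# arbitrary example; size it to your workload).
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # rebuild the initramfs so the cap applies at boot
```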
-
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need fat LVM, you almost certainly also need to avoid LVM, because even that itty bitty overhead is still too much.
-
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need fat LVM, you almost certainly also need to avoid LVM, because even that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. With LVM you lose all of the advantages of qcow2, like libguestfs, the qemu agent, internal and external snapshots, etc.
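For anyone wanting to try it, this is what metadata preallocation looks like with qemu-img; the file name and size are hypothetical:

```
# qcow2 image with only the metadata preallocated: near-native write speed
# while keeping qcow2 features (file name and size are hypothetical).
qemu-img create -f qcow2 -o preallocation=metadata vm-disk.qcow2 100G
```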
-
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need fat LVM, you almost certainly also need to avoid LVM, because even that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. With LVM you lose all of the advantages of qcow2, like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
-
@stacksofplates said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need fat LVM, you almost certainly also need to avoid LVM, because even that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. With LVM you lose all of the advantages of qcow2, like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
It's the default actually. We use Qcow2 on LVM-Thin mostly.
-
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need fat LVM, you almost certainly also need to avoid LVM, because even that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. With LVM you lose all of the advantages of qcow2, like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
It's the default actually. We use Qcow2 on LVM-Thin mostly.
I meant the preallocation. I'd be surprised if they expose that because you can either fully preallocate and zero out the blocks, preallocate and just mark the beginning and end, or just preallocate the metadata.
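Those three behaviors map to qemu-img's preallocation modes; a quick reference, with hypothetical image names and sizes:

```
# The three preallocation behaviors described above (names/sizes hypothetical):
qemu-img create -f qcow2 -o preallocation=full     disk1.qcow2 50G  # zero out every block
qemu-img create -f qcow2 -o preallocation=falloc   disk2.qcow2 50G  # reserve blocks via fallocate(), no zeroing
qemu-img create -f qcow2 -o preallocation=metadata disk3.qcow2 50G  # allocate qcow2 metadata only
```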
-
I've been playing with Proxmox quite a bit. Still love xcp-ng, but Proxmox does some things that xcp-ng doesn't. A built-in host management interface, for instance, is wonderful.
I do wish Proxmox supported MD. I know I can easily configure it but don't really want to do that.
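For anyone who does want to configure it by hand, the MD side itself is short; the device names below are hypothetical, and the unsupported part is only that the Proxmox installer and GUI won't manage it:

```
# Two-disk MD RAID 1 mirror (device names hypothetical).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array definition
update-initramfs -u                              # so the array assembles at boot
```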
-
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need fat LVM, you almost certainly also need to avoid LVM, because even that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. With LVM you lose all of the advantages of qcow2, like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
It's the default actually. We use Qcow2 on LVM-Thin mostly.
I meant the preallocation. I'd be surprised if they expose that because you can either fully preallocate and zero out the blocks, preallocate and just mark the beginning and end, or just preallocate the metadata.
Gotcha. Prob not.
-
@biggen said in Reconsidering ProxMox:
I've been playing with Proxmox quite a bit. Still love xcp-ng, but Proxmox does some things that xcp-ng doesn't. A built-in host management interface, for instance, is wonderful.
I do wish Proxmox supported MD. I know I can easily configure it but don't really want to do that.
So far we have been decently happy. Some bizarre, quirky interfaces and non-obvious processes, but so much is included.
-
@VoIP_n00b said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
We do. Shouldn't, but we do because customers don't want to pay for better backups.
I was referring to this ^ Snapshots are not a backup, just like RAID is not a backup.
When you use the Proxmox backup feature, it creates a snapshot before it creates the backup. The same goes for creating a clone: you'll have the option to create a clone from a snapshot if one is available.
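A minimal sketch of that snapshot-then-clone flow with the qm CLI; the VM IDs and snapshot name are hypothetical:

```
# Take a snapshot of VM 100, then clone a new VM 101 from it
# (IDs and names are hypothetical).
qm snapshot 100 pre-clone
qm clone 100 101 --snapname pre-clone --name cloned-vm
```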
-
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
I was influenced by this video.
Youtube Video
-
@scottalanmiller Are you installing these at customer locations? Since you aren't using ZFS with Proxmox, are you doing hardware RAID and then using LVM backed storage on top of that for customers?
-
@biggen said in Reconsidering ProxMox:
Since you aren't using ZFS with Proxmox, are you doing hardware RAID and then using LVM backed storage on top of that for customers?
That's typically what we do, yes. Most SMB-scale systems today have enterprise hardware RAID, and it makes blind drive swaps on site far safer because you can have remote hands or vendor techs swap the drives for you.
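A sketch of that stack, layered on the single virtual disk the RAID controller presents; the device, volume group, pool, and storage names are all hypothetical:

```
# Hardware RAID exposes one virtual disk (here /dev/sdb, hypothetical);
# put an LVM-thin pool on it and register it with Proxmox.
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
lvcreate -L 500G --thinpool data vmdata
pvesm add lvmthin vm-thin --vgname vmdata --thinpool data --content images,rootdir
```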
-
Thanks @scottalanmiller. This is for my home system, so I guess I'll just run with a ZFS mirror since there's no HW RAID on this machine.