Reconsidering ProxMox
-
@biggen said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@biggen said in Reconsidering ProxMox:
@scottalanmiller What’s your storage configuration like?
I’ve been playing with it on a ZFS RAID 1 mirror. Proxmox OS and VMs are all on the same mirror. Performance is “OK”, though not as good as MD with the same setup.
Wonder if it’s better to create separate RAID 1 ZFS pools. One for the Proxmox OS and one for the VMs.
We don't use ZFS - it's slow and we don't want its features (few actually do.) LVM is what we use. What is making you want to look at ZFS? It's not meant for speed and has little general-purpose use these days. It's not bad, but mostly it's deployed by accident when people aren't sure what it is. Then people swear by "features" that every system has, thinking they are unique to ZFS.
ZFS is a great system, with niche applicability.
I wanted to just mirror an SSD pair but thought the only way to "officially" do that with Proxmox was ZFS since they don't support MD.
Ah okay. That makes sense. In that case, you are free to disable all that ZFS stuff that causes problems and just treat it like any normal system. Then it won't bloat your RAM usage.
But it's really good to note that MD isn't officially supported by the Proxmox layer. Not a big deal, as it is just Debian under there and MD is officially supported by Debian. So you can use it and Proxmox doesn't care. But if you are paying for support, you'd definitely want to stick to ZFS RAID so that you have a throat to choke under contract.
I think either way is fine, just be aware that ZFS requires a lot more manual work from you to get running well. But if you are going to set up MD manually, it might still be easier to let PM manage ZFS and just tune it by hand, rather than installing and setting up MD by hand.
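The biggest single tune there is usually capping the ARC so ZFS doesn't hold on to RAM you meant for the VMs. A rough sketch on a Proxmox/Debian host (the 4 GiB cap is just an example, size it to your box):
  # Persist a 4 GiB ARC cap (value is in bytes)
  echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
  update-initramfs -u
  # Apply it right away without a reboot
  echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max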
-
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need LVM fat, you almost certainly also need to avoid LVM because that itty bitty overhead is still too much.
-
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need LVM fat, you almost certainly also need to avoid LVM because that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. And with LVM you lose all of the advantages of qcow2 like libguestfs, the qemu agent, internal and external snapshots, etc.
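For reference, preallocation is just a flag at image creation time; something like this (path and size are placeholders, adjust to taste):
  # Thin image but with the metadata preallocated - most of the allocation overhead goes away
  qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/vm1.qcow2 100G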
-
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need LVM fat, you almost certainly also need to avoid LVM because that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. And with LVM you lose all of the advantages of qcow2 like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
-
@stacksofplates said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need LVM fat, you almost certainly also need to avoid LVM because that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. And with LVM you lose all of the advantages of qcow2 like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
It's the default actually. We use Qcow2 on LVM-Thin mostly.
-
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need LVM fat, you almost certainly also need to avoid LVM because that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. And with LVM you lose all of the advantages of qcow2 like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
It's the default actually. We use Qcow2 on LVM-Thin mostly.
I meant the preallocation. I'd be surprised if they expose that because you can either fully preallocate and zero out the blocks, preallocate and just mark the beginning and end, or just preallocate the metadata.
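For anyone following along, those three map roughly to qemu-img's preallocation modes:
  qemu-img create -f qcow2 -o preallocation=full disk.qcow2 100G       # allocate and zero every block up front
  qemu-img create -f qcow2 -o preallocation=falloc disk.qcow2 100G     # reserve the space without writing zeros
  qemu-img create -f qcow2 -o preallocation=metadata disk.qcow2 100G   # allocate only the qcow2 metadata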
-
I've been playing with Proxmox quite a bit. Still love xcp-ng but Proxmox does some things that xcp-ng doesn't. A built-in host management interface, for instance, is wonderful.
I do wish Proxmox supported MD. I know I can easily configure it but don't really want to do that.
-
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need LVM fat, you almost certainly also need to avoid LVM because that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. And with LVM you lose all of the advantages of qcow2 like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
It's the default actually. We use Qcow2 on LVM-Thin mostly.
I meant the preallocation. I'd be surprised if they expose that because you can either fully preallocate and zero out the blocks, preallocate and just mark the beginning and end, or just preallocate the metadata.
Gotcha. Prob not.
-
@biggen said in Reconsidering ProxMox:
I've been playing with Proxmox quite a bit. Still love xcp-ng but Proxmox does some things that xcp-ng doesn't. A built-in host management interface, for instance, is wonderful.
I do wish Proxmox supported MD. I know I can easily configure it but don't really want to do that.
So far we have been decently happy. Some bizarre, quirky interfaces and non-obvious processes, but so much is included.
-
@VoIP_n00b said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
We do, shouldn't, but we do because customers don't want to pay for better backups.
I was referring to this ^ Snapshots are not a backup, just like RAID is not a backup.
When you use the Proxmox backup feature, it creates a snapshot before it creates the backup. The same goes when creating a clone: you’ll have the option to create the clone from a snapshot if one is available.
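If you drive it from the CLI instead of the GUI, it's vzdump doing that work; roughly like this (VM ID and storage name are placeholders):
  # Snapshot-mode backup of VM 100 to a storage called "backups", zstd compressed
  vzdump 100 --mode snapshot --storage backups --compress zstd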
-
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
I was influenced by this video.
Youtube Video
-
@scottalanmiller Are you installing these at customer locations? Since you aren't using ZFS with Proxmox are you doing hardware RAID and then using LVM backed storage on top of that for customers?
-
@biggen said in Reconsidering ProxMox:
Since you aren't using ZFS with Proxmox are you doing hardware RAID and then using LVM backed storage on top of that for customers?
That's typically what we do, yes. Most SMB-scale systems today have enterprise hardware RAID, and it makes blind drive swaps on site far safer because you can have remote hands or vendor techs swap the drives for you.
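In practice that's just plain LVM (or LVM-thin) on top of the RAID controller's virtual disk; a rough sketch, assuming the array shows up as /dev/sdb and with storage names of my own choosing:
  pvcreate /dev/sdb                                     # the hardware RAID virtual disk
  vgcreate vmdata /dev/sdb
  lvcreate --type thin-pool -l 95%FREE -n data vmdata
  pvesm add lvmthin vm-thin --vgname vmdata --thinpool data --content images,rootdir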
-
Thanks @scottalanmiller. This is for my home system, so I guess I'll just run with a ZFS mirror since there's no HW RAID on this machine.
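The installer's ZFS RAID1 option should handle that, and afterwards it's worth a quick check that the mirror is healthy (rpool is the installer's default pool name):
  zpool status rpool      # both disks should show ONLINE under the mirror vdev
  zpool list rpool        # quick look at capacity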
-
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need LVM fat, you almost certainly also need to avoid LVM because that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. And with LVM you lose all of the advantages of qcow2 like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
It's the default actually. We use Qcow2 on LVM-Thin mostly.
Hi Scott (and everyone else),
I've been playing around with Proxmox for a week or so. I haven't used LVM thinpools before, so I wanted to check if I'm making sense here. Proxmox doesn't let me put a qcow2 directly onto a thinpool (like the local-lvm created by default).
Do I need to create a volume group on top of the thinpool, and mount that as directory storage, to be able to use qcow2 on LVM-Thin as you're doing? Cheers!
-
@Doyler3000 said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
@scottalanmiller said in Reconsidering ProxMox:
@stacksofplates said in Reconsidering ProxMox:
After all of this, I still don't get the use case for LVM backed VMs. Other than possibly, possibly a super IO heavy database. Even then, it's questionable.
That's roughly it, and yes, it remains questionable at the best of times.
In the cases where you need LVM fat, you almost certainly also need to avoid LVM because that itty bitty overhead is still too much.
Preallocated qcow2 images are 99% as fast as LVM volumes. Even with preallocating just the metadata I've had almost native disk write speeds. And with LVM you lose all of the advantages of qcow2 like libguestfs, the qemu agent, internal and external snapshots, etc.
that said, no idea how the eff you do that with ProxMox. That was just KVM.
It's the default actually. We use Qcow2 on LVM-Thin mostly.
Hi Scott (and everyone else),
I've been playing around with Proxmox for a week or so. I haven't used LVM thinpools before, so I wanted to check if I'm making sense here. Proxmox doesn't let me put a qcow2 directly onto a thinpool (like the local-lvm created by default).
Do I need to create a volume group on top of the thinpool, and mount that as directory storage, to be able to use qcow2 on LVM-Thin as you're doing? Cheers!
It's easier to do on a vanilla KVM setup. Proxmox moved away from creating qcow2 a while ago; on LVM-Thin you end up creating raw VM disk images (logical volumes). You can import a qcow2, see https://www.republicofit.com/topic/21751/import-a-qcow2-into-proxmox
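For the impatient, the usual import flow looks roughly like this (VM ID, path, and storage name are placeholders; the imported volume name it prints may differ):
  qm importdisk 100 /path/to/old-disk.qcow2 local-lvm
  # The imported volume shows up as an "Unused Disk" on the VM; attach it, e.g.:
  qm set 100 --scsi1 local-lvm:vm-100-disk-1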
-
Yeah, I'm considering moving from vanilla KVM, particularly for the simplified backup and restore options. Though I haven't yet tried the new Proxmox Backup Server that's just been released. That might make the move more compelling.
Is there a philosophy behind them moving away from creating qcow2?
The method of creating a volume group on the thinpool and creating the qcow2 files in that works for me. Just wondered if anyone had thoughts on whether that's the right thing to do.
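Concretely, that approach boils down to something like this (strictly a thin LV with a filesystem on it rather than a new volume group, but the same idea; names assume the stock pve/data layout and are only examples):
  lvcreate -V 200G -n qcowstore --thinpool pve/data     # thin volume carved out of the default pool
  mkfs.ext4 /dev/pve/qcowstore
  mkdir -p /mnt/qcowstore
  mount /dev/pve/qcowstore /mnt/qcowstore               # add an fstab entry to make it persistent
  pvesm add dir qcowstore --path /mnt/qcowstore --content images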
-
@Doyler3000 said in Reconsidering ProxMox:
Is there a philosophy behind them moving away from creating qcow2?
Likely just for performance. Since it's meant to be an appliance, qcow2 doesn't offer a big advantage.
-
@Doyler3000 said in Reconsidering ProxMox:
The method of creating a volume group on the thinpool and creating the qcow2 files in that works for me. Just wondered if anyone had thoughts on whether that's the right thing to do.
Nothing wrong with that at a technical level, but it makes no sense to try to work around ProxMox's mechanisms if you are using ProxMox.