Proxmox install for use with a ceph cluster
-
@jt1001001 said in Proxmox install for use with a ceph cluster:
Can that controller do RAID 0 with one drive to do fake JBOD?
Yes, but it blocks SMART, so you never want to do it; it undermines the stability of the JBOD. There's always a standard controller on the MOBO for the JBOD connections.
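To make the SMART point concrete: behind many RAID controllers a plain smartctl query returns nothing useful, and you need the controller-specific passthrough flag. A hedged sketch (the device name and drive indexes below are examples, not from this thread):

```shell
# SMART queries through the RAID controller often fail or come back empty:
smartctl -a /dev/sda

# Behind an HP Smart Array, smartctl can sometimes still reach the
# physical drive with the cciss passthrough (0 = drive index):
smartctl -d cciss,0 -a /dev/sda

# Behind an LSI/Avago controller the equivalent is the megaraid passthrough:
smartctl -d megaraid,0 -a /dev/sda
```

Whether the passthrough works at all depends on the controller and firmware; with a plain motherboard SATA port none of this is needed.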
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
No, it's also not worth bothering with any more, as the OS can still see the hardware controller when booted from USB.
We could have told you that. True hardware RAID cannot be bypassed; it's physically in the path. If you cut it out, the drives have to vanish.
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
No, it's also not worth bothering with any more, as the OS can still see the hardware controller when booted from USB.
We could have told you that. True hardware RAID cannot be bypassed; it's physically in the path. If you cut it out, the drives have to vanish.
Your snarky remarks aren't helping today.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
@scottalanmiller said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
No, it's also not worth bothering with any more, as the OS can still see the hardware controller when booted from USB.
We could have told you that. True hardware RAID cannot be bypassed; it's physically in the path. If you cut it out, the drives have to vanish.
Your snarky remarks aren't helping today.
Not meant to be snarky, just explaining how it works. This way you don't have to test different scenarios, because just knowing it is hardware RAID tells you what you need to know. And knowing that you can remove the card completely and get the JBOD functionality you are seeking is key.
-
@scottalanmiller I can remove the card for sure, but it's not a practical lab exercise for what I'm working on.
I might do this in my personal lab, but not here, in this lab.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
@scottalanmiller I can remove the card for sure, but it's not a practical lab exercise for what I'm working on.
I might do this in my personal lab, but not here, in this lab.
You can just change the controller to HBA mode. In HBA mode it will work like an HBA.
On older cards you have to flash the firmware, on newer cards it's often just a setting.
From a hardware perspective, a RAID card is an HBA + more powerful hardware for parity calcs + a larger memory cache. Hang on a sec, I'll see if I can find the link on how to do it.
A newer way to set an HP controller to HBA mode:
https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
This is an older, longer way to do it:
YouTube Video
-
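For reference, on Smart Array controllers that support it, the HBA-mode flip from the linked article can also be done from the CLI. A hedged sketch with ssacli (the slot number is an assumption; existing logical drives have to be deleted first, and a reboot is required):

```shell
# List the controllers to find the right slot number:
ssacli controller all show

# Enable HBA mode (assumes the controller sits in slot 0):
ssacli controller slot=0 modify hbamode=on

# Verify after the reboot:
ssacli controller slot=0 show | grep -i "HBA Mode"
```

Whether `hbamode` is accepted at all depends on the controller generation and firmware; older cards only offer the flashing route from the video.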
@Pete-S said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
@scottalanmiller I can remove the card for sure, but it's not a practical lab exercise for what I'm working on.
I might do this in my personal lab, but not here, in this lab.
You can just change the controller to HBA mode. In HBA mode it will work like an HBA.
On older cards you have to flash the firmware, on newer cards it's often just a setting.
From a hardware perspective, a RAID card is an HBA + more powerful hardware for parity calcs + a larger memory cache. Hang on a sec, I'll see if I can find the link on how to do it.
A newer way to set an HP controller to HBA mode:
https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
This is an older, longer way to do it:
YouTube Video
I was just about to reply suggesting you turn on passthrough (HBA) mode on the controller, but you nailed it! On the other hand, Proxmox works fine with hardware RAID. As a matter of fact, this is what the vendor themselves recommend: https://pve.proxmox.com/wiki/Raid_controller. Software ZFS RAID can potentially be faster, but it needs to be configured properly, with direct access to the disks, plenty of RAM, and a ZIL for caching.
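To illustrate the "configured properly" part, here is a minimal sketch of giving ZFS whole disks once the controller is in HBA/passthrough mode (the pool name, device paths, and the SLOG device are all assumptions, not from the thread):

```shell
# Create a mirrored pool on whole disks; ashift=12 matches 4K-sector drives:
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

# Optionally add a fast device as a separate log (SLOG) for the ZIL,
# which helps sync-write-heavy VM workloads:
zpool add tank log /dev/nvme0n1

# Check the layout:
zpool status tank
```

The point of HBA mode is that ZFS sees the raw drives here, so SMART, TRIM, and ZFS's own error handling all work as designed.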
-
@taurex said in Proxmox install for use with a ceph cluster:
On the other hand, Proxmox works fine with hardware RAID. As a matter of fact, this is what the vendor themselves recommend: https://pve.proxmox.com/wiki/Raid_controller. Software ZFS RAID can potentially be faster, but it needs to be configured properly, with direct access to the disks, plenty of RAM, and a ZIL for caching.
That's possible. I haven't played with it yet so I don't know.
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
Because it HAS to default to SOMETHING. If you wanted anything else, you'd have selected it. So they default to what is safest and most common. Why pay for a hardware controller if you didn't have a use for it? The key features of a hardware controller are disabled with R0.
HPE doesn't sell an actual pass-through HBA. All their HBA devices are dual-use parts. To be fair, the equivalent line from Broadcom/Avago, like the 3008, can in theory be sold with RAID 1 no-cache support, but plenty of OEMs like Dell sell pure HBA firmware (sometimes called the IT firmware), such as the HBA330. Now, the Gen9s might have still offered controllers from both ODMs, but by Gen10 that was gone, and I'm pretty sure it was a lot earlier, like Gen7, that HPE last used Avago parts.
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
We could have told you that. True hardware RAID cannot be bypassed; it's physically in the path. If you cut it out, the drives have to vanish.
Some RAID controllers have a way of running pass-through. For the 3108-based cards (example: P730) there is a pass-through mode. Note that historically it's been "pretty damn buggy", and it took a lot of joint engineering to get it stable enough for our purposes (don't even dare try it with the 2208-based 6Gbps Avago parts). Now this is kind of moot, as everyone's using 3008 pure-HBA-firmware parts (we stopped certifying Broadcom RAID controllers for pass-through), and the other thing making it moot is that NVMe, running properly, talks directly to the PCIe bus. There are "tri-mode" RAID controllers that can RAID NVMe, but I just don't see the point; you bottleneck throughput pretty hard, pretty fast.
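For the 3108-style pass-through mentioned above, the usual tooling is Broadcom's storcli. A hedged sketch (the controller ID `/c0` and enclosure/slot numbers are assumptions; firmware support for JBOD varies by card and OEM):

```shell
# Check whether the controller firmware exposes JBOD at all:
storcli64 /c0 show all | grep -i jbod

# Enable JBOD (pass-through) mode on the controller:
storcli64 /c0 set jbod=on

# Flip an individual drive to JBOD (enclosure 252, slot 0 are examples):
storcli64 /c0/e252/s0 set jbod
```

Dell ships the same tool rebadged as perccli, so the syntax carries over on PERC-branded cards.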
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
Yes, but it blocks SMART, so you never want to do it; it undermines the stability of the JBOD. There's always a standard controller on the MOBO for the JBOD connections.
Also blocks TRIM commands (not that I trust the Linux TRIM driver to ATA drives, given how many one-off exceptions to disable it they've had to write).
Operationally it's messy because on a drive failure you have to go in with perccli etc. and rebuild the RAID 0s. We used to run this, but it was just a royal pain in the ass.
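The rebuild chore described above looks roughly like this with perccli (the controller ID and enclosure:slot numbers are assumptions for illustration):

```shell
# After swapping the failed disk, find its enclosure:slot ID:
perccli64 /c0 show

# Recreate the single-drive RAID 0 on the replacement
# (here enclosure 32, slot 5 as an example):
perccli64 /c0 add vd type=raid0 drives=32:5
```

With a real HBA none of this exists: the new drive just appears to the OS and the software RAID or Ceph OSD rebuild takes over.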