Proxmox install for use with a ceph cluster
-
Anyway, I'm now installing to a 32 GB USB drive just to test and see how it all works.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Turns out the P420i defaults to setting up an R1 for you as a "nicety" if you don't configure it. But if I don't want an R1, why would it do that?!
Maybe I want an R0.
Because it HAS to default to SOMETHING. If you wanted anything in particular, you'd have selected it. So they default to what is safest and most common. Why pay for a hardware controller if you didn't have a use for it? The key features of a hardware controller are disabled with R0.
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Turns out the P420i defaults to setting up an R1 for you as a "nicety" if you don't configure it. But if I don't want an R1, why would it do that?!
Maybe I want an R0.
Because it HAS to default to SOMETHING. If you wanted anything in particular, you'd have selected it. So they default to what is safest and most common. Why pay for a hardware controller if you didn't have a use for it? The key features of a hardware controller are disabled with R0.
Not when I've expressly wiped the configuration, on purpose. The system is actively attempting to bypass the settings I had configured.
The argument of "why spend money on expensive hardware" also doesn't hold up because this equipment is all lab equipment that has no cost to acquire. Pulled it out of a decom and used it for this purpose.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Not when I've expressly wiped the configuration, on purpose. The system is actively attempting to bypass the settings I had configured.
It can't; it only does that if you forget to configure it. If you configure for RAID 0, it will never go to RAID 1. But if you wipe it and force it to choose a default, that's the same as choosing RAID 1.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
The argument of "why spend money on expensive hardware" also doesn't hold up because this equipment is all lab equipment that has no cost to acquire. Pulled it out of a decom and used it for this purpose.
The system is designed around paying customers, not people receiving the hardware later.
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
The argument of "why spend money on expensive hardware" also doesn't hold up because this equipment is all lab equipment that has no cost to acquire. Pulled it out of a decom and used it for this purpose.
The system is designed around paying customers, not people receiving the hardware later.
Of course, but when any administrator specifically goes into the controller and wipes the configuration and tries to boot the system, it immediately attempts to go back and recreate the very same array.
If I wanted an HP server using JBOD, that should also be an option, regardless of whether the system has hardware RAID.
-
Which it is, but you have to futz around with the controller at start-up.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Of course, but when any administrator specifically goes into the controller and wipes the configuration and tries to boot the system, it immediately attempts to go back and recreate the very same array.
Only if you don't make something else. It has to do "something"; no matter what, there has to be some configuration. If it did anything else, we'd still be having this conversation. You wiped it but didn't configure it, so it was in a situation of having to make a "judgement call" to try to help you, and it made what is, far and away, the only reasonable choice other than stopping the boot completely and forcing you to manually decide - which, given that you had already opted out of that, isn't a great choice.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
If I wanted an HP server using JBOD, that should also be an option, regardless of whether the system has hardware RAID.
That's a different discussion. That controller explicitly doesn't offer JBOD at all. By keeping the hardware in place, you have informed the hardware not to allow JBOD. If JBOD was your goal (which is totally different from wanting RAID 0), then wiping the controller isn't the right action; removing it is. It's a RAID controller; its one purpose is to avoid JBOD.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
Which it is, but you have to futz around with the controller at start-up.
It's not; you can only mimic JBOD in a bad way. It's not safe; you should remove the controller for better safety. But why?
-
Can that controller do RAID 0 with one drive to do fake JBOD?
-
@jt1001001 said in Proxmox install for use with a ceph cluster:
Can that controller do RAID 0 with one drive to do fake JBOD?
No, and it's also not worth bothering with any more, as the OS can still see the hardware controller when booted from USB.
So this lab experiment is over.
-
@jt1001001 said in Proxmox install for use with a ceph cluster:
Can that controller do RAID 0 with one drive to do fake JBOD?
Yes, but it blocks SMART, so you never want to do it; it undermines the stability of the JBOD. There's always a standard controller on the MOBO for the JBOD connections.
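If someone does try it anyway, the rough shape of it with HPE's ssacli (hpssacli on older firmware) is below; a sketch only, where the slot number and the 1I:1:1 drive address are placeholders for your own hardware, and the smartctl line shows the cciss passthrough you'd need since plain SMART queries are blocked:
```
# Sketch only: assumes ssacli is installed and the Smart Array is in
# slot 0; verify with "ssacli ctrl all show".
ssacli ctrl slot=0 physicaldrive all show

# One single-drive RAID 0 logical drive per disk = fake JBOD.
# 1I:1:1 is a placeholder bay address; repeat per drive.
ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0

# Plain "smartctl -a /dev/sda" can't reach the disk behind the card;
# smartmontools needs the cciss passthrough instead:
smartctl -a -d cciss,0 /dev/sda
```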
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
No, and it's also not worth bothering with any more, as the OS can still see the hardware controller when booted from USB.
We could have told you that. True hardware RAID cannot be bypassed; it's physically in the path. If you cut it out, the drives have to vanish.
-
@scottalanmiller said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
No, and it's also not worth bothering with any more, as the OS can still see the hardware controller when booted from USB.
We could have told you that. True hardware RAID cannot be bypassed; it's physically in the path. If you cut it out, the drives have to vanish.
Your snarky remarks aren't helping today.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
@scottalanmiller said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
No, and it's also not worth bothering with any more, as the OS can still see the hardware controller when booted from USB.
We could have told you that. True hardware RAID cannot be bypassed; it's physically in the path. If you cut it out, the drives have to vanish.
Your snarky remarks aren't helping today.
Not meant to be snarky, just explaining how it works. This way you don't have to test different scenarios, because just knowing it is hardware RAID tells you what you need to know. And knowing that you can remove the card completely and get the JBOD functionality you are seeking is key.
-
@scottalanmiller I can remove the card for sure, but it's not a practical lab exercise for what I'm working on.
I might possibly do this in my personal lab, but not here, in this lab.
-
@DustinB3403 said in Proxmox install for use with a ceph cluster:
@scottalanmiller I can remove the card for sure, but it's not a practical lab exercise for what I'm working on.
I might possibly do this in my personal lab, but not here, in this lab.
You can just change the controller to HBA mode. In HBA mode it will work like an HBA.
On older cards you have to flash the firmware, on newer cards it's often just a setting.
From a hardware perspective a RAID card is an HBA + more powerful hardware for parity calcs + a larger memory cache. Hang on a sec, I'll see if I can find the link on how to do it.
A newer way to set an HP controller to HBA mode:
https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
This is an older, longer way to do it:
YouTube video
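For reference, the short version of the newer method is a single ssacli command; a sketch, assuming the controller is in slot 0 and that your card and firmware actually expose HBA mode:
```
# Sketch: assumes ssacli and a controller in slot 0.
ssacli ctrl all show status

# Enable HBA (passthrough) mode. This drops the existing RAID
# config, so only do it on a controller you have already wiped.
ssacli ctrl slot=0 modify hbamode=on forced

# Reboot; the disks should then appear as plain /dev/sdX devices.
```
-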
@Pete-S said in Proxmox install for use with a ceph cluster:
@DustinB3403 said in Proxmox install for use with a ceph cluster:
@scottalanmiller I can remove the card for sure, but it's not a practical lab exercise for what I'm working on.
I might possibly do this in my personal lab, but not here, in this lab.
You can just change the controller to HBA mode. In HBA mode it will work like an HBA.
On older cards you have to flash the firmware, on newer cards it's often just a setting.
From a hardware perspective a RAID card is an HBA + more powerful hardware for parity calcs + a larger memory cache. Hang on a sec, I'll see if I can find the link on how to do it.
A newer way to set an HP controller to HBA mode:
https://ahelpme.com/servers/hewlett-packard/smart-array-p440-enable-or-disable-hba-mode-using-smart-storage-administrator/
This is an older, longer way to do it:
YouTube video
I was just about to reply to turn on passthrough (HBA) mode on the controller, but you nailed it! On the other hand, Proxmox works fine with hardware RAID. As a matter of fact, this is what the vendor themselves recommend: https://pve.proxmox.com/wiki/Raid_controller. Software ZFS RAID can potentially be faster, but it needs to be configured properly, with direct access to the disks, plenty of RAM, and a ZIL for caching.
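To make "configured properly" concrete, here is a minimal sketch of that kind of setup on a host whose controller is in HBA mode; the pool name and by-id paths below are placeholders, not anything from this thread:
```
# Sketch: assumes the disks are visible directly (HBA mode) and uses
# stable by-id paths; DISK1/DISK2/SSD1 are placeholders.
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Optional: a fast SSD as a dedicated ZIL (SLOG) for sync writes.
zpool add tank log /dev/disk/by-id/nvme-SSD1
```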
-
@taurex said in Proxmox install for use with a ceph cluster:
On the other hand, Proxmox works fine with hardware RAID. As a matter of fact, this is what the vendor themselves recommend: https://pve.proxmox.com/wiki/Raid_controller. Software ZFS RAID can potentially be faster, but it needs to be configured properly, with direct access to the disks, plenty of RAM, and a ZIL for caching.
That's possible. I haven't played with it yet, so I don't know.