XenServer, local storage, and redundancy/backups
-
@DustinB3403 said:
Since you have more than 2 hosts you could use the built-in tools from Xen to create a highly reliable pool.
I believe @halizard might even work with more than 2 hosts. But their bread and butter, so to speak, is a 2 host setup from everything I've read.
It is built on DRBD, which is really just two hosts.
-
@scottalanmiller Thank you for the clarification.
-
@scottalanmiller said:
Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a CEPH one or if that is all on the same hardware?
If the former, why not keep CEPH and only move the top layer from OpenStack to XS?
Also, what is driving the move away from OpenStack? Just a desire for simplicity?
I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated. The current hardware requirements preclude that at this point. My ultimate goal is to move the storage to dedicated hardware, perhaps utilizing Ceph then, but until then I need to get a working virtualization platform.
-
As for the driving factor, it is that we're trying to simplify our infrastructure. It looked like we might be able to achieve this using Mirantis to package up OpenStack, but we're having issues getting their deployment tools to work. If I had more time to play with things I might fight it to the point of working, but we're currently running in a semi-crippled state (one of the hosts removed itself from the cloud), and I need to get something up and running sooner rather than later, but something that can be a longer-term solution. We don't really need a private cloud. It is mostly a convenience, and at this point it appears not to be worth the overhead to set up and maintain.
-
@Kelly said:
@scottalanmiller said:
Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a CEPH one or if that is all on the same hardware?
If the former, why not keep CEPH and only move the top layer from OpenStack to XS?
Also, what is driving the move away from OpenStack? Just a desire for simplicity?
I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated.
Why do you feel the need to separate the storage and compute? What business reason exists to justify the added cost and management headache?
I get that you currently have a management headache, and I do like the idea of moving to something more reliable. XenServer with HA-Lizard and Xen Orchestra would be a great drop-in replacement; it's what I'm migrating to, at least.
-
@Kelly said:
@scottalanmiller said:
Can you break down the hardware better? I'm unclear if you have an OpenStack compute structure AND a CEPH one or if that is all on the same hardware?
If the former, why not keep CEPH and only move the top layer from OpenStack to XS?
Also, what is driving the move away from OpenStack? Just a desire for simplicity?
I have four hosts that are all OpenStack and Ceph Nodes. Bad design on all counts. I wish they were separated. The current hardware requirements preclude that at this point. My ultimate goal is to move the storage to dedicated hardware, perhaps utilizing Ceph then, but until then I need to get a working virtualization platform.
Gotcha, okay. I was hoping that the CEPH infrastructure could remain, but I guess not.
-
@Kelly said:
As for the driving factor, it is that we're trying to simplify our infrastructure. It looked like we might be able to achieve this using Mirantis to package up OpenStack, but we're having issues getting their deployment tools to work. If I had more time to play with things I might fight it to the point of working, but we're currently running in a semi-crippled state (one of the hosts removed itself from the cloud), and I need to get something up and running sooner rather than later, but something that can be a longer-term solution. We don't really need a private cloud. It is mostly a convenience, and at this point it appears not to be worth the overhead to set up and maintain.
I totally get that a private cloud isn't likely to make sense with just four nodes; that would be pretty crazy overhead.
I was just thinking of what might be easy going forward.
-
Do you really need HA? HA adds complication. Although there is an option here... with four nodes you could do TWO HA-Lizard clusters and, I think, put it all under XO for a single pane of glass. Not as nice as a single four-node cluster, but free and works with what you have, more or less.
-
The mismatched local drives will pose a challenge for local RAID. You can use them, but the array will be limited by the smallest and slowest drives, so you will get crippled performance.
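To make the mismatch cost concrete, here is a small sketch (plain Python; the drive sizes are hypothetical, since the actual hardware wasn't specified) of how MD RAID sizes array members:

```python
def md_member_capacity(drive_sizes_tb):
    # MD RAID treats every member as the size of the smallest drive,
    # so any extra space on larger drives is simply unused.
    usable_per_member = min(drive_sizes_tb)
    wasted = sum(drive_sizes_tb) - usable_per_member * len(drive_sizes_tb)
    return usable_per_member, wasted

# Hypothetical mismatched set: 4x 2 TB and 4x 1 TB drives
per_member, wasted = md_member_capacity([2, 2, 2, 2, 1, 1, 1, 1])
print(per_member)  # 1 (each member counts as only 1 TB)
print(wasted)      # 4 (TB stranded on the larger drives)
```

Performance degrades the same way: the array can only go as fast as its slowest member.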
-
@scottalanmiller said:
The mismatched local drives will pose a challenge for local RAID. You can use them, but the array will be limited by the smallest and slowest drives, so you will get crippled performance.
It looks like there is an onboard 8 port LSI SAS controller in them, so I might be able to do RAID. My desire for HA is not really about HA; it is more about survivability if a single drive fails, since I had been considering skipping RAID entirely so as not to lose storage capacity.
-
@Kelly said:
It looks like there is an onboard 8 port LSI SAS controller in them, so I might be able to do RAID. My desire for HA is not really about HA; it is more about survivability if a single drive fails, since I had been considering skipping RAID entirely so as not to lose storage capacity.
You are going to lose storage capacity to either RAID or RAIN. You can't get any sort of failure protection without losing capacity. The simplest thing, if you are okay with it, would be to do RAID 6 or RAID 10 (depending on how much capacity you are willing to lose) using MD software RAID and not do HA, but just run each machine individually. Use XO to manage them all as a pool.
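The RAID 6 vs. RAID 10 capacity trade-off can be sketched out in a few lines (plain Python; the eight-drive count matches the onboard controller mentioned above, but the 2 TB drive size is an assumption):

```python
def raid6_usable(drives, size_tb):
    # RAID 6: two drives' worth of capacity go to dual parity
    return (drives - 2) * size_tb

def raid10_usable(drives, size_tb):
    # RAID 10: every block is mirrored, so half the raw capacity
    return drives // 2 * size_tb

# Hypothetical example: eight 2 TB drives per host
print(raid6_usable(8, 2))   # 12 TB usable
print(raid10_usable(8, 2))  # 8 TB usable
```

With eight drives, RAID 6 keeps noticeably more capacity, at the cost of slower writes and longer rebuilds than RAID 10.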
-
@Kelly You would have to have a SAS controller of some type in there for the drives that are attached for CEPH now.
-
@Reid-Cooper said:
@Kelly You would have to have a SAS controller of some type in there for the drives that are attached for CEPH now.
There is an onboard controller, but it isn't running any RAID configuration that I can tell.
-
@Kelly said:
There is an onboard controller, but it isn't running any RAID configuration that I can tell.
It would not be for CEPH. CEPH is a RAIN system; there would be no RAID. But what it was doing isn't an issue; what we care about going forward is what we can do. The SAS controller has the drives attached, and that's all we care about when looking at software RAID from MD. The SAS controller isn't what provides the RAID; it is just what attaches the drives.
-
So, to sum up, the recommendation would be to run XS independently on each of the four hosts, configuring RAID 1 or 10 if possible, and then use XO to manage it all? Is that correct?
-
@Kelly said:
So, to sum up, the recommendation would be to run XS independently on each of the four hosts, configuring RAID 1 or 10 if possible, and then use XO to manage it all? Is that correct?
That is what I am thinking. Or RAID 6 for more capacity. With eight drives, RAID 6 might be a decent option.
-
How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?
-
@Dashrender said:
How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?
CEPH is RAIN. Very advanced. Compare it to Gluster, Lustre, Exablox, or Scale's storage.
-
@Dashrender said:
How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?
It is a very cool tool. As @scottalanmiller said, it is a RAIN (I hadn't heard that term before yesterday). Basically it is software that writes to multiple nodes (it is even slow-link aware) and enables you to convert commodity hardware into a resilient storage system. I would consider keeping it, but as near as I can tell it is not able to coexist with XS.
As for total storage, it is pretty low. Each host is running < 20 TB in absolute terms. Since Ceph requires three writes, I'm getting quite a bit less usable space than that on average.
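For rough numbers, Ceph's default 3x replication divides raw capacity by the replica count, while RAID 6 loses only two drives' worth per host. A quick sketch (plain Python; the ~20 TB/host figure is from the post, the per-host drive breakdown is an assumption, and note the two schemes protect different failure domains: Ceph survives whole-host loss, local RAID 6 only drive loss):

```python
def ceph_usable(raw_total_tb, replicas=3):
    # Replicated Ceph pool: usable space is raw capacity / replica count
    return raw_total_tb / replicas

def raid6_usable_per_host(drives, drive_tb):
    # MD RAID 6 per host: lose two drives' capacity to parity
    return (drives - 2) * drive_tb

hosts = 4
raw_per_host = 20  # "< 20 TB" per host, per the post above
print(ceph_usable(hosts * raw_per_host))      # ~26.7 TB usable cluster-wide
print(hosts * raid6_usable_per_host(8, 2.5))  # 60.0 TB usable if 8x 2.5 TB per host
```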
-
I am pretty sure that CEPH and Xen can coexist, but I don't know about with XenServer. Doing so would likely be very awkward at best.