    XenServer, local storage, and redundancy/backups

    IT Discussion
    Tags: xenserver, backup, redundancy
    • scottalanmiller

      The mismatched local drives will pose a challenge for local RAID. You can use them, but you will get crippled performance.

      • Kelly @scottalanmiller

        @scottalanmiller said:

        The mismatched local drives will pose a challenge for local RAID. You can use them, but you will get crippled performance.

        It looks like there is an onboard 8-port LSI SAS controller in them, so I might be able to do RAID. My desire for HA isn't really about HA as such; it is about surviving a single drive failure, since I had been considering skipping RAID entirely so as not to lose storage capacity.

        • scottalanmiller @Kelly

          @Kelly said:

          It looks like there is an onboard 8-port LSI SAS controller in them, so I might be able to do RAID. My desire for HA isn't really about HA as such; it is about surviving a single drive failure, since I had been considering skipping RAID entirely so as not to lose storage capacity.

          You are going to lose storage capacity to either RAID or RAIN; you can't do any sort of failover without losing capacity. The simplest thing, if you are okay with it, would be to do RAID 6 or RAID 10 (depending on the capacity that you are willing to lose) using MD software RAID and not do HA, just running each machine individually. Use XO to manage them all as a pool.
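
          A rough sketch of what that could look like from the XenServer host's console, assuming eight spare disks at /dev/sdb through /dev/sdi (device names, the array name, and the SR label are placeholders, so adjust to the actual hardware):

              # RAID 10 across all eight disks (half the raw capacity, good write performance):
              mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]

              # ...or RAID 6 instead (capacity of six disks, survives any two drive failures):
              # mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

              # Persist the array definition so it assembles on boot:
              mdadm --detail --scan >> /etc/mdadm.conf

              # Create a local storage repository on the new array (run on each host):
              xe sr-create host-uuid=<this-host-uuid> name-label="Local MD RAID" type=ext content-type=user device-config:device=/dev/md0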

          • Reid Cooper @Kelly

            @Kelly You would have to have a SAS controller of some type in there for the drives that are attached for CEPH now.

            • Kelly @Reid Cooper

              @Reid-Cooper said:

              @Kelly You would have to have a SAS controller of some type in there for the drives that are attached for CEPH now.

              There is an onboard controller, but it isn't running any RAID configuration that I can tell.

              • scottalanmiller @Kelly

                @Kelly said:

                There is an onboard controller, but it isn't running any RAID configuration that I can tell.

                It would not be for CEPH. CEPH is a RAIN system, so there would be no RAID. But what it was doing isn't an issue; what we care about going forward is what we can do. The SAS controller has the drives attached, and that's all we care about when looking at software RAID from MD. The SAS controller isn't what provides the RAID; it just attaches the drives.
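
                If it helps, a quick check from the XenServer console that the controller is just passing individual disks through (no hardware RAID volume in the way) and that nothing is already using MD could look like this (lsscsi may not be installed in dom0; the by-path listing does the same job):

                    # Each physical disk should show up on its own:
                    lsscsi                      # if present in dom0
                    ls -l /dev/disk/by-path/    # one entry per attached disk/partition

                    # Confirm there are no existing MD arrays before creating one:
                    cat /proc/mdstat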

                • Kelly

                  So, to sum up, the recommendation would be to run XS independently on each of the four, configuring RAID 1 or 10, if possible, and then use XO to manage it all? Is that correct?

                  • scottalanmiller @Kelly

                    @Kelly said:

                    So, to sum up, the recommendation would be to run XS independently on each of the four, configuring RAID 1 or 10, if possible, and then use XO to manage it all? Is that correct?

                    That is what I am thinking. Or RAID 6 for more capacity. With eight drives, RAID 6 might be a decent option.
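
                    For a quick sense of the trade-off with eight drives (the 1 TB drive size below is only for illustration):

                        DRIVES=8; SIZE_TB=1
                        echo "RAID 10: $(( DRIVES * SIZE_TB / 2 )) TB usable"    # mirrored pairs: half the raw space
                        echo "RAID 6:  $(( (DRIVES - 2) * SIZE_TB )) TB usable"  # two drives' worth of parity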

                    • Dashrender

                      How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?

                      • scottalanmiller @Dashrender

                        @Dashrender said:

                        How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?

                        CEPH is RAIN. Very advanced. Compare it to Gluster, Lustre, Exablox, or Scale's storage.

                        • Kelly @Dashrender

                          @Dashrender said:

                          How much storage do you have in CEPH now? What kind of resiliency do you have today? I'm completely unfamiliar with CEPH; if you lose a drive, I'm assuming you don't lose an entire array?

                          It is a very cool tool. As @scottalanmiller said, it is a RAIN (I hadn't heard that term before yesterday). Basically it is software that writes to multiple nodes (it is even slow-link aware) and lets you turn commodity hardware into a resilient storage system. I would consider keeping it were it not for the fact that, as near as I can tell, it can't coexist with XS.

                          As for total storage, it is pretty low. Each host is running at < 20 TB in absolute terms. Since Ceph requires three writes, I'm getting quite a bit less than that in usable terms.
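
                          As a back-of-the-envelope check of that overhead (the raw figure is illustrative, not the real number):

                              # Default Ceph replication keeps three copies of every object,
                              # so usable space is roughly a third of raw:
                              RAW_TB=20; REPLICAS=3
                              echo "~$(( RAW_TB / REPLICAS )) TB usable out of $RAW_TB TB raw"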

                          • StrongBad

                            I am pretty sure that CEPH and Xen can coexist, but I don't know about with XenServer. Doing so would likely be very awkward at best.

                            • travisdh1 @StrongBad

                              @StrongBad said:

                              I am pretty sure that CEPH and Xen can coexist, but I don't know about with XenServer. Doing so would likely be very awkward at best.

                              Possibly. Now I want to go experiment with XenServer and CEPH.

                              • scottalanmiller @travisdh1

                                @travisdh1 said:

                                @StrongBad said:

                                I am pretty sure that CEPH and Xen can coexist, but I don't know about with XenServer. Doing so would likely be very awkward at best.

                                Possibly. Now I want to go experiment with XenServer and CEPH.

                                Would make for a fun project.

                                • Kelly @travisdh1

                                  @travisdh1 said:

                                  @StrongBad said:

                                  I am pretty sure that CEPH and Xen can coexist, but I don't know about with XenServer. Doing so would likely be very awkward at best.

                                  Possibly. Now I want to go experiment with XenServer and CEPH.

                                  The most recent articles that I can find about it talk about the ability to make XS a Ceph client, but not necessarily a Ceph node. This is the direction I'd like to go long term with our storage situation: get three whitebox servers with a lot of storage (relative to what I have had) and run Ceph on them to present a back end for XS.
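
                                  A rough sketch of what consuming Ceph from a XenServer host as a client (not a node) might look like, assuming the host has a working kernel RBD client plus a ceph.conf and keyring from the cluster in /etc/ceph/, and using a hypothetical pool/image named rbd/xs-sr:

                                      # Create and map a block device image from the Ceph cluster
                                      # (requires the kernel rbd module on the client host):
                                      rbd create rbd/xs-sr --size 2048000   # size is in MB; name and size are placeholders
                                      rbd map rbd/xs-sr                     # exposes the image locally, e.g. as /dev/rbd0

                                      # Present the mapped device to XenServer as a local LVM storage repository:
                                      xe sr-create host-uuid=<this-host-uuid> name-label="Ceph RBD SR" type=lvm content-type=user device-config:device=/dev/rbd0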

                                  • scottalanmiller

                                    What kind of workload do you run? Mostly Linux, Windows, etc? You have four nodes today, right? Anything keeping you from dropping to fewer?

                                    • travisdh1 @scottalanmiller

                                      @scottalanmiller said:

                                      @travisdh1 said:

                                      @StrongBad said:

                                      I am pretty sure that CEPH and Xen can coexist, but I don't know about with XenServer. Doing so would likely be very awkward at best.

                                      Possibly. Now I want to go experiment with XenServer and CEPH.

                                      Would make for a fun project.

                                      After I start the next upload I'll have some time. Very experimental, as I think I'm going to fire up a XenServer instance in VirtualBox with something like ten 10 GB HDDs, see where I end up, and see whether I finish before the upload completes.
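
                                      For anyone wanting to try the same thing, a sketch of scripting those test disks onto an existing VirtualBox VM (the VM name "xenserver-test" and the disk paths are placeholders):

                                          VM="xenserver-test"
                                          VBoxManage storagectl "$VM" --name "SAS" --add sas   # one controller for all the test disks

                                          # Create and attach ten 10 GB virtual disks:
                                          for i in $(seq 1 10); do
                                              VBoxManage createmedium disk --filename "$HOME/VirtualBox VMs/$VM/disk$i.vdi" --size 10240
                                              VBoxManage storageattach "$VM" --storagectl "SAS" --port $i --device 0 --type hdd --medium "$HOME/VirtualBox VMs/$VM/disk$i.vdi"
                                          done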

                                      • Kelly @scottalanmiller

                                        @scottalanmiller said:

                                        What kind of workload do you run? Mostly Linux, Windows, etc? You have four nodes today, right? Anything keeping you from dropping to fewer?

                                        On these hosts it is all Linux. It is mostly processor- and memory-intensive compute work, with not a lot of storage required at this point. I'm shooting to start with just the two newer hosts and see if, with better management and transparency, we can get by on those and leave the other two for testing or other duties.

                                        • scottalanmiller

                                          If you can get down to two, then you can go for bigger hosts down the road. Start with two-socket hosts if you want, but you can go to four-socket hosts to get double the density without adding nodes. That lets you keep scaling up while staying with fewer hosts to manage. Going to CEPH only makes sense if you are going to a lot of nodes, and it's worth a lot to go to fewer. Since Linux has none of the licensing complications that Windows has with lots of CPUs, you get that extra benefit for "free".

                                          • Kelly @scottalanmiller

                                            @scottalanmiller said:

                                            If you can get down to two, then you can go for bigger hosts down the road. Start with two-socket hosts if you want, but you can go to four-socket hosts to get double the density without adding nodes. That lets you keep scaling up while staying with fewer hosts to manage. Going to CEPH only makes sense if you are going to a lot of nodes, and it's worth a lot to go to fewer. Since Linux has none of the licensing complications that Windows has with lots of CPUs, you get that extra benefit for "free".

                                            They are already quad-socket motherboards, so I have that going for me...

                                            At this point I have zero visibility into what our actual workloads are because of the version of OpenStack Cloud we're running on, so I'm going in a bit blind. That is why I'm going to try to just run two hosts and add as necessary.

                                            My reasoning for looking at Ceph in the long run is that I'd like to centralize all of our storage. We currently have an Oracle (Sun) NAS that is very expensive to maintain, and is a single point of failure. This is where all of our critical data is stored (not my design). It is also the backend for some of the VMs running in our existing cloud.
