    Configuration for Open Source Operating Systems with the SAM-SD Approach

    • scottalanmiller

      Reasons 1 and 2 (OS/HV choice and blind drive swaps) cover nearly every case for choosing hardware RAID and legitimately make it a nearly ubiquitous choice in the SMB.

      • Dashrender @scottalanmiller

        @scottalanmiller said:

        So a brief Introduction to Hardware and Software RAID.

        Pretty much you use hardware RAID when...

        • you run Windows, Hyper-V, or ESXi on the bare metal.
        • you want blind drive swaps in a datacenter.
        • you lack the operating system and/or software RAID experience or support to use software RAID.
        • you just want a lot of convenience.

        For the relatively small cost of a RAID controller, there are a lot of reasons to use one.

        • scottalanmiller @Dashrender

          @Dashrender said:

          For the relatively small cost of a RAID controller, there are a lot of reasons to use one.

          And most people agree and buy them. It's generally a no-brainer in the SMB. For me, it's reason 2: I want everyone to have blind swap. I can't overstate how valuable that is in the real world.

          • scottalanmiller

            Are you thinking that you will do software RAID between cluster nodes in the example above? I'm unclear whether we are looking at up to three layers of RAID (hardware, then software, then network?). That's getting extreme.

            • GotiTServicesInc

              I wasn't sure how far you would really need to take it for mission critical data. I was assuming that at some point you only want the data mirrored once, whether that was at the drive level or at the system level. I'm assuming if you have huge storage arrays you'd want two SANs with JBOD arrays, mirrored across each other? Or would you want a RAID 6 setup in each SAN and then each SAN mirrored to the other? That way (like we talked about earlier) you can have a few drives go bad and not have to rebuild the entire array. In my mind the second way (with RAID 6) seems to make the most sense.

              • scottalanmiller @GotiTServicesInc

                @GotiTServicesInc said:

                I wasn't sure how far you would really need to take it for mission critical data. I was assuming that at some point you only want the data mirrored once, whether that was at the drive level or at the system level. I'm assuming if you have huge storage arrays you'd want two SANs with JBOD arrays, mirrored across each other? Or would you want a RAID 6 setup in each SAN and then each SAN mirrored to the other? That way (like we talked about earlier) you can have a few drives go bad and not have to rebuild the entire array. In my mind the second way (with RAID 6) seems to make the most sense.

                No large, mission critical storage system is using RAID 0 per node. If you think about drive failure risks and how long it takes to rebuild a node over the network, the risk would be insane. Say you have two 100TB SANs. If you lost a single drive on one node, you've lost 100TB of data and all redundancy. Poof, gone, from one drive having failed.

                Now you fail over to your second node. You now have 100TB on a SAN on RAID 0, zero redundancy! Assuming you could replace the failed SAN in four hours and start a RAID 1 rebuild across the network, and assuming that you have 8Gb/s Fibre Channel (which is 1GB/s), that would take 27.7 hours to copy back over, assuming zero overhead from the disks, zero overhead from the network protocol, zero delays, and absolutely no one accessing the SAN in any way during that 27.7 hour period. Any reads will slow down the network; any writes will slow down the network and require additional data to be transferred.

                So a theoretical best case restore is roughly 32 hours (the four hour swap plus the 27.7 hour copy), during which you have 100TB sitting on RAID 0 without anything to protect you from the slightest drive blip. And suddenly you would be vulnerable to UREs too, which would be expected to happen rather often in an array that large.
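
                To put a rough number on the URE exposure, here is a minimal Python sketch. The URE rates (1 per 10^14 bits for consumer/nearline drives, 1 per 10^15 for enterprise drives) are typical spec-sheet assumptions, not figures from this thread.

                  # Expected unrecoverable read errors while re-reading 100TB.
                  # URE rates below are assumed spec-sheet values, not from the thread.
                  ARRAY_BYTES = 100 * 10**12
                  bits_read = ARRAY_BYTES * 8

                  for label, bits_per_ure in [("consumer/nearline, 1 per 1e14 bits", 1e14),
                                              ("enterprise, 1 per 1e15 bits", 1e15)]:
                      expected = bits_read / bits_per_ure
                      print(f"{label}: ~{expected:.1f} expected UREs during the copy")
                  # consumer/nearline: ~8.0 -- essentially guaranteed to hit at least one
                  # enterprise: ~0.8 -- still more likely than not to hit one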

                More realistically you would expect to be restoring for at least three days if you had dedicated 8Gb/s FC, and more like a week and a half if you were only on dedicated GigE iSCSI.
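
                The raw copy times are easy to sanity check. A minimal Python sketch, assuming the same zero-overhead links and the four hour hardware swap described above:

                  # Best-case window to re-mirror 100TB across the network.
                  ARRAY_BYTES = 100 * 10**12
                  SWAP_HOURS = 4  # time to replace the failed node, per the post

                  links = {
                      "8Gb/s Fibre Channel (~1GB/s)": 1_000_000_000,  # bytes per second
                      "dedicated GigE iSCSI (~125MB/s)": 125_000_000,
                  }

                  for name, rate in links.items():
                      copy_h = ARRAY_BYTES / rate / 3600
                      total_h = copy_h + SWAP_HOURS
                      print(f"{name}: {copy_h:.1f}h copy, ~{total_h / 24:.1f} days total")
                  # 8Gb/s FC:   27.8h copy, ~1.3 days total (the ~32 hour best case)
                  # GigE iSCSI: 222.2h copy, ~9.4 days total (about a week and a half)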

                That's a long time to pray a RAID 0 doesn't have an issue.

                • scottalanmiller

                  And that rebuild scenario would come up pretty often, because drives fail often when you have large numbers of them. This isn't a theoretical edge case; you would expect this to be happening a couple or a few times a year on a really large SAN cluster like this. That's 200TB of drives after redundancy. Even with big 4TB drives, that's 50 of them. They are going to fail from time to time.
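
                  As a rough illustration in Python (the ~3% annualized failure rate is an assumed, typical figure, not one from the thread):

                    drives = 200 // 4            # 200TB of raw capacity in 4TB drives = 50 drives
                    afr = 0.03                   # assumed annualized failure rate per drive
                    print(drives, drives * afr)  # 50 drives, ~1.5 expected failures a year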

                  • GotiTServicesInc

                    So for a large setup like that (50 drives), would you want to do a RAID 6 per 10 drives and software RAID 0 them together, to allow for a quicker rebuild time with more drives being able to fail simultaneously?

                    • scottalanmiller

                      With a 100TB array, even RAID 6 is getting to be a bit risky. A rebuild could take an extremely long time. A RAID 6(1) is really the riskiest thing that I would consider for something that is pretty important, and RAID 10 would remain the choice for ultra mission critical.

                      • scottalanmiller @GotiTServicesInc

                        @GotiTServicesInc said:

                        So for a large setup like that (50 drives), would you want to do a RAID 6 per 10 drives and software RAID 0 them together, to allow for a quicker rebuild time with more drives being able to fail simultaneously?

                        You never mix hardware and software RAID, not in the real world. That's purely a theoretical thing. Of course you can, but no enterprise would do this. Use all one or all the other.

                        • scottalanmiller @GotiTServicesInc

                          @GotiTServicesInc said:

                          So for a large setup like that (50 drives), would you want to do a RAID 6 per 10 drives and software RAID 0 them together, to allow for a quicker rebuild time with more drives being able to fail simultaneously?

                          You would either accept the risks of RAID 6, go for full RAID 10, or blend the two with RAID 60. At 25 drives per node you are just large enough to consider RAID 60 in a practical sense.

                          http://www.smbitjournal.com/2012/11/choosing-a-raid-level-by-drive-count/

                          • scottalanmiller

                            Choices with 25 drives per node with RAID 60 would be...

                            5 sets of 5 drives. That's it. Only one configuration.

                            So the RAID 60 split would be 15 data drives and 10 parity drives. Not a good mix at all. You lose a ton of capacity as well as performance for only moderate additional protection.
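
                            A minimal Python sketch of why 25 drives leave only one uniform RAID 60 layout (at least two RAID 6 spans, each of at least four drives):

                              DRIVES = 25
                              for spans in range(2, DRIVES // 4 + 1):
                                  if DRIVES % spans == 0 and DRIVES // spans >= 4:
                                      per_span = DRIVES // spans
                                      data = spans * (per_span - 2)  # each RAID 6 span loses 2 drives to parity
                                      print(spans, "spans of", per_span, "->", data, "data /", spans * 2, "parity")
                              # Only output: 5 spans of 5 -> 15 data / 10 parity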

                            • scottalanmiller

                              Of course, to get to our capacity of 100TB we have to consider what each option takes, assuming 4TB drives....

                              RAID 0: 25 drives
                              RAID 6: 27 drives
                              RAID 60/5: 35 drives
                              RAID 10: 50 drives
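
                              A minimal Python sketch of those counts, assuming 4TB drives and reading RAID 60/5 as five RAID 6 spans (each span losing two drives to parity):

                                data_drives = 100 // 4        # 25 drives of usable capacity needed
                                raid0 = data_drives           # 25: no redundancy at all
                                raid6 = data_drives + 2       # 27: one group, two parity drives
                                raid60 = data_drives + 2 * 5  # 35: five spans, two parity drives each
                                raid10 = data_drives * 2      # 50: every drive mirrored
                                print(raid0, raid6, raid60, raid10)  # 25 27 35 50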

                              • GotiTServicesInc

                                So really the only correct solution is a RAID 10 setup with mirroring over FC or iSCSI? I still feel like you'd be stuck with a whole lot of rebuild time if a drive failed in one of the arrays, although the rebuild time should be faster as long as you don't have a failure in both JBOD arrays on a single server?

                                • Dashrender

                                  But in a 20 drive RAID 10 array, you have 10 RAID 1 pairs that are RAIDed 0 over them. So a single drive failure only requires the mirroring of a single drive.

                                  • dafyre

                                    If I understand all this right, I think that RAID 10 would have the fastest rebuild time since it only has to rebuild a single disk from its partner in the same server. It doesn't have to fight network congestion or anything like that quite so much.

                                    • scottalanmiller @GotiTServicesInc

                                      @GotiTServicesInc said:

                                      So really the only correct solution is a RAID 10 setup with mirroring over FC or iSCSI?

                                      You can't really mirror over a SAN block protocol like that. In theory you can, but it is incredibly impractical and nothing really leverages those technologies for that. You mirror either at a RAID level or via a replication protocol like DRBD or HAST.

                                      • scottalanmiller @GotiTServicesInc

                                        @GotiTServicesInc said:

                                        ....although the rebuild time should be faster as long as you don't have a failure in both JBOD arrays on a single server?

                                        There is no JBOD. You wouldn't have that in any situation.

                                        • dafyre

                                          What is it that folks like HP, NetApp, EMC, et al. do? Do they do it at the block level or via some other method like DRBD?

                                          • scottalanmiller @GotiTServicesInc

                                            @GotiTServicesInc said:

                                            So really the only correct solution is a RAID 10 ....? I still feel like you'd be stuck with a whole lot of rebuild time if a drive failed in one of the arrays

                                            A drive failure on RAID 10 is always the same no matter how big the array is. A drive resilver on RAID 10 is always, without exception, just a RAID 1 pair doing a block-by-block copy from one drive to its partner. That's it. So if you were using 4TB drives like in our example, the rebuild time is the time that it takes to copy one drive to another directly. That's all. It's the smallest rebuild time possible for any RAID system. You really can't make it faster than that.
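
                                            For a sense of scale, a minimal Python sketch; the ~150MB/s sustained copy rate is an assumed figure for a typical nearline SATA drive, not one from the thread:

                                              DRIVE_BYTES = 4 * 10**12          # one 4TB drive
                                              RATE = 150 * 10**6                # assumed sustained bytes per second
                                              print(DRIVE_BYTES / RATE / 3600)  # ~7.4 hours, no matter how many pairs the array has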
