
    How Does Local Storage Offer High Availability

    IT Discussion
    Tags: storage, replicated local storage, rls, high availability, san, risk
    • Dashrender @dafyre

      @dafyre said:

      @scottalanmiller said:

      If redundancy provides that reliability, no problem. If magic fairy dust does, that's fine too.

      Where can I find 3 boxes of Magic fairy dust? My supplies are starting to run low, lol.

      That's kinda been my whole point though. If redundancy doesn't provide a better perception of reliability, then why bother with it?

      If I knew that redundancy wasn't going to help improve the perception of reliability, I'd much rather work on a single server that I knew was going to fail and restore it from backup when the failure happens.

      I've been on both sides of that road.

      You keep using the term "perception". What does perception have to do with anything?

      • Dashrender @dafyre

        @dafyre said:

        @scottalanmiller said:

        @dafyre said:

        Up until my experience with an almost fully virtualized infrastructure, I would rather have reliable servers.

        However, after my experience with virtualized infrastructure, my mindset changed.

        It should not change. Resultant reliability is the only value.

        Right. Mine changed because the reliability of the single systems we had (on the budget that we had to work with) resulted in systems being not as reliable as they should have been.

        The resultant reliability of having two VMware servers with replicated storage was increased, because the perception was that the system was more reliable because things did not go down nearly as often as was happening otherwise.

        That's not perception - that's reality. You found one option, an option through redundancy that provided you with reliability.

        The lack of redundancy does not mean lack of reliability. Your continued stance on perception seems to imply that not having redundancy would mean you would have less or no reliability.

        I'd argue, in the case of virtualization, redundancy is often a major player in reliability, but not the sole requirement.

        • scottalanmiller @dafyre

          @dafyre said:

          @scottalanmiller said:

          @dafyre said:

          Up until my experience with an almost fully virtualized infrastructure, I would rather have reliable servers.

          However, after my experience with virtualized infrastructure, my mindset changed.

          It should not change. Resultant reliability is the only value.

          Right. Mine changed because the reliability of the single systems we had (on the budget that we had to work with) resulted in systems being not as reliable as they should have been.

          The resultant reliability of having two VMware servers with replicated storage was increased, because the perception was that the system was more reliable because things did not go down nearly as often as was happening otherwise.

          I'm confused, though. Sure, you improved reliability (I'm confused about the perception bit too) but why did this make you change your mindset versus a single reliable server? Since you didn't use a single reliable server for comparison, what changed the mindset?

          • scottalanmiller @Dashrender

            @Dashrender said:

            I'd argue, in the case of virtualization, redundancy is often a major player in reliability, but not the sole requirement.

            I'd argue that virtualization is a red herring. It's good and we should always have it, and high availability systems always have (going back to the 1960s). But it's not a factor here.

            Redundancy is the most common means of getting reliability, but it is definitely not the sole means.

            • scottalanmiller @Dashrender

              @Dashrender said:

              I guess the term independent in RAID is what drives @scottalanmiller's point the most. Redundant Array of Independent Drives = so at a drive-only level the drives are Independent, they are Redundant, and they are in an Array.

              Wow - I've never looked at it this way before.

              I think the "it must mean the data" perception probably comes from the fact that many people state that RAID is about improving reliability. But it isn't. That's a big reason that people choose it, but RAID is about increasing speed, capacity and/or reliability by using cheap Winchester drives rather than using some other drive type. It's one of the three.

              So when we look at it that way, RAID 0 has redundancy in the sense of more than one disk AND, in two of its three use cases, redundancy in the sense that something can fail and something else takes over.

              If we need a cache with increased speed over a single drive and we have a five-disk RAID 0, then when one fails we just go down to a four-disk RAID 0. Not as fast as before, but still faster than a single drive.
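              To put rough numbers on that, here's a minimal sketch (assuming ideal striping and an illustrative ~150 MB/s per drive - a guess for the example, not a benchmark of any real array) of how stripe throughput scales with drive count:

              ```python
              # Ideal-striping approximation: RAID 0 sequential throughput scales
              # roughly linearly with drive count. The per-drive figure below is an
              # illustrative assumption, not a measured number.

              PER_DRIVE_MBPS = 150.0  # assumed throughput of a single drive

              def raid0_throughput(drive_count: int, per_drive: float = PER_DRIVE_MBPS) -> float:
                  """Best-case sequential throughput of an N-drive RAID 0 stripe, in MB/s."""
                  return drive_count * per_drive

              print(raid0_throughput(5))  # 750.0 -> the original five-disk cache array
              print(raid0_throughput(4))  # 600.0 -> degraded to four disks after a failure
              print(raid0_throughput(1))  # 150.0 -> single-drive baseline
              ```

              Even degraded, the four-disk stripe is still four times the single-drive baseline, which is why the surviving disks count as functional redundancy for stateless data like a cache.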

              • wirestyle22 @scottalanmiller

                @scottalanmiller said:

                @Dashrender said:

                I guess the term independent in RAID is what drives @scottalanmiller's point the most. Redundant Array of Independent Drives = so at a drive-only level the drives are Independent, they are Redundant, and they are in an Array.

                Wow - I've never looked at it this way before.

                I think the "it must mean the data" perception probably comes from the fact that many people state that RAID is about improving reliability. But it isn't. That's a big reason that people choose it, but RAID is about increasing speed, capacity and/or reliability by using cheap Winchester drives rather than using some other drive type. It's one of the three.

                So when we look at it that way, RAID 0 has redundancy in the sense of more than one disk AND, in two of its three use cases, redundancy in the sense that something can fail and something else takes over.

                If we need a cache with increased speed over a single drive and we have a five-disk RAID 0, then when one fails we just go down to a four-disk RAID 0. Not as fast as before, but still faster than a single drive.

                That is definitely an interesting way to look at it.

                • Dashrender @scottalanmiller

                  @scottalanmiller said:

                  So when we look at it that way, RAID 0 has redundancy in the sense of more than one disk AND, in two of its three use cases, redundancy in the sense that something can fail and something else takes over.

                  If we need a cache with increased speed over a single drive and we have a five-disk RAID 0, then when one fails we just go down to a four-disk RAID 0. Not as fast as before, but still faster than a single drive.

                  That may be so, but who would care? In your RAID 0, if you lose any drive, all of your data is gone, so being redundant is pointless in that case - the only thing you care about with RAID 0 is the array for performance, not reliability.

                  • Dashrender

                    @scottalanmiller

                    Is it possible to have a system fail over to another system with zero actual failure?

                    Of course I know the answer is yes - we've seen this in videos where a laptop is watching a video that's streaming from one VM, that VM is moved/failed over to another server, and the video either never stops... or has a small pause, but no actual failure.

                    • scottalanmiller @Dashrender

                      @Dashrender said:

                      That may be so, but who would care? In your RAID 0, if you lose any drive, all of your data is gone, so being redundant is pointless in that case - the only thing you care about with RAID 0 is the array for performance, not reliability.

                      You are stuck on the idea that your array always carries stateful data. That's an incorrect assumption. RAID 0 arrays can be perfectly functional when degraded if they are not used for stateful data. So the redundancy remains fully useful.

                      • Dashrender @scottalanmiller

                        @scottalanmiller said:

                        I'm confused, though. Sure, you improved reliability (I'm confused about the perception bit too) but why did this make you change your mindset versus a single reliable server? Since you didn't use a single reliable server for comparison, what changed the mindset?

                        I agree with Scott.

                        Just to keep this going, @dafyre, please tell us what the old failing system looked like. Was it 10 servers, each with internal disks? What was failing?

                        • scottalanmiller @Dashrender

                          @Dashrender said:

                          Of course I know the answer is yes - we've seen this in videos where a laptop is watching a video that's streaming from one VM, that VM is moved/failed over to another server, and the video either never stops... or has a small pause, but no actual failure.

                          There can be zero pause, but the cost gets higher and higher to do that stuff. And there are other penalties. For example, IBM, HP and Oracle all make systems that will allow you to rip CPUs out of them while they are running. No blips. But they introduce some latency for all operations to make this possible.

                          • wirestyle22 @scottalanmiller

                            @scottalanmiller said:

                            @Dashrender said:

                            Of course I know the answer is yes - we've seen this in videos where a laptop is watching a video that's streaming from one VM, that VM is moved/failed over to another server, and the video either never stops... or has a small pause, but no actual failure.

                            There can be zero pause, but the cost gets higher and higher to do that stuff. And there are other penalties. For example, IBM, HP and Oracle all make systems that will allow you to rip CPUs out of them while they are running. No blips. But they introduce some latency for all operations to make this possible.

                            Even the fact that this is possible is amazing to me.

                            • scottalanmiller @Dashrender

                              @Dashrender said:

                              Just to keep this going, @dafyre, please tell us what the old failing system looked like. Was it 10 servers, each with internal disks? What was failing?

                              And it doesn't mean that the old system was "bad" - it could have just been normal.

                              Two HP Proliant DL380 servers in a cluster (if the clustering is good) are way more reliable than a single Proliant DL380.

                              But are two of them as reliable as a single HP Integrity SuperDome? Not likely. Those things never go down. Never. It's unheard of.

                              Now which is more cost effective? Buying 100 Proliants instead of one SuperDome, of course. Which is more powerful? One SuperDome.
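                              To see why the "if the clustering is good" caveat matters, here's a minimal sketch of the availability math with assumed figures (illustrative guesses, not vendor numbers). The naive parallel formula assumes independent failures and flawless failover, so real clusters land somewhere below it:

                              ```python
                              # Naive availability math for the cluster-vs-big-iron comparison.
                              # The figure below is an illustrative assumption, not a vendor spec.

                              HOURS_PER_YEAR = 24 * 365

                              single_dl380 = 0.999  # assume ~99.9% availability for one commodity server

                              # Ideal two-node cluster: service is down only when both nodes are
                              # down at once. Assumes independent failures and perfect failover,
                              # which real clustering software never quite delivers.
                              ideal_cluster = 1 - (1 - single_dl380) ** 2

                              for name, a in [("single DL380", single_dl380),
                                              ("ideal 2-node cluster", ideal_cluster)]:
                                  downtime_hours = (1 - a) * HOURS_PER_YEAR
                                  print(f"{name:22s} {a:.6f}  ~{downtime_hours:.2f} h/yr down")
                              ```

                              On paper the pair drops from roughly 8.8 hours of downtime a year to about half a minute; in practice the clustering layer itself fails, failovers misfire and failures correlate, which is why a machine engineered not to fail in the first place can still come out ahead.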

                              • scottalanmiller @wirestyle22

                                @wirestyle22 said:

                                Even the fact that this is possible is amazing to me.

                                Ever see an HP Integrity withstand an artillery round? There is a video of an HP Integrity doing that (easily ten years old) and another one of an HP 3PAR SAN taking one (more recent, actually the video was made by @HPEStorageGuy who is here in the community.) The HP 3PAR is basically HP's "mini computer" class of storage (same class as the HP Integrity is in servers).

                                In both cases, they fired an artillery round into the chassis of a running HP system (bolted to a surface of course as the thing would have gone flying) and in both cases the system stayed up and running, didn't lose a ping.

                                • wirestyle22 @scottalanmiller

                                  @scottalanmiller said:

                                  @Dashrender said:

                                  Just to keep this going, @dafyre, please tell us what the old failing system looked like. Was it 10 servers, each with internal disks? What was failing?

                                  And it doesn't mean that the old system was "bad" - it could have just been normal.

                                  Two HP Proliant DL380 servers in a cluster (if the clustering is good) are way more reliable than a single Proliant DL380.

                                  But are two of them as reliable as a single HP Integrity SuperDome? Not likely. Those things never go down. Never. It's unheard of.

                                  Now which is more cost effective? Buying 100 Proliants instead of one SuperDome, of course. Which is more powerful? One SuperDome.

                                  Can you clarify what you mean? To what do they attribute the higher uptime, versus a ProLiant, if both are configured correctly? Honest question.

                                  • wirestyle22 @scottalanmiller

                                    @scottalanmiller said:

                                    @wirestyle22 said:

                                    Even the fact that this is possible is amazing to me.

                                    Ever see an HP Integrity withstand an artillery round? There is a video of an HP Integrity doing that (easily ten years old) and another one of an HP 3PAR SAN taking one (more recent, actually the video was made by @HPEStorageGuy who is here in the community.) The HP 3PAR is basically HP's "mini computer" class of storage (same class as the HP Integrity is in servers).

                                    In both cases, they fired an artillery round into the chassis of a running HP system (bolted to a surface of course as the thing would have gone flying) and in both cases the system stayed up and running, didn't lose a ping.

                                    That's wild. HP is doin' it right now.

                                    • Dashrender @scottalanmiller

                                      @scottalanmiller said:

                                      @Dashrender said:

                                      That may be so, but who would care? In your RAID 0, if you lose any drive, all of your data is gone, so being redundant is pointless in that case - the only thing you care about with RAID 0 is the array for performance, not reliability.

                                      You are stuck on the idea that your array always carries stateful data. That's an incorrect assumption. RAID 0 arrays can be perfectly functional when degraded if they are not used for stateful data. So the redundancy remains fully useful.

                                      Really? The array will stay active in a degraded state? I had no idea - I figured the RAID controller would basically just kill the array once a drive was lost. Yep, me and assuming = mistake...

                                      • scottalanmiller @wirestyle22

                                        @wirestyle22 said:

                                        Can you clarify what you mean? To what do they attribute the higher uptime, versus a ProLiant, if both are configured correctly? Honest question.

                                        So the HPE Proliant line is a micro-computer line based on the PC architecture. They are, just for clarity, the industry reference standard for commodity servers (generally considered the best in the business going back to the Compaq Proliant era in the mid-1990s). They are very good, but they are "commodity". They are basically no different (more or less) than any PC you could build yourself with parts ordered online (this is not totally true - there is a tonne of HPE-unique engineering, they are tested like crazy, they have custom firmware and boards, they buy better parts than are available on the open market, they add some proprietary stuff like the iLO, etc.) but, more or less, these are PCs. The DL380 is the best-selling server in the world, from any vendor, in any category.

                                        The HPE Integrity line is a mini-computer line. They are not PCs. Most of them (not all) are built on the IA64 EPIC architecture and have RAS (Reliability, Availability and Serviceability) features that the PC architecture does not support. For example, hot-swappable memory and CPUs are standard. Things like redundant controllers are common. The overall build and design is less about cost savings and more about never failing (or being fixable without going down). It's a truly different class of device. They are also bigger devices - you don't put one in just to run your website. But you can fit more workloads on them, making it make more sense to invest in a single device that almost never fails.

                                        • scottalanmiller @wirestyle22

                                          @wirestyle22 said:

                                          @scottalanmiller said:

                                          In both cases, they fired an artillery round into the chassis of a running HP system (bolted to a surface of course as the thing would have gone flying) and in both cases the system stayed up and running, didn't lose a ping.

                                          That's wild. HP is doin' it right now.

                                          HP has been doing this stuff for decades. This isn't new technology. You can get similar systems from IBM, Oracle and Fujitsu. Dell does not dabble in the mini and mainframe market.

                                          From IBM this would be the i and z series (i is mini and z is mainframe). From Oracle this is the M series. Fujitsu makes the M series for Oracle (they co-design it and Fujitsu builds it) and sells it themselves under their own branding, which I don't know as it is not sold in America - you just buy the Oracle-branded ones.

                                          • wirestyle22 @scottalanmiller

                                            @scottalanmiller said:

                                            @wirestyle22 said:

                                            Can you clarify what you mean? To what do they attribute the higher uptime, versus a ProLiant, if both are configured correctly? Honest question.

                                            So the HPE Proliant line is a micro-computer line based on the PC architecture. They are, just for clarity, the industry reference standard for commodity servers (generally considered the best in the business going back to the Compaq Proliant era in the mid-1990s). They are very good, but they are "commodity". They are basically no different (more or less) than any PC you could build yourself with parts ordered online (this is not totally true - there is a tonne of HPE-unique engineering, they are tested like crazy, they have custom firmware and boards, they buy better parts than are available on the open market, they add some proprietary stuff like the iLO, etc.) but, more or less, these are PCs. The DL380 is the best-selling server in the world, from any vendor, in any category.

                                            The HPE Integrity line is a mini-computer line. They are not PCs. Most of them (not all) are built on the IA64 EPIC architecture and have RAS (Reliability, Availability and Serviceability) features that the PC architecture does not support. For example, hot-swappable memory and CPUs are standard. Things like redundant controllers are common. The overall build and design is less about cost savings and more about never failing (or being fixable without going down). It's a truly different class of device. They are also bigger devices - you don't put one in just to run your website. But you can fit more workloads on them, making it make more sense to invest in a single device that almost never fails.

                                            Interesting. Thank you for the information.
