    Storage Virtualization / Hyperconvergence Technologies - Best Use Case?

    IT Discussion
    maxta pernix storage atlantis
    • NetworkNerd

      After seeing some storage virtualization vendors at Spiceworld like Infinio and Maxta, it makes me wonder how applicable / valuable those types of technologies would be to the SMB. I've heard of Pernix Data as well as Atlantis, who also do this kind of thing. It's normally some kind of software / virtual appliance that pools your local storage / NAS / DAS / SAN together and uses host RAM or other storage as a cache. I'm not saying they all do this, but that has been my general takeaway. Some advertise deduplication and storage reclamation. I think you can install most of these non-intrusively in your VMware environment.

      So, if you were starting over with server virtualization at your company, would you look at maybe getting servers with all local storage, 7200 RPM drives, and using one of these software technologies over going to 10K SAS, SSDs, or vSAN?

      Has anyone out there used any of these technologies? Should we be pointing to these kinds of things rather than suggesting something beyond local storage?

      I'd love to get your feedback because these technologies are very interesting to me.

      • A Former User

        I have generally used local SAS drives for virtualization anyway; it provides another level of protection over using one SAN for everything. We had a SAN, but it was used for database stuff only. Not sure I'd go with 7200 RPM. I'd stay at the happy medium of 10K instead of 15K.
        I have used 7200 RPM / 10K enterprise SATA drives for file servers where large files are needed, with no problems. I'm not sure how well a 7200 RPM drive, even in RAID, would handle multiple VMs. Booting them up would be the major issue there, though; after boot it might handle most things fine.

        • scottalanmiller @NetworkNerd

          @NetworkNerd said:

          Has anyone out there used any of these technologies? Should we be pointing to these kinds of things rather than suggesting something beyond local storage?

          These are actually pretty much the only technologies that you would want to be looking at for virtualization today. Using NAS or SAN is very passé and technically not ideal unless you are using them as part of a much larger storage strategy in which virtualization is only one portion. Using SAN or NAS specifically for virtualization has never actually been a good use case. The practice came from the enterprise space, where large SANs were already in place and heavily used for non-virtualization needs, and virtualization simply leveraged an existing storage framework because it was there, not because it was ideal.

          There are exceptions, but it is rare that you would want production VM workloads running on anything but converged, software defined storage once you outgrow the needs of pure local storage - which alone covers most use cases.

          The exceptions start to happen when the environment gets so big that the storage team and the virtualization and platform teams need a complete separation of duties for legal or political reasons. Then hyperconverged is not an option but similar technologies like Gluster, still are.

          • scottalanmiller @A Former User

            @thecreativeone91 said:

            I'm not sure how well a 7200 RPM drive, even in RAID, would handle multiple VMs.

            Drive speed scales roughly linearly with rotational speed: a 7.2K drive is 72% the speed of a 10K drive. So if a 10K drive is 100 IOPS, that makes a 7.2K drive 72 IOPS.

            So if you had a four drive RAID 10 of 10K drives, you have 400 read IOPS. Do a six drive RAID 10 of 7.2K drives and you have 432 read IOPS.

            Drive speed is never a factor on its own. You only select drive speeds as part of a holistic storage subsystem. If you ever get the feeling that a drive is "too slow" or you only want a "happy medium", step back and remember that an individual spindle's rotation rate is only one part of the performance picture.
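            That arithmetic is easy to sketch as a quick calculation. The linear RPM-to-IOPS scaling and the 100 IOPS baseline for a 10K drive are the rough rules of thumb used above, not vendor specifications:

```python
def spindle_iops(rpm, baseline_rpm=10_000, baseline_iops=100):
    """Rough per-drive IOPS, scaling linearly with rotational speed."""
    return baseline_iops * rpm / baseline_rpm

def raid10_read_iops(drive_count, per_drive_iops):
    """RAID 10 can service reads from every spindle in the array."""
    return drive_count * per_drive_iops

print(spindle_iops(7_200))                        # 72.0
print(raid10_read_iops(4, spindle_iops(10_000)))  # 400.0
print(raid10_read_iops(6, spindle_iops(7_200)))   # 432.0
```

            Note that writes behave differently: in RAID 10 every write hits two spindles, so write IOPS are half the read figure.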

            • scottalanmiller @NetworkNerd

              @NetworkNerd said:

              So, if you were starting over with server virtualization at your company, would you look at maybe getting servers with all local storage, 7200 RPM drives, and using one of these software technologies over going to 10K SAS, SSDs, or vSAN?

              These technologies might change the IOPS equation somewhat, but the need for faster spindle speeds, SSDs and other technologies remains. You still need to look at the big picture. No amount of caching can completely overcome drive subsystem speeds.

              • scottalanmiller @NetworkNerd

                @NetworkNerd said:

                After seeing some storage virtualization vendors at Spiceworld like Infinio and Maxta, it makes me wonder how applicable / valuable those types of technologies would be to the SMB.

                Infinio is just a cache. It assumes that you still have external storage; it is designed to accelerate your SAN or NAS to make it work even better, which is a great idea. It does this by using system RAM and CPU, which, in turn, means that you lose those resources for your VMs. It's not without tradeoffs, and it does nothing to change the need for storage; it just makes it possible for existing storage to work better. And it works best in a large pool of virtualization servers, not lone ones (or else the entire cache is only <8GB).

                http://www.infinio.com/product/how-it-works

                Using Infinio eats up two vCPUs and 8GB of RAM on each host. So consider that when looking at the big picture. If you have a single virtualization platform you will lose a tiny bit of CPU performance and 8GB of RAM. If you started with 64GB, your platform just dropped to 56GB. Not exactly a trivial shrinkage. That's between one and eight typical VMs that you can't run because you are adding this cache - per host.
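                The RAM math above is simple to lay out. The 8GB appliance footprint is the figure quoted above; the 64GB host and the 1GB / 8GB "typical VM" sizes are just illustrative:

```python
host_ram_gb = 64    # example host size from the post
cache_ram_gb = 8    # RAM consumed by the cache appliance per host

guest_ram_gb = host_ram_gb - cache_ram_gb
print(guest_ram_gb)  # 56 -- RAM left for guest VMs

# VMs displaced by the appliance, for "typical" 1 GB and 8 GB guests
for vm_gb in (1, 8):
    print(vm_gb, cache_ram_gb // vm_gb)  # 1 GB -> 8 VMs, 8 GB -> 1 VM
```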

                • NetworkNerd

                  I remember Maxta and Pernix as well as Atlantis saying they do storage reclamation and dedupe. But I think each has its own virtual appliance that runs on each host to be able to do this.

                  • NetworkNerd

                    Infinio sounded cool but will only work for NAS or SAN from what I remember - no local storage or DAS (at least not right now).

                    • scottalanmiller @NetworkNerd

                      @NetworkNerd said:

                      Infinio sounded cool but will only work for NAS or SAN from what I remember - no local storage or DAS (at least not right now).

                      DAS should work; I would be pretty surprised if it had any means of detecting whether something was DAS or SAN, since the only difference is whether there is a switch hooked up.

                      • scottalanmiller @NetworkNerd

                        @NetworkNerd said:

                        I remember Maxta and Pernix as well as Atlantis saying they do storage reclamation and dedupe. But I think each has its own virtual appliance that runs on each host to be able to do this.

                        That's pretty much what they would have to do, which is how VSA worked. It's about the only available approach when working in that way.

                        • NetworkNerd

                          I thought the VSA had to be set up a certain way from the beginning and was nearly impossible to add to the cluster later (because a certain amount of storage on each host was reserved to protect against another host failing), whereas these software solutions can be installed in an existing environment non-intrusively and allow you to add hosts / more storage at any time.

                          • scottalanmiller @NetworkNerd

                            @NetworkNerd said:

                            I thought the VSA had to be set up a certain way from the beginning and was nearly impossible to add to the cluster later (because a certain amount of storage on each host was reserved to protect against another host failing), whereas these software solutions can be installed in an existing environment non-intrusively and allow you to add hosts / more storage at any time.

                            Yes, but they are all VMs.

                            • Dashrender @scottalanmiller

                              @scottalanmiller said:

                              Yes, but they are all VMs.

                              I think NetworkNerd is saying that you can't (his and my understanding) add VSA after the fact because the underlying disk that ESXi is using is already partitioned off, so there won't be any free space, or most likely not enough, to implement VSA after the fact?

                              I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?

                              • scottalanmiller @Dashrender

                                @Dashrender said:

                                I didn't know VSA used a VM on each host to do its job. How does it control the disk beneath the other VMs?

                                You can build your own VSA to see how it works. You can do it with Linux or BSD quite easily. You build a virtual NAS (which is what a VSA is) and use DRBD (Linux) or HAST (BSD) to make the cluster work. You share the storage back to the local machine via NFS. Now you have a VM that can provide storage for the other VMs locally.
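                                A minimal sketch of that DIY Linux approach, assuming DRBD 8.4 on two guest nodes; the node names, IP addresses, and the /dev/sdb1 backing partition are all illustrative, not from the thread:

```shell
# /etc/drbd.d/r0.res -- identical on both nodes; names and IPs are examples
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
    protocol C;                  # synchronous replication between nodes
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;     # spare local partition to mirror
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
EOF

drbdadm create-md r0             # initialize DRBD metadata (run on both nodes)
drbdadm up r0                    # bring the resource up (both nodes)
drbdadm primary --force r0       # first-time promotion, on one node only

mkfs.xfs /dev/drbd0              # filesystem on the replicated device (primary node)
mkdir -p /exports && mount /dev/drbd0 /exports

# share the replicated storage back to the hypervisor over NFS
echo '/exports 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
```

                                The hypervisor then mounts that NFS export as a datastore, so VM storage survives the loss of either node.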

                                • NetworkNerd @Dashrender

                                  @Dashrender said:

                                  I think NetworkNerd is saying that you can't (his and my understanding) add VSA after the fact because the underlying disk that ESXi is using is already partitioned off, so there won't be any free space, or most likely not enough, to implement VSA after the fact?

                                  Yep - that's exactly what I meant.

                                  • NetworkNerd @scottalanmiller

                                    @scottalanmiller said:


                                    You can build your own VSA to see how it works. You can do it with Linux or BSD quite easily. You build a virtual NAS (which is what VSA means) and use DRBD (Linux) or HAST (BSD) to make the cluster work. You share the storage to the local machine via NFS. Now you have a VM that can provide storage for the other VMs locally.

                                    "Quite easily" to SAM is not so easy to the person who is semi-familiar with Linux.

                                    • coliver @NetworkNerd

                                      @NetworkNerd said:

                                      "Quite easily" to SAM is not so easy to the person who is semi-familiar with Linux.

                                      It does sound like a cool project for getting more familiar with those technologies, though. If I find some spare hardware I may dig in and test it out.

                                      • art_of_shred @NetworkNerd

                                        @NetworkNerd said:

                                        "Quite easily" to SAM is not so easy to the person who is semi-familiar with Linux.

                                        Something SAM needs to be reminded of occasionally.

                                        • NetworkNerd @art_of_shred

                                          @art_of_shred said:

                                          Something SAM needs to be reminded of occasionally.

                                          That's why you are here, Art - to slap him around a bit. 🙂

                                          • art_of_shred @NetworkNerd

                                            @NetworkNerd said:

                                            That's why you are here, Art - to slap him around a bit. 🙂

                                            Well, I'm here to chew bubble gum and slap people ...and I'm all out of bubble gum.
