
    Starwind AMA Ask Me Anything April 26 10am - 12pm EST

    • dafyre

      Are there plans to continue building the PowerShell library, or will you be moving towards a cross-platform API?

      • ABykovskyi @dafyre

        @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

        Are there plans to continue building the PowerShell library, or will you be moving towards a cross-platform API?

        We expect to release regular updates of the PowerShell library.
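
        For anyone who has not scripted against StarWind yet, here is a minimal sketch of what driving it from PowerShell looks like today. It assumes the StarWindX module that ships with StarWind VSAN; the cmdlet and property names (New-SWServer, $server.Devices) are recalled from StarWind's bundled sample scripts, so treat them as assumptions and check them against the samples in your own build.

        Import-Module StarWindX   # management module installed with StarWind VSAN (name assumed)

        # Connect to the local StarWind service on its default management port.
        # The credentials below are the documented defaults; change them for a real deployment.
        $server = New-SWServer -host 127.0.0.1 -port 3261 -user root -password starwind
        $server.Connect()
        try {
            # List the virtual devices the service currently exposes.
            foreach ($device in $server.Devices) {
                Write-Output $device.Name
            }
        }
        finally {
            $server.Disconnect()
        }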

        • scottalanmiller @TheDeepStorage

          @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

          @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

          So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

          With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
          The short version would be: from our side, the hypervisor is the limit.

          That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.
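
          To make the "1 CSV per node" guidance concrete, here is a rough sketch of the Hyper-V side once the StarWind devices are presented to the cluster as available disks. It is illustrative only and assumes a built failover cluster with the FailoverClusters module; it is not a StarWind-specific procedure.

          Import-Module FailoverClusters

          # Take every disk the cluster can see, add it as a cluster disk,
          # then promote each one to a Cluster Shared Volume.
          Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume

          # Per the guidance above, the CSV count should end up matching the node count.
          (Get-ClusterSharedVolume).Count
          (Get-ClusterNode).Count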

          • dafyre

            For the StarWind Appliances, how will those work for scaling out / adding more storage?

            Will we just be able to add another appliance or will it be more involved than that?

            • dafyre @scottalanmiller

              @scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

              @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

              @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

              So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

              With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
              The short version would be: from our side, the hypervisor is the limit.

              That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.

              But imagine the poor sysadmin who has to configure 64 CSVs... shudder

              • scottalanmiller @dafyre

                @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                @scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

                With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
                The short version would be: from our side, the hypervisor is the limit.

                That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.

                But imagine the poor sysadmin who has to configure 64 CSVs... shudder

                Better than the system admin who loses one SAN and has to explain losing 64 hosts 🙂

                • TheDeepStorage (Vendor) @scottalanmiller

                  @scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                  @TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                  @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                  So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, and is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?

                  With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but Microsoft's best practices suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
                  The short version would be: from our side, the hypervisor is the limit.

                  That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.

                  Definitely. If you can manage this cluster, it will be a very resilient environment.
                  Ultimately, we might consider a promotion: free Xanax for any admin who configures 64 CSVs.

                  • ABykovskyi @dafyre

                    @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                    For the StarWind Appliances, how will those work for scaling out / adding more storage?

                    Will we just be able to add another appliance or will it be more involved than that?

                    StarWind does support scale-out, and the procedure is quite straightforward. You simply add an additional node, thus increasing your storage capacity. Alternatively, you can take another route: just add individual disks to each of the nodes to expand the storage.
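
                    For the "add disks to each node" route, the StarWind device itself is grown through the management console or the PowerShell library, which I won't guess at here; the sketch below only shows the generic Windows follow-up of extending the partition once the underlying device reports its new, larger size. The disk and partition numbers are placeholders.

                    Update-HostStorageCache    # rescan so Windows notices the new device size

                    # Grow the data partition to fill the newly available space (disk 3, partition 2 are placeholders).
                    $size = Get-PartitionSupportedSize -DiskNumber 3 -PartitionNumber 2
                    Resize-Partition -DiskNumber 3 -PartitionNumber 2 -Size $size.SizeMax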

                    • StrongBad

                      How does @StarWind_Software work with NVMe?

                      • Reid Cooper

                        How does the caching work on Starwind storage? I've read that I can use RAM cache, and obviously there are the disks in RAID. Can I have an SSD tier between the two? Can I have multiple tiers like a huge RAID 6 of SATA drives, a smaller RAID 10 of SAS 10Ks, a smaller SSD RAID 5 array and then the RAM on top?

                        • LaMerk (Vendor) @StrongBad

                          @StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                          How does @StarWind_Software work with NVMe?

                          StarWind does work with NVMe. We have just added a significant performance improvement, so write performance is now doubled.

                          • Stuka @dafyre

                            @dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                            Are there plans to continue building the PowerShell library, or will you be moving towards a cross-platform API?

                            It's actually going to be both, as Swordfish is being developed too.

                            • StrongBad

                              What about Storage Spaces Direct (S2D)? Can you talk about how Starwind is competing with that and what the differentiating factors are? Maybe talk about when we would choose one or the other if we are on Hyper-V?

                              • scottalanmiller

                                I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...

                                What are the benefits of the LSFS (Log Structured File System) and when would we want to choose it?

                                • LaMerk (Vendor) @scottalanmiller

                                  @scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                                  I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...

                                  What are the benefits of the LSFS (Log Structured File System) and when would we want to choose it?

                                  The ideal scenario for LSFS is slow spindle drives in RAID 5/50 or RAID 6/60. The benefits are: eliminating the I/O blender effect, snapshots, and a faster synchronization process and better overall performance in non-write-intensive environments (workloads up to roughly 60% read / 40% write).

                                  • Stuka @StrongBad

                                    @StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                                    How does @StarWind_Software work with NVMe?

                                    We're also bringing NVMe storage to a new level. With added iSER support, we have improved hybrid environments with NVMe tiers. For all-NVMe environments, we're also adding NVMf (NVMe over Fabrics) support within a few weeks. Keep in mind that NVMe storage will need a 100 GbE interconnect.

                                    • TheDeepStorage (Vendor) @Reid Cooper

                                      @Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                                      How does the caching work on Starwind storage? I've read that I can use RAM cache, and obviously there are the disks in RAID. Can I have an SSD tier between the two? Can I have multiple tiers like a huge RAID 6 of SATA drives, a smaller RAID 10 of SAS 10Ks, a smaller SSD RAID 5 array and then the RAM on top?

                                      StarWind can utilize both an L1 cache in RAM and an L2 cache on SSDs.
                                      In regards to a specific configuration, as an example: you can have a huge RAID 6 array for your coldest data, then a moderate RAID 10 array of 10k SAS for your day-to-day workloads, a small RAID 5 of SSDs for I/O-hungry databases, and then top it off with RAM caching. That being said, we do not provide automated tiering between these arrays, so you would assign everything to each tier specifically. You could easily use Storage Spaces 2016 with StarWind for that functionality. Just make sure not to use Storage Spaces 2012, since its storage tiering functionality was suboptimal and led us to the decision of not doing automated tiering in the first place.
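
                                      Since the answer points at Storage Spaces 2016 for automated tiering, here is a rough sketch of defining tiers over an existing storage pool. The pool name, tier sizes, and media types are placeholders, and how you layer this with the StarWind devices is left to your design; it is illustrative only, not a StarWind procedure.

                                      # Two tiers over an existing pool named "Pool1" (placeholder name).
                                      $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSD_Tier" -MediaType SSD
                                      $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDD_Tier" -MediaType HDD

                                      # One volume spanning both tiers; the tier optimization job moves hot data onto the SSD tier.
                                      $vol = @{
                                          StoragePoolFriendlyName = "Pool1"
                                          FriendlyName            = "TieredVol"
                                          FileSystem              = "NTFS"
                                          StorageTiers            = $ssd, $hdd
                                          StorageTierSizes        = 200GB, 2TB
                                      }
                                      New-Volume @vol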

                                      • QuixoticJeremy

                                        Oh, thought of another. What connection protocols are supported by Starwind? Like iSCSI, I know, as everyone talks about it. Anything else like SMB?

                                        • TheDeepStorage (Vendor) @QuixoticJeremy

                                          @QuixoticJeremy said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:

                                          Oh, thought of another. What connection protocols are supported by Starwind? Like iSCSI, I know, as everyone talks about it. Anything else like SMB?

                                          Quite a few, actually. We support iSCSI, SMB 3.0, NFS 4.1, and VVols, just to name the main ones.
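
                                          Since iSCSI is the one most people will touch first, here is a quick sketch of attaching a Windows host with the built-in iSCSI initiator cmdlets. The portal address and target filter are placeholders, and in production you would normally add MPIO on top; this is not StarWind-specific.

                                          Start-Service MSiSCSI                                  # make sure the initiator service is running
                                          New-IscsiTargetPortal -TargetPortalAddress 10.0.0.10   # register the target portal IP (placeholder)

                                          # Discover targets on that portal, pick the one of interest, and connect persistently.
                                          $target = Get-IscsiTarget | Where-Object NodeAddress -like "*starwind*" | Select-Object -First 1
                                          Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true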

                                          • jt1001001

                                            I have 2 identical physical hosts that my company is allowing me to set up as a proof of concept for Starwind. As we are a Hyper-V shop, I am looking for a good startup guide for this; it appears this is where I would begin: https://www.starwindsoftware.com/starwind-virtual-san-hyper-converged-2-nodes-scenario-2-nodes-with-hyper-v-cluster ?
