Starwind AMA Ask Me Anything April 26 10am - 12pm EST
-
@dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, or is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?
With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but the best practices of Microsoft suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
The short version would be: from our side, the hypervisor is the limit. That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.
But imagine the poor sysadmin who has to configure 64 CSVs... shudder
Better than the system admin that loses one SAN and has to explain losing 64 hosts
-
@scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
So two nodes is the small end for a hyperconverged appliance (or one that you build yourself with hypervisor + Starwind VSAN), I think I'm clear there. But what about upper limits? A lot of hyperconverged appliances cap out around ten or twelve nodes in a single cluster. What are the limits from the Starwind side if I'm building Hyper-V or VMware clusters? Is it the same, or is it a Starwind limitation? Do I just get to go to the limit of the hypervisor?
With VMware we can push it to the limit of 64 nodes using VVols. With Hyper-V, in theory, StarWind handles 64 nodes just fine, but the best practices of Microsoft suggest 1 CSV per node, which means 64 CSVs, which means one very sad IT admin with a twitching eye and chronic headaches.
The short version would be: from our side, the hypervisor is the limit. That would split up failure domains pretty heavily, though. Impacts to one part of the cluster would not ripple through to other parts.
Definitely, if you can manage this cluster, it will be a very resilient environment.
Ultimately, we might consider a promotion: providing Xanax, free of charge, to any admin who configures 64 CSVs.
-
@dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
For the StarWind Appliances, how will those work for scaling out / adding more storage ?
Will we just be able to add another appliance or will it be more involved than that?
StarWind does support Scale-Out, and the procedure is quite straightforward. You simply add an additional node, increasing your storage capacity. However, you could take another route: just add individual disks to each of the nodes to expand the existing storage.
-
How does @StarWind_Software work with NVMe?
-
How does the caching work on Starwind storage? I've read that I can use RAM cache, and obviously there are the disks in RAID. Can I have an SSD tier between the two? Can I have multiple tiers like a huge RAID 6 of SATA drives, a smaller RAID 10 of SAS 10Ks, a smaller SSD RAID 5 array and then the RAM on top?
-
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does @StarWind_Software work with NVMe?
StarWind does work with NVMe. We have just added a significant performance improvement, so write performance is now doubled.
-
@dafyre said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
Are there plans to continue building the Powershell library, or will you be moving towards a cross-platform API?
It's actually going to be both, as Swordfish is being developed too.
-
What about Storage Spaces Direct (S2D)? Can you talk about how Starwind is competing with that and what the differentiating factors are? Maybe when we would choose one or the other if we are on Hyper-V?
-
I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...
What are the benefits of the LSFS (Log Structured File System) and when would we want to choose it?
-
@scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...
What are the benefits of the LSFS (Log Structured File System) and when would we want to choose it?
The ideal scenario for LSFS is slow spindle drives in RAID 5/50 and RAID 6/60. The benefits are: eliminating the I/O blender effect, snapshots, and boosting the synchronization process and overall performance in non-write-intensive environments (up to a 60% read / 40% write mix).
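To make the "eliminating the I/O blender" point concrete, here is a toy sketch (my own illustration, not StarWind's actual implementation) of the core log-structured idea: every write, whatever its logical offset, becomes a sequential append to a log, with an index mapping logical blocks to log positions. Sequential appends are exactly what slow spindle RAID 5/6 arrays handle well.

```python
# Illustrative sketch of a log-structured layout: random logical writes
# (the "I/O blender" from many VMs) become sequential appends on disk.
# Class and method names are invented for the example.

class LogStructuredStore:
    def __init__(self):
        self.log = []      # append-only log of (logical_block, data)
        self.index = {}    # logical block -> latest position in the log

    def write(self, logical_block, data):
        # Regardless of the logical offset, the write is a sequential append.
        self.log.append((logical_block, data))
        self.index[logical_block] = len(self.log) - 1

    def read(self, logical_block):
        # The index redirects reads to the newest copy in the log.
        return self.log[self.index[logical_block]][1]

store = LogStructuredStore()
# Scattered logical offsets, as mixed VM workloads would produce...
for block, data in [(900, b"a"), (3, b"b"), (42, b"c"), (3, b"d")]:
    store.write(block, data)

print(store.read(3))    # latest version of block 3: b"d"
print(len(store.log))   # 4 sequential appends; stale data awaits cleanup
```

The trade-off this hints at: old versions accumulate in the log and must be garbage-collected later, which is one reason a log-structured layout favors read-heavy workloads.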
-
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does @StarWind_Software work with NVMe?
We're also bringing NVMe storage to a new level. With added iSER support we improved hybrid environments with NVMe tiers. For all-NVMe environments we're also adding NVMf (NVMe over Fabrics) support within a few weeks. Keep in mind NVMe storage will need 100 GbE interconnect.
-
@Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does the caching work on Starwind storage? I've read that I can use RAM cache, and obviously there are the disks in RAID. Can I have an SSD tier between the two? Can I have multiple tiers like a huge RAID 6 of SATA drives, a smaller RAID 10 of SAS 10Ks, a smaller SSD RAID 5 array and then the RAM on top?
StarWind can utilize both L1 cache on RAM and L2 cache on SSDs.
In regards to a specific configuration, as an example: you can have a huge RAID 6 array for your coldest data, then a moderate RAID 10 10k SAS array for your day-to-day workloads, a small RAID 5 of SSDs for I/O-hungry databases, and then top it off with RAM caching. That being said, we do not provide automated tiering between these arrays; you would assign everything to each tier specifically. You could easily use Storage Spaces 2016 with StarWind for that functionality. Just make sure not to use SS 2012, since the storage tiering functionality on it was suboptimal, which led us to the decision of not doing automated tiering in the first place.
-
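The L1 RAM / L2 SSD arrangement described above can be pictured with a small sketch (purely conceptual, not StarWind's code; the class name and the simple FIFO demotion policy are assumptions for the example): reads are served from RAM if possible, then SSD, then the backing arrays, with entries demoted from L1 to L2 as L1 fills.

```python
# Conceptual two-level cache: L1 in RAM (fastest, smallest), L2 on SSD
# (larger), in front of slower backing arrays. Eviction here is plain
# FIFO for brevity; real caches use smarter policies.
from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, backend, l1_size=2, l2_size=4):
        self.backend = backend            # stands in for the RAID arrays
        self.l1 = OrderedDict()           # RAM cache
        self.l2 = OrderedDict()           # SSD cache
        self.l1_size, self.l2_size = l1_size, l2_size

    def read(self, key):
        if key in self.l1:                # L1 hit: served from RAM
            return self.l1[key]
        if key in self.l2:                # L2 hit: promote back into L1
            value = self.l2.pop(key)
        else:                             # miss: fetch from the arrays
            value = self.backend[key]
        self._put_l1(key, value)
        return value

    def _put_l1(self, key, value):
        self.l1[key] = value
        if len(self.l1) > self.l1_size:   # demote oldest L1 entry to L2
            old_key, old_val = self.l1.popitem(last=False)
            self.l2[old_key] = old_val
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)  # evict oldest L2 entry

backend = {n: f"block-{n}" for n in range(10)}
cache = TwoLevelCache(backend)
cache.read(1); cache.read(2); cache.read(3)   # block 1 demoted to L2
print(1 in cache.l1, 1 in cache.l2)           # False True
```

The point the answer makes still holds in the sketch: the cache layers move data automatically, but anything below them (which RAID array a volume lives on) is a manual placement decision.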
Oh, thought of another. What connection protocols are supported by Starwind? Like iSCSI, I know, as everyone talks about it. Anything else like SMB?
-
@QuixoticJeremy said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
Oh, thought of another. What connection protocols are supported by Starwind? Like iSCSI, I know, as everyone talks about it. Anything else like SMB?
Quite a few, actually. We support iSCSI, SMB 3.0, NFS 4.1, and VVols, just to name the main ones.
-
I have 2 identical physical hosts that my company is allowing me to set up as a proof of concept for Starwind. As we are a Hyper-V shop, I am looking for a good startup guide for this; it appears this is where I would begin: https://www.starwindsoftware.com/starwind-virtual-san-hyper-converged-2-nodes-scenario-2-nodes-with-hyper-v-cluster ??
-
@LaMerk said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...
What are the benefits of the LSFS (Log Structured File System) and when would we want to choose it?
The ideal scenario for LSFS is slow spindle drives in RAID 5/50 and RAID 6/60. The benefits are: eliminating the I/O blender effect, snapshots, and boosting the synchronization process and overall performance in non-write-intensive environments (up to a 60% read / 40% write mix).
Do you see any real benefits on RAID 10 arrays? #OBR10
-
@Stuka said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does @StarWind_Software work with NVMe?
We're also bringing NVMe storage to a new level. With added iSER support we improved hybrid environments with NVMe tiers. For all-NVMe environments we're also adding NVMf (NVMe over Fabrics) support within a few weeks. Keep in mind NVMe storage will need 100 GbE interconnect.
Sounds great. I'm not really up on iSER. I know it is like an iSCSI upgrade. When would I / can I use it? What do I need to take advantage of that?
-
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
What about Storage Spaces Direct (S2D)? Can you talk about how Starwind is competing with that and what the differentiating factors are? Maybe when we would choose one or the other if we are on Hyper-V?
S2D is designed for large-scale environments that are more focused on storage efficiency and less focused on performance. Four nodes is the minimum recommended production scenario, and 67%+ storage efficiency is achieved only with more than 6 nodes, AFAIR. If your primary goal is scale, choose S2D. If your primary goal is resiliency and performance, choose StarWind.
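The efficiency figures in that comparison come down to simple arithmetic, which a couple of lines can illustrate (a back-of-envelope sketch; actual S2D resiliency layouts have more nuance than this): a two-way mirror stores every block twice, and a dual-parity layout stores two parity columns alongside the data columns.

```python
# Back-of-envelope storage efficiency: usable capacity / raw capacity.
# Function names are invented for the illustration.

def mirror_efficiency(copies):
    # N copies of every block: only 1/N of raw capacity is usable.
    return 1 / copies

def dual_parity_efficiency(data_columns):
    # Reed-Solomon style: data columns plus two parity columns.
    return data_columns / (data_columns + 2)

print(f"{mirror_efficiency(2):.0%}")        # two-way mirror: 50%
print(f"{dual_parity_efficiency(4):.0%}")   # 4 data + 2 parity: 67%
```

So the "67%+ only above 6 nodes" point follows directly: you need at least 4 data columns plus 2 parity columns (6 fault domains) before dual parity beats two-way mirroring's 50%, and the ratio only improves as more data columns are added.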
-
@jt1001001 said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
I have 2 identical physical hosts that my company is allowing me to set up as a proof of concept for Starwind. As we are a Hyper-V shop, I am looking for a good startup guide for this; it appears this is where I would begin: https://www.starwindsoftware.com/starwind-virtual-san-hyper-converged-2-nodes-scenario-2-nodes-with-hyper-v-cluster ??
Yeah, you're absolutely right. This is exactly the one to start with.
-
@TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does the caching work on Starwind storage? I've read that I can use RAM cache, and obviously there are the disks in RAID. Can I have an SSD tier between the two? Can I have multiple tiers like a huge RAID 6 of SATA drives, a smaller RAID 10 of SAS 10Ks, a smaller SSD RAID 5 array and then the RAM on top?
StarWind can utilize both L1 cache on RAM and L2 cache on SSDs.
In regards to a specific configuration, as an example: you can have a huge RAID 6 array for your coldest data, then a moderate RAID 10 10k SAS array for your day-to-day workloads, a small RAID 5 of SSDs for I/O-hungry databases, and then top it off with RAM caching. That being said, we do not provide automated tiering between these arrays; you would assign everything to each tier specifically. You could easily use Storage Spaces 2016 with StarWind for that functionality. Just make sure not to use SS 2012, since the storage tiering functionality on it was suboptimal, which led us to the decision of not doing automated tiering in the first place.
Okay, so basically there are two cache layers, L1 RAM and L2 SSD array, and then you would have to "manually tier" anything beneath that?
Any options for sending cold storage out to cloud like S3 or B2, which is popular here?