Starwind/Server Limitations
-
Paging @Oksana
-
@Jimmy9008 said in Starwind/Server Limitations:
I figure the new servers would be seeing the same performance as if they were connected to a physical SAN.
Each with its own extremely low latency, extremely high performance SAN, yes. So.... nothing like a SAN, basically, lol.
-
@Jimmy9008 said in Starwind/Server Limitations:
My worry is that, as the existing hosts are already running VMs and StarWind while holding the data, they could become a bottleneck.
The bottleneck scaling up is your switch. Just make sure you don't exhaust the backplane.
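A quick back-of-the-envelope check can show how close the design sits to the switch/uplink budget. The figures below (10 GbE uplinks, 400 MB/s of writes, synchronous mirroring doubling write traffic) are illustrative assumptions, not measurements from this environment:

```python
# Rough sanity check: aggregate iSCSI + StarWind sync traffic vs. link capacity.
# Every number here is an assumption for illustration -- substitute your own.

GBIT = 1_000_000_000  # bits

def link_bytes_per_sec(gbe: int) -> float:
    """Usable bytes/s of an Ethernet link, assuming ~95% efficiency
    after protocol overhead (illustrative figure)."""
    return gbe * GBIT / 8 * 0.95

uplink = link_bytes_per_sec(10)   # budget of one 10 GbE port, ~1.19 GB/s
write_load = 400_000_000          # assumed 400 MB/s of writes from the new cluster
sync_traffic = write_load         # synchronous mirror copy to the partner node
total = write_load + sync_traffic

print(f"Per-port budget: {uplink / 1e9:.2f} GB/s")
print(f"Estimated load:  {total / 1e9:.2f} GB/s "
      f"({100 * total / uplink:.0f}% of one 10 GbE port)")
```

With these assumed numbers the mirrored write stream alone consumes roughly two-thirds of a single 10 GbE port, before any read traffic from the existing cluster's VMs, which is why measuring actual utilization first matters.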
-
@scottalanmiller said in Starwind/Server Limitations:
@Jimmy9008 said in Starwind/Server Limitations:
My worry is that, as the existing hosts are already running VMs and StarWind while holding the data, they could become a bottleneck.
The bottleneck scaling up is your switch. Just make sure you don't exhaust the backplane.
I'll take a look. Traffic is quite light, but will see what metrics I can get.
With the vSAN, having the second set of drives in the chassis as the storage for the second cluster, would we expect to see a bottleneck on the hosts at all? I am running Live Optics to see what they are currently doing...
-
@Jimmy9008 said in Starwind/Server Limitations:
The thing is though, this storage will not be used by the Failover Cluster on the existing hosts. I am looking to purchase additional hosts, add them to the iSCSI network, and build a new cluster using the vSAN storage on the existing nodes.
Here is the big question.... why?
-
@scottalanmiller said in Starwind/Server Limitations:
@Jimmy9008 said in Starwind/Server Limitations:
The thing is though, this storage will not be used by the Failover Cluster on the existing hosts. I am looking to purchase additional hosts, add them to the iSCSI network, and build a new cluster using the vSAN storage on the existing nodes.
Here is the big question.... why?
Why, which part?
-
@Jimmy9008 said in Starwind/Server Limitations:
With the vSAN, having the second set of drives in the chassis as the storage for the second cluster, would we expect to see a bottleneck on the hosts at all? I am running Live Optics to see what they are currently doing...
Oh yes, I didn't notice all the details. The new hosts that don't have their own storage will just be using a SAN. Actually using a SAN in the "don't do that" kind of way that we always say. Will it work? Yes. But it's just a SAN. Not a vSAN, a SAN.
-
@scottalanmiller said in Starwind/Server Limitations:
@Jimmy9008 said in Starwind/Server Limitations:
With the vSAN, having the second set of drives in the chassis as the storage for the second cluster, would we expect to see a bottleneck on the hosts at all? I am running Live Optics to see what they are currently doing...
Oh yes, I didn't notice all the details. The new hosts that don't have their own storage will just be using a SAN. Actually using a SAN in the "don't do that" kind of way that we always say. Will it work? Yes. But it's just a SAN. Not a vSAN, a SAN.
Correct. It's a SAN, not a vSAN. But in this case, as the storage is presented to the cluster over the network from StarWind and is redundant, it's better than a single SAN. I just want to be sure that the additional I/O from the new cluster won't cause any performance issues for the existing cluster's VMs that sit on these hosts.
-
So, the cluster storage is mirrored between vSAN host 1 and vSAN host 2, and is then attached to the cluster. Plus, redundant switches. So in this case, no IPOD design, as we can lose one of anything and stay up.
-
@Jimmy9008 said in Starwind/Server Limitations:
But in this case, as the storage is presented to the cluster over the network from StarWind and is redundant, it's better than a single SAN.
Oh yes, it's a SAN cluster, but you are "always" supposed to have a cluster when having a SAN as a starting point anyway.
The only real factor here is that it "already exists." Except it doesn't, actually: you are adding drives specifically to use them this way rather than putting those drives in the machines that will use them.
-
@Jimmy9008 said in Starwind/Server Limitations:
So, the cluster storage is mirrored between vSAN host 1 and vSAN host 2, and is then attached to the cluster. Plus, redundant switches. So in this case, no IPOD design, as we can lose one of anything and stay up.
You are misunderstanding how an IPOD works. It's still an IPOD, it's still an inverted pyramid. It's better than a totally misdesigned IPOD, it's a "proper IPOD", but still 100% an IPOD.
-
@Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.
Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?
-
@scottalanmiller said in Starwind/Server Limitations:
@Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.
Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?
Licensing from the sounds of it.
-
@coliver said in Starwind/Server Limitations:
@scottalanmiller said in Starwind/Server Limitations:
@Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.
Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?
Licensing from the sounds of it.
Correct, by adding to the existing vSAN, that storage side is under the Starwind SLA/Support.
-
@Jimmy9008 said in Starwind/Server Limitations:
@coliver said in Starwind/Server Limitations:
@scottalanmiller said in Starwind/Server Limitations:
@Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.
Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?
Licensing from the sounds of it.
Correct, by adding to the existing vSAN, that storage side is under the Starwind SLA/Support.
I see.
-
@Jimmy9008 said in Starwind/Server Limitations:
@coliver said in Starwind/Server Limitations:
@scottalanmiller said in Starwind/Server Limitations:
@Jimmy9008 in your design, your normal Starwind nodes have one point of failure, no dependencies. But on the "other" nodes, without their own storage, they depend on the SAN, the switches, and themselves. Three points of failure, instead of one.
Why not put the drives directly in the nodes and avoid that? What's the reason to put their drives remotely?
Licensing from the sounds of it.
Correct, by adding to the existing vSAN, that storage side is under the Starwind SLA/Support.
I was wondering if this was the case upon reading the OP.
-
@Jimmy9008
Taking into account the cluster specification you mentioned, StarWind is not the bottleneck; the network configuration might be. For similar setups of StarWind HyperConverged Appliances, we install at least 25 GbE network adapters and switches, though 40 GbE would be preferable. I believe you should benchmark the network performance to see whether the 10 GbE network is the bottleneck for your environment.
As was emphasized, use Live Optics to get a picture of the current performance utilization of your production workloads. Additionally, you might use diskspd to benchmark storage performance and network utilization.
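For anyone following along, a diskspd run against the StarWind-presented volume might look like the sketch below. The drive letter, file size, and workload mix are assumptions to adjust for your environment; the flags themselves are standard diskspd options:

```shell
# Illustrative diskspd invocation (drive letter X: and sizes are assumptions):
#   -c10G  create a 10 GB test file      -b64K  64 KiB block size
#   -d60   run for 60 seconds            -t4    4 threads
#   -o32   32 outstanding I/Os           -r     random access
#   -w30   30% writes / 70% reads        -Sh    disable software and hardware caching
#   -L     collect latency statistics
diskspd.exe -c10G -b64K -d60 -t4 -o32 -r -w30 -Sh -L X:\diskspd-test.dat
```

Running the same parameters against local disk and against the iSCSI-presented volume, and comparing the latency percentiles, would show how much the network path costs you.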
Please tag or DM me should you have any questions.
Have a nice day!