The VSA is the Ugly Result of Legacy Vendor Lock-Out
-
@Breffni-Potter said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
Is this really the case? I'm sceptical that a VMware or Hyper-V or even a XenServer-based system would have that huge a difference in performance requirements compared with a Scale system.
"24 vCores and up to 300GB RAM (depending on the vendor) just to power the VSA’s and boot themselves vs HES using a fraction of a core per node and 6GB RAM total. Efficiency matters."
Is this genuine or is it a flippant example? If it's genuine...shut up and take my money.
From Starwind's LSFS FAQ
"How much RAM do I need for LSFS device to function properly?
4.6 MB of RAM per 1 GB of LSFS device with disabled deduplication,
7.6 MB of RAM per 1 GB of LSFS device with enabled deduplication."
So, yeah, it could easily eat up that much RAM: ~7.6 GB of RAM per TB of storage.
I didn't spot the CPU recommendation, but I know it's beefy.
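The FAQ figures above translate directly into a quick sizing estimate. A minimal sketch, using the quoted 4.6 MB/GB and 7.6 MB/GB overheads (the helper name is ours, not StarWind's):

```python
# Rough RAM estimate for a StarWind LSFS device, using the overhead
# figures quoted from the LSFS FAQ above: 4.6 MB of RAM per GB of
# device without dedupe, 7.6 MB per GB with dedupe enabled.

def lsfs_ram_gb(device_tb: float, dedupe: bool = False) -> float:
    """Return the estimated RAM (in GB) for an LSFS device of device_tb TB."""
    mb_per_gb = 7.6 if dedupe else 4.6
    device_gb = device_tb * 1024
    return mb_per_gb * device_gb / 1024  # convert MB back to GB

# A 10 TB LSFS device with dedupe enabled needs roughly 76 GB of RAM:
print(f"{lsfs_ram_gb(10, dedupe=True):.1f} GB")   # 76.0 GB
print(f"{lsfs_ram_gb(10, dedupe=False):.1f} GB")  # 46.0 GB
```

Which is where the "~7.6 GB of RAM per TB" rule of thumb comes from.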
-
@Breffni-Potter It is absolutely genuine. For example, SimpliVity requires that a minimum of 48 GB of RAM per node be reserved for their VSA on an entry-level node, with the higher-end nodes dedicating 100 GB of RAM to the VSA per node. On some of their older gear, the number was around 150 GB per node. With Nutanix, the number with all features turned off starts at 16 GB per node, but jumps to 32 GB or more per node as features are turned on. It is the same story with all the other VSA-based vendors. Basically, a VSA is not free: it is a virtualized SAN, and they run an instance of it on every node in their architectures, with the associated resource consumption. The VSA didn't eliminate the SAN; it virtualized it, then replicated it over and over.
That is just the RAM side of things. Then there is the CPU core usage associated with each VSA: cores and RAM going to run the VSAs instead of the actual workloads. In HC3, we not only eliminated the SAN, we did so without using a VSA at all, so those "reserved" resources go directly into actually running VMs, all the while streamlining the IO path so that there is a dramatic reduction in the number of hops it takes to do things like change a period to a comma.
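As a sanity check on those per-node reservations, a tiny sketch of what they add up to cluster-wide (the 4-cores-per-VSA figure is our illustrative assumption, not a vendor number):

```python
# Back-of-the-envelope cluster overhead for a VSA-per-node design,
# using per-node RAM reservations like those mentioned above.
# Figures are illustrative; check each vendor's current sizing guides.

def vsa_cluster_overhead(nodes: int, ram_gb_per_vsa: float,
                         cores_per_vsa: int) -> tuple[float, int]:
    """RAM (GB) and cores reserved cluster-wide just to run the VSAs."""
    return nodes * ram_gb_per_vsa, nodes * cores_per_vsa

# e.g. a 4-node cluster reserving 48 GB and (assumed) 4 cores per node:
ram, cores = vsa_cluster_overhead(nodes=4, ram_gb_per_vsa=48, cores_per_vsa=4)
print(ram, cores)  # 192 GB of RAM and 16 cores unavailable to guest VMs
```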
-
@Aconboy said
In HC3, we not only eliminated the SAN, we did so without using a VSA at all, so those "reserved" resources go directly into actually running VMs, all the while streamlining the IO path so that there is a dramatic reduction in the number of hops it takes to do things like change a period to a comma.
I want one now
-
@Breffni-Potter said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@Aconboy said
In HC3, we not only eliminated the SAN, we did so without using a VSA at all, so those "reserved" resources go directly into actually running VMs, all the while streamlining the IO path so that there is a dramatic reduction in the number of hops it takes to do things like change a period to a comma.
I want one NOW
ftfy
-
Just curious: How exactly does your product differ from StarWind Virtual SAN in this context?
-
@thwr said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
Just curious: How exactly does your product differ from StarWind Virtual SAN in this context?
The really, really high-level technical difference is that the VSA / VSAN approach is a layer on top of the hypervisor that has to run as a guest workload. The Scale system puts the storage layer in the same spot that a normal filesystem/LVM would be. It is part of the hypervisor natively and acts just like a filesystem or DRBD. It isn't that it has zero overhead, but it has extremely little, as it's just part of the hypervisor itself.
StarWind will vary heavily between ESXi and Hyper-V, as it requires a full VM on one and not on the other.
-
@scottalanmiller said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@thwr said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
Just curious: How exactly does your product differ from StarWind Virtual SAN in this context?
The really, really high-level technical difference is that the VSA / VSAN approach is a layer on top of the hypervisor that has to run as a guest workload. The Scale system puts the storage layer in the same spot that a normal filesystem/LVM would be. It is part of the hypervisor natively and acts just like a filesystem or DRBD. It isn't that it has zero overhead, but it has extremely little, as it's just part of the hypervisor itself.
StarWind will vary heavily between ESXi and Hyper-V, as it requires a full VM on one and not on the other.
Ah ok, thx
-
@thwr @breffni-potter @travisdh1 - we have just released our 1150 platform, which brings all features and functionality, with both flash and spinning disk, in at a price point under $30k USD for a complete 3-node cluster.
-
@Aconboy said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@thwr @breffni-potter @travisdh1 - we have just released our 1150 platform, which brings all features and functionality, with both flash and spinning disk, in at a price point under $30k USD for a complete 3-node cluster.
Trust me, if we needed more than a single server I'd have a cluster!
-
@Aconboy said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@thwr @breffni-potter @travisdh1 - we have just released our 1150 platform, which brings all features and functionality, with both flash and spinning disk, in at a price point under $30k USD for a complete 3-node cluster.
Could you give us some numbers? Like what's included in the 30k? Storage capacity, CPU cores, NICs, upgrade paths...
-
@thwr sure thing
The 1150 ships with a baseline of 8 Broadwell cores per node with the E5-2620 v4, upgradable to the E5-2640 v4 with 10 cores per node. It ships with 64 GB of RAM, upgradable to 256 GB per node. It ships with either a 480 GB, 960 GB, or 1.92 TB eMLC SSD per node and three 1, 2, or 4 TB NL-SAS drives per node. Each node has quad gigabit or quad 10-gig NICs. All features and functionality are included (HA, DR, multi-site replication, up to 5,982 snapshots per VM, auto-tiering with HEAT staging and destaging, and automatic prioritization of workload IO, to name a few). All 1150 nodes can be joined with all other Scale node families and generations, both forward and back, so upgrade paths are not artificially limited.
-
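For a rough sense of scale, capacity math for the middle drive options in that spec list. The 2x replication factor used for the usable figure is our assumption for illustration; the post doesn't state one:

```python
# Rough capacity math for one 1150 configuration described above:
# per node, one eMLC SSD plus three NL-SAS drives, times three nodes.
# The 2x replication factor is an assumed figure, not from the post.

ssd_tb = 0.96        # middle SSD option, 960 GB
hdd_tb = 2.0         # middle NL-SAS option, 2 TB
hdds_per_node = 3
nodes = 3

raw_tb = nodes * (ssd_tb + hdds_per_node * hdd_tb)
usable_tb = raw_tb / 2  # assuming data is kept in two copies

print(f"raw: {raw_tb:.2f} TB, usable (assumed 2x): {usable_tb:.2f} TB")
# raw: 20.88 TB, usable (assumed 2x): 10.44 TB
```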
@Aconboy said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@thwr sure thing
The 1150 ships with a baseline of 8 Broadwell cores per node with the E5-2620 v4, upgradable to the E5-2640 v4 with 10 cores per node. It ships with 64 GB of RAM, upgradable to 256 GB per node. It ships with either a 480 GB, 960 GB, or 1.92 TB eMLC SSD per node and three 1, 2, or 4 TB NL-SAS drives per node. Each node has quad gigabit or quad 10-gig NICs. All features and functionality are included (HA, DR, multi-site replication, up to 5,982 snapshots per VM, auto-tiering with HEAT staging and destaging, and automatic prioritization of workload IO, to name a few). All 1150 nodes can be joined with all other Scale node families and generations, both forward and back, so upgrade paths are not artificially limited.
Sounds good, what about deduplication / compression? For example, I've got a small 3-node Hyper-V cluster right now with roughly 15 TB of (more or less hot) storage. 70% of the VMs are 2008 R2 (they will be upgraded to 2016 next year); the rest is Linux and BSD.
-
No dedupe or compression on the Scale storage. But you can always do that at a higher layer, with the OS or whatever, if you need it. That works in most cases.
-
@scottalanmiller said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
No dedupe or compression on the Scale storage. But you can always do that at a higher layer, with the OS or whatever, if you need it. That works in most cases.
Right! Windows Server has decent dedupe these days, so a VM running WS2012R2 will do the trick.
-
@travisdh1 said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
Alright, I have to ask. Is StarWind able to get hardware-level drive access like this in Hyper-V? @KOOLER (sorry, forgetting the others around here with StarWind.)
On Hyper-V we'll run a mix of kernel-mode drivers and user-land services, and we'll get direct access to the hardware.
On VMware we'll use the hypervisor and will eventually "talk" to a VMDK with a data container.
-
@travisdh1 said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@Breffni-Potter said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
Is this really the case? I'm sceptical that a VMware or Hyper-V or even a XenServer-based system would have that huge a difference in performance requirements compared with a Scale system.
"24 vCores and up to 300GB RAM (depending on the vendor) just to power the VSA’s and boot themselves vs HES using a fraction of a core per node and 6GB RAM total. Efficiency matters."
Is this genuine or is it a flippant example? If it's genuine...shut up and take my money.
From Starwind's LSFS FAQ
"How much RAM do I need for LSFS device to function properly?
4.6 MB of RAM per 1 GB of LSFS device with disabled deduplication,
7.6 MB of RAM per 1 GB of LSFS device with enabled deduplication."
So, yeah, it could easily eat up that much RAM: ~7.6 GB of RAM per TB of storage.
I didn't spot the CPU recommendation, but I know it's beefy.
You don't always use LSFS with StarWind,
and if you use LSFS you don't always enable dedupe.
And we're offloading hash tables to NVMe flash now, so the upcoming update will have ZERO overhead for dedupe.
Supported combinations are:
- flash for capacity and RAM for hash tables => FAAAAAAAAAST !!
- spinning disk for capacity and NVMe flash for hash tables => somewhat slower, but that's because of the spinning disk, of course
-
Thanks @KOOLER.
-
@KOOLER The limits, if I recall, are 1 TB max file size, no virtual machines, post-process only, and 32 KB block size (but variable block at least, right?). 2016 should raise the limits.
The advantage of doing data reduction on the back end is that you can dedupe common application and OS files between virtual machines. That said, flash is so cheap (~55 cents per GB for enterprise-grade storage) that throwing hardware at the problem has its advantages...
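The "throw hardware at it" point is easy to sanity-check. A quick sketch using the ~55 cents/GB quoted above and, as the example dataset, the ~15 TB mentioned earlier in the thread:

```python
# Raw media cost to hold a dataset on enterprise flash at the quoted
# ~$0.55/GB, assuming no dedupe or compression savings at all.

def flash_cost_usd(data_tb: float, usd_per_gb: float = 0.55) -> float:
    """Cost in USD to buy data_tb TB of flash at usd_per_gb per GB."""
    return data_tb * 1024 * usd_per_gb

# The ~15 TB cluster mentioned earlier in the thread:
print(f"${flash_cost_usd(15):,.0f}")  # $8,448
```

So even without any data reduction, the whole working set fits on flash for well under the price of a node.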
-
They did this for the sole reason that this was the only way to continue providing their solutions based on the legacy vendors and their lock out and lack of access.
Not sure where this information came from.
PernixData, SanDisk's FlashSoft, and ScaleIO have all used kernel modules with vSphere... The reason these vendors use VSAs is a combination of factors, the largest of which is that writing kernel code is hard...