The VSA is the Ugly Result of Legacy Vendor Lock-Out
-
@Aconboy said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@thwr @breffni-potter @travisdh1 - we have just released our 1150 platform, which brings all features and functionality, with both flash and spinning disk, in at a price point under $30k USD for a complete 3-node cluster.
Could you give us some numbers? Like what's included in the 30k? Storage capacity, CPU cores, NICs, upgrade paths...
-
@thwr sure thing
the 1150 ships with a baseline of 8 Broadwell cores per node (E5-2620v4), upgradable to the E5-2640v4 with 10 cores per node. It ships with 64 GB RAM, upgradable to 256 GB per node. It ships with either a 480 GB, 960 GB, or 1.92 TB eMLC SSD per node and three 1, 2, or 4 TB NL-SAS drives per node. Each node has quad gigabit or quad 10-gig NICs. All features and functionality are included (HA, DR, multi-site replication, up to 5,982 snapshots per VM, auto tiering with HEAT staging and destaging, and automatic prioritization of workload IO, to name a few). All 1150 nodes can be joined with all other Scale node families and generations, both forward and back, so upgrade paths are not artificially limited.
-
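As a rough sketch, the drive options above work out to the following raw cluster capacities. (This is raw capacity only; usable capacity after the platform's replication is not stated in the post. The pairing of SSD and NL-SAS sizes into small/medium/large configs is an assumption here; the post doesn't say whether they can be mixed freely.)

```python
# Raw capacity for a 3-node 1150 cluster, from the drive options above.
NODES = 3
HDDS_PER_NODE = 3
# Assumed pairings: (eMLC SSD per node in TB, each NL-SAS drive in TB)
CONFIGS_TB = [(0.48, 1), (0.96, 2), (1.92, 4)]

raw_cluster_tb = []
for ssd, hdd in CONFIGS_TB:
    per_node = ssd + HDDS_PER_NODE * hdd          # one SSD + three NL-SAS
    raw_cluster_tb.append(NODES * per_node)
    print(f"{ssd} TB SSD + 3x {hdd} TB NL-SAS -> "
          f"{per_node:.2f} TB/node, {NODES * per_node:.2f} TB raw cluster")
```

So the smallest config is about 10.4 TB raw across the cluster and the largest about 41.8 TB raw, before any replication overhead.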
@Aconboy said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@thwr sure thing
the 1150 ships with a baseline of 8 Broadwell cores per node (E5-2620v4), upgradable to the E5-2640v4 with 10 cores per node. It ships with 64 GB RAM, upgradable to 256 GB per node. It ships with either a 480 GB, 960 GB, or 1.92 TB eMLC SSD per node and three 1, 2, or 4 TB NL-SAS drives per node. Each node has quad gigabit or quad 10-gig NICs. All features and functionality are included (HA, DR, multi-site replication, up to 5,982 snapshots per VM, auto tiering with HEAT staging and destaging, and automatic prioritization of workload IO, to name a few). All 1150 nodes can be joined with all other Scale node families and generations, both forward and back, so upgrade paths are not artificially limited.

Sounds good, but what about deduplication/compression? For example, I've got a small 3-node Hyper-V cluster right now with roughly 15 TB of (more or less hot) storage. 70% of the VMs are 2008R2 (will be upgraded to 2016 next year); the rest is Linux and BSD.
-
No dedupe or compression on the Scale storage. But you can always do that at a higher layer, with the OS or whatever, if you need it. That works in most cases.
-
@scottalanmiller said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
No dedupe or compression on the Scale storage. But you can always do that at a higher layer, with the OS or whatever, if you need it. That works in most cases.
right! Windows Server has native dedupe these days, so a VM running WS2012R2 will do the trick
-
@travisdh1 said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
Alright, I have to ask. Is StarWind able to get hardware-level drive access like this in Hyper-V? @KOOLER (sorry, forgetting the others around here with StarWind.)
on Hyper-V we run a mix of kernel-mode drivers and user-land services, and we get direct access to hardware
on VMware we run through the hypervisor and eventually "talk" to a VMDK as the data container
-
@travisdh1 said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
@Breffni-Potter said in The VSA is the Ugly Result of Legacy Vendor Lock-Out:
Is this really the case? I'm sceptical that a VMware, Hyper-V, or even a XenServer-based system would have that huge a difference in performance requirements compared with a Scale system.
"24 vCores and up to 300GB RAM (depending on the vendor) just to power the VSA’s and boot themselves vs HES using a fraction of a core per node and 6GB RAM total. Efficiency matters."
Is this genuine or is it a flippant example? If it's genuine...shut up and take my money.
From Starwind's LSFS FAQ
"How much RAM do I need for LSFS device to function properly?
4.6 MB of RAM per 1 GB of LSFS device with disabled deduplication,
7.6 MB of RAM per 1 GB of LSFS device with enabled deduplication."

So, yeah, it could easily eat up that much RAM: ~7.6 GB of RAM per TB of storage.
I didn't spot the CPU recommendation, but I know it's beefy.
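Plugging the FAQ's figures into a quick back-of-the-envelope check (the 4.6/7.6 MB-per-GB numbers are from the FAQ quoted above; the 15 TB figure is the Hyper-V cluster size mentioned earlier in the thread):

```python
# StarWind LSFS RAM overhead, per the FAQ figures quoted above:
#   4.6 MB RAM per 1 GB of LSFS device without deduplication
#   7.6 MB RAM per 1 GB of LSFS device with deduplication
def lsfs_ram_gb(device_tb, dedupe):
    """RAM in GB needed for an LSFS device of device_tb terabytes."""
    mb_per_gb = 7.6 if dedupe else 4.6
    return device_tb * 1024 * mb_per_gb / 1024  # TB -> GB, then MB -> GB

# The ~15 TB of hot storage mentioned earlier in the thread:
print(f"no dedupe:   {lsfs_ram_gb(15, False):.1f} GB RAM")  # 69.0
print(f"with dedupe: {lsfs_ram_gb(15, True):.1f} GB RAM")   # 114.0
```

That matches the ~7.6 GB-per-TB rule of thumb above: for a 15 TB LSFS device with dedupe enabled you would be budgeting over 100 GB of RAM just for the storage layer.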
you don't always use LSFS with StarWind
and if you use LSFS you don't always enable dedupe
and we're offloading hash tables to NVMe flash now, so the upcoming update will have ZERO RAM overhead for dedupe
supported combinations are:

- flash for capacity and RAM for hash tables => FAAAAAAAAAST!!
- spinning disk for capacity and NVMe flash for hash tables => somewhat slower, but that's because of the spinning disk, of course

-
Thanks @KOOLER.
-
@KOOLER Limits, if I recall, are: 1 TB max file size, no virtual machines, post-process only, 32 KB block size (but variable block at least, right?). 2016 should raise the limits.
The advantage to doing data reduction on the back end is that you can dedupe common applications and OS files across virtual machines. That said, flash is so cheap (~55 cents per GB for enterprise-grade storage) that throwing hardware at the problem has its advantages...
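To put the "throwing hardware at the problem" point in numbers, here is the cost of simply buying flash for the 15 TB example from earlier in the thread, at the ~55 cents/GB figure quoted above (raw capacity only; redundancy and growth headroom would add to this):

```python
# Cost of covering the thread's 15 TB example with enterprise flash,
# at the ~$0.55/GB figure quoted above.
price_per_gb = 0.55          # USD per GB, from the post
capacity_gb = 15 * 1024      # ~15 TB of hot storage
flash_cost = capacity_gb * price_per_gb
print(f"${flash_cost:,.0f}")  # $8,448
```

So roughly $8–9k buys the whole working set on flash outright, which is why skipping dedupe entirely can be a defensible trade-off at this scale.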
-
They did this for the sole reason that it was the only way to keep delivering their solutions on top of the legacy vendors, given their lock-out and lack of access.
Not sure where this information came from.
PernixData, SanDisk's FlashSoft, and ScaleIO have all used kernel modules with vSphere...The reason these vendors use VSAs is a combination of factors, the largest of which is that writing kernel code is hard...