Our New Scale Cluster Arrives Tomorrow
-
Getting lots of VMs built out as we get the cluster up and ready for large-scale lab usage.
-
Live Migration is working; we keep using the system even while the nodes update themselves.
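For anyone curious what a live migration looks like down at the KVM layer, here is a minimal sketch using the libvirt Python bindings. Scale drives all of this through its own management layer, so this is just an approximation; the node URIs and VM name are made-up placeholders, not anything from our cluster.

```python
# Minimal live-migration sketch via the libvirt Python bindings.
# Node URIs and the domain name are placeholders, not from our cluster.
import libvirt

src = libvirt.open("qemu+ssh://node1/system")  # node the guest runs on now
dst = libvirt.open("qemu+ssh://node2/system")  # node it should move to

dom = src.lookupByName("lab-vm-01")

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied.
# With purely node-local disks you would also add
# VIR_MIGRATE_NON_SHARED_DISK to stream the storage across as well.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```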
-
So what's Scale using? I assume the hardware is all Dell boxes? I assume there is a VSAN? What hypervisor are they using?
-
@anonymous said:
So what's Scale using? I assume the hardware is all Dell boxes? I assume there is a VSAN? What hypervisor are they using?
KVM.
-
@anonymous said:
So what's Scale using? I assume the hardware is all Dell boxes? I assume there is a VSAN? What hypervisor are they using?
Not VSAN, this is a bit more advanced than that. It is a RAIN storage cluster using proprietary block mirroring. So no need for SAN links between nodes, the storage talks over 10GigE backplane directly node to node to handle storage replication. The VMs are on a local filesystem, not a remote one.
Hypervisor is KVM.
Hardware is all Dell. This is the newest hardware yet, so no one has it but us; it is R430 nodes.
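The mirroring itself is proprietary, but the concept behind that kind of synchronous RAIN replication is straightforward: every write commits to the local disk and to a replica on a peer node over the backplane before it is acknowledged. A purely conceptual sketch, not anything from Scale's actual code:

```python
# Conceptual sketch of synchronous block mirroring (not Scale's code):
# a write only succeeds once the local disk AND a peer node have it.
import os
import socket

def mirrored_write(local_fd: int, peer: socket.socket,
                   offset: int, data: bytes) -> None:
    # 1. Commit the block to the node-local disk.
    os.pwrite(local_fd, data, offset)
    os.fsync(local_fd)

    # 2. Ship the same block to the replica node over the backplane.
    header = offset.to_bytes(8, "big") + len(data).to_bytes(4, "big")
    peer.sendall(header + data)

    # 3. Acknowledge the write only after the replica confirms durability.
    if peer.recv(1) != b"\x01":
        raise IOError("replica node did not acknowledge the write")
```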
-
@scottalanmiller Could I build something like this myself?
-
@anonymous said:
@scottalanmiller Could I build something like this myself?
Well, kind of. Scale is unique in that the storage layer is fully integrated with KVM, so that is not something that you could do yourself. But, in theory, you could use technologies like Gluster or Ceph to replicate some of the behaviour. You could use KVM or Xen on top of that. You would be lacking their "no master interface" front end, but that you can do without.
So from an "all local storage cluster" perspective, yes, you could build your own hyperconverged solution for sure. It would be a different storage implementation than Scale's, but you can still do RAIN. What you will lack is the automatic ability to just add and subtract nodes, but you can do that manually, and how often do you add nodes?
What makes Scale special is the full stack testing and support. The whole thing is built and tested as a unit. All firmware is heavily tested, right down to the drive models. It's pretty extreme - which is why they only offer three models.
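To make the DIY angle concrete: if you went the Gluster route, libvirt can consume a replicated Gluster volume directly as a storage pool, so your KVM guests sit on replicated, node-local storage. A rough sketch; the volume and host names are examples only:

```python
# Rough DIY sketch: register a replicated Gluster volume as a libvirt
# storage pool for KVM guests. Volume and host names are examples only.
import libvirt

POOL_XML = """
<pool type='gluster'>
  <name>vmstore</name>
  <source>
    <host name='node1.lab.local'/>
    <dir path='/'/>
    <name>gv-replica3</name>
  </source>
</pool>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolDefineXML(POOL_XML, 0)
pool.create(0)  # start the pool; guest disks can then be carved out of it
```

Here gv-replica3 stands in for a three-way replicated Gluster volume you would have created beforehand; a Ceph build would look similar using libvirt's RBD pool type instead.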
-
@scottalanmiller I assume they're mega expensive?
-
@scottalanmiller said:
@anonymous said:
So what's Scale using? I assume the hardware is all Dell boxes? I assume there is a VSAN? What hypervisor are they using?
Not VSAN, this is a bit more advanced than that. It is a RAIN storage cluster using proprietary block mirroring. So no need for SAN links between nodes, the storage talks over 10GigE backplane directly node to node to handle storage replication. The VMs are on a local filesystem, not a remote one.
Hypervisor is KVM.
Hardware is all Dell. This is the newest hardware yet, so no one has it but us; it is R430 nodes.
I'm assuming it's proprietary? A shared backplane between three+ hosts and two+ sets of disks? Is it, or is it not using a 10 GbE switch (or two) to connect all of the equipment?
AKA could someone buy off-the-shelf Dell equipment and replicate this?
-
@anonymous said:
@scottalanmiller I assume they're mega expensive?
Not really. Ours is a bit more expensive as we are not on the entry level model and we are going up to six nodes, three of them being pure SSD. So ours is not indicative. But when you consider the hardware cost of the three high-end R430 nodes, the 10GigE cards and such that you would need if building your own, and that you get support, it's pretty competitive. Quite hard to build your own system to compete with it, in fact. Trying to match the reliability and storage efficiency and performance would be hard. You'd likely have to overbuild significantly. If you don't need HA or don't need anywhere near the "scale", obviously you would be overbuying. But once you get into their range, they are extremely cost effective.
-
@Dashrender said:
AKA could someone buy off-the-shelf Dell equipment and replicate this?
All of the hardware is off the shelf. It's all Dell line items that could be ordered from Dell directly. Except for the Scale faceplates, of course.
-
@Dashrender said:
Is it, or is it not using a 10 GbE switch (or two) to connect all of the equipment?
10GigE Dell switch on ours for the backplane. It's a dedicated node communications channel, not open to the LAN, which is also 10GigE.
-
In a standard setup, each node has 2x 10GigE links to the backplane and 2x 10GigE links to the LAN.
-
@scottalanmiller said:
In a standard setup, each node has 2x 10GigE links to the backplane and 10x 10GigE links to the LAN.
Each node has 10, count them, 10 links to the LAN? You need 30 10GigE ports - damnation! That's a huge chunk of change!
-
Sorry, slipped into binary there. 2 in decimal.
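For anyone who wants the conversion spelled out, a one-liner (Python here purely as a calculator):

```python
int("10", 2)  # == 2; the string "10" read as base-2 is two in decimal
```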
-
@scottalanmiller said:
Sorry, slipped into binary there. 2 in decimal.
ROFL - OK, that makes more sense. I was thinking, my god, what would you need 100 Gb of network access for, per host?
2, I'm guessing, mostly for redundancy.
-
@Dashrender Yes, it is active/passive.
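If you were replicating that on a plain Linux/KVM host, the equivalent is an active-backup bond, and the kernel reports which link is currently live. A quick sketch, assuming a bond named bond0 (an example name, not Scale's config):

```python
# Peek at an active-backup (active/passive) bond on a plain Linux host.
# Assumes the bond is named bond0; that name is just an example.
from pathlib import Path

for line in Path("/proc/net/bonding/bond0").read_text().splitlines():
    if line.startswith(("Bonding Mode", "Currently Active Slave")):
        print(line)

# Typical output for an active/passive pair:
#   Bonding Mode: fault-tolerance (active-backup)
#   Currently Active Slave: eth0
```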
-
@scottalanmiller said:
@Dashrender said:
Is it, or is it not using a 10 GbE switch (or two) to connect all of the equipment?
10GigE Dell switch on ours for the backplane. It's a dedicated node communications channel, not open to the LAN, which is also 10GigE.
Is there a redundant switch in the backplane?
-
@coliver said:
@scottalanmiller said:
@Dashrender said:
Is it, or is it not using a 10 GbE switch (or two) to connect all of the equipment?
10GigE Dell switch on ours for the backplane. It's a dedicated node communications channel, not open to the LAN, which is also 10GigE.
Is there a redundant switch in the backplane?
Not at the moment on ours, but normally yes of course.