Infrastructure Needed for Hypervisor Cluster
-
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
You take several smaller boxes and create a virtual, larger box out of the individual smaller boxes.
When you say that I think of LPAR combining servers (Bull, Hitachi).
HCI is just about doing for networking and storage what virtualization has already done for computing.
-
@StorageNinja said in Infrastructure Needed for Hypervisor Cluster:
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
You take several smaller boxes and create a virtual, larger box out of the individual smaller boxes.
When you say that I think of LPAR combining servers (Bull, Hitachi).
HCI is just about doing for networking and storage what virtualization has already done for computing. That is the goal: do the same thing that virtualization has done, but across more boxes, with networking.
That's the most simplistic way to explain HCI to laymen.
-
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
@StorageNinja said in Infrastructure Needed for Hypervisor Cluster:
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
You take several smaller boxes and create a virtual, larger box out of the individual smaller boxes.
When you say that I think of LPAR combining servers (Bull, Hitachi).
HCI is just about doing for networking and storage what virtualization has already done for computing. That is the goal: do the same thing that virtualization has done, but across more boxes, with networking.
That's the most simplistic way to explain HCI to laymen.
Sort of, but it then begs the question of "Didn't SAN and VLAN already do that?" And they did, so it's not a great definition all on its own.
-
@StorageNinja said in Infrastructure Needed for Hypervisor Cluster:
3PAR is an active/active symmetric architecture with a full fiber mesh between controllers. Most cases where I've seen issues were tied to firmware on SSDs (specifically the ~4TB Samsung ones), people making giant RAID 5 pools, or people trying to move the array while it's running (yes, this is dumb).
It's good stuff, but it's good enough that a lot of people stop considering it as real hardware, and start thinking of it as magic.
-
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
@StorageNinja said in Infrastructure Needed for Hypervisor Cluster:
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
You take several smaller boxes and create a virtual, larger box out of the individual smaller boxes.
When you say that I think of LPAR combining servers (Bull, Hitachi).
HCI is just about doing for networking and storage what virtualization has already done for computing. That is the goal: do the same thing that virtualization has done, but across more boxes, with networking.
That's the most simplistic way to explain HCI to laymen.
Sort of, but it then begs the question of "Didn't SAN and VLAN already do that?" And they did, so it's not a great definition all on its own.
But SAN and VLAN don't do that when you purchase one SAN and X servers on top of it that connect back to it.
-
@StorageNinja said in Infrastructure Needed for Hypervisor Cluster:
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
You take several smaller boxes and create a virtual, larger box out of the individual smaller boxes.
When you say that I think of LPAR combining servers (Bull, Hitachi).
or grid computing
-
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
But SAN and VLAN don't do that when you purchase one SAN and X servers on top of it that connect back to it.
They still "do for storage what virtualization did for computing" for most people - which is allow consolidation and abstraction.
-
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
But SAN and VLAN don't do that when you purchase one SAN and X servers on top of it that connect back to it.
They still "do for storage what virtualization did for computing" for most people - which is allow consolidation and abstraction.
I suppose if you are going from a bunch of 1U servers with six 300GB 10 NL disks to two 1U servers with 2 disks and a SAN sitting behind them, it looks consolidated...
-
If I were building my own for a lab, I'd install whatever hypervisor on RAID 1 or 10 (or 5, if I can get SSDs) and StarWind VSAN on both of them and go...
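As a back-of-the-envelope check on the RAID choices mentioned above, usable capacity differs quite a bit between levels. A rough sketch of the arithmetic (a hypothetical helper for illustration, ignoring metadata and formatting overhead):

```python
def usable_capacity(level: str, disks: int, size_tb: float) -> float:
    """Usable capacity of an array of `disks` equal drives, in TB.

    RAID 1/10 mirror everything (half the raw space); RAID 5 loses
    one drive's worth of space to parity. Illustrative arithmetic
    only -- real arrays also lose space to metadata and formatting.
    """
    if level in ("1", "10"):
        return disks * size_tb / 2
    if level == "5":
        return (disks - 1) * size_tb
    raise ValueError(f"unsupported RAID level: {level}")

# Compare the options from the post: RAID 1 pair, RAID 10 set, RAID 5 set.
for level, disks in (("1", 2), ("10", 4), ("5", 4)):
    print(f"RAID {level}, {disks} x 1 TB drives -> "
          f"{usable_capacity(level, disks, 1.0)} TB usable")
```

This is part of why RAID 5 is usually paired with SSDs, as the post hedges: it yields more usable space per drive, but its rebuild behavior on large spinning disks is riskier.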
-
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
But SAN and VLAN don't do that when you purchase one SAN and X servers on top of it that connect back to it.
They still "do for storage what virtualization did for computing" for most people - which is allow consolidation and abstraction.
I suppose if you are going from a bunch of 1U servers with six 300GB 10 NL disks to two 1U servers with 2 disks and a SAN sitting behind them, it looks consolidated...
SAN has always been for storage consolidation. That was its only real purpose for a long time. Using it for anything else was a recent concept. SAN's primary functionality from inception to today was "cost savings through consolidation at the expense of all other primary factors such as performance, reliability, etc."
-
@scottalanmiller But my point is, looking at it in layman's terms, seeing 3 boxes versus seeing 6 boxes means "WOOT, I saved money"
When the reality is that it likely cost as much as or more than going with a well-designed, more reliable approach.
-
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
Sort of, but it then begs the question of "Didn't SAN and VLAN already do that?" And they did, so it's not a great definition all on its own.
VLANs don't provide end-to-end transport across long distances (unless you're that insane person who believes in running layer 2 between continents or data centers at the physical underlay, and wants to risk the spanning tree gods destroying your data center). VLANs don't provide portability of networks across sites. VLANs don't provide consistent layer 3 and layer 7 security and edge services between hardware. Yes, I know PVLANs exist, and no, they don't do all, or really any, of this (just useful for guest-to-guest isolation). Microsegmentation, security service insertion, VXLAN gateways and overlays, policies that stick to VMs (or users of VMs) and follow them, etc., fall under modern network virtualization services.
Hypervisors provided similar features to the mainframes of old (LPAR) but did so on generic servers, without the need for proprietary hardware. SANs typically ended up with proprietary disk arrays, and while storage virtualization is a thing, it's generally tied to one proprietary platform that it hairpins through. SDS systems also exist, but you're dedicating compute to those platforms, while HCI is about being able to flex that pool of resources for storage, compute, and networking functions.
Notice I say generic servers and not just x86. ARM HCI is upon us.
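One concrete piece of the overlay story above is how VXLAN sidesteps the 12-bit VLAN ID limit: per RFC 7348, the original L2 frame is prepended with an 8-byte header carrying a 24-bit VNI and carried inside a UDP datagram (port 4789), giving roughly 16 million segments instead of 4094. A minimal sketch of that header (a hypothetical helper for illustration, not any vendor's API):

```python
VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Byte 0: flags (0x08 = "VNI present"); bytes 1-3 reserved;
    bytes 4-6: the 24-bit VXLAN Network Identifier; byte 7 reserved.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return bytes([0x08, 0, 0, 0]) + vni.to_bytes(3, "big") + b"\x00"

# Encapsulation just prepends the header to the inner Ethernet frame;
# the result rides inside a UDP datagram between VTEPs.
inner_frame = b"\x00" * 14  # placeholder L2 frame
packet = vxlan_header(42) + inner_frame
print(vxlan_header(42).hex())  # → 0800000000002a00
```

The 24-bit VNI field is the whole point: it is the tenant/segment ID that a 12-bit VLAN tag cannot scale to, and it is what the overlay gateways and policies in the post attach to.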
-
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
I suppose if you are going from a bunch of 1U servers with six 300GB 10 NL disks to two 1U servers with 2 disks and a SAN sitting behind them, it looks consolidated...
I'm more a fan of not using spinning drives for boot devices. Flash SATADOM or M.2 devices. Even USB/SD cards (slower to boot, and you have to redirect logs) tend to have better thermal resistance than spinning disks.
-
@StorageNinja said in Infrastructure Needed for Hypervisor Cluster:
VLANs don't provide end-to-end transport across long distances (unless you're that insane person who believes in running layer 2 between continents or data centers at the physical underlay, and wants to risk the spanning tree gods destroying your data center). VLANs don't provide portability of networks across sites. VLANs don't provide consistent layer 3 and layer 7 security and edge services between hardware. Yes, I know PVLANs exist, and no, they don't do all, or really any, of this (just useful for guest-to-guest isolation). Microsegmentation, security service insertion, VXLAN gateways and overlays, policies that stick to VMs (or users of VMs) and follow them, etc., fall under modern network virtualization services.
HC doesn't address any of that, either, though.
-
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I'm wanting to use KVM on the nodes, or perhaps Failover Cluster Manager if I wanted to use Hyper-V on the nodes.
-
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I'm wanting to use KVM on the nodes, or perhaps Failover Cluster Manager if I wanted to use Hyper-V on the nodes.
Yeah, that's a way to go. oVirt can be external, too.
-
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I'm wanting to use KVM on the nodes, or perhaps Failover Cluster Manager if I wanted to use Hyper-V on the nodes.
I don't think two nodes are enough if you want to play with clusters. Better to have more nodes with less RAM/CPU and storage. Like 4 or 6 or something.
Maybe try to find a used multi-node server. Many manufacturers make them - Dell, HPE, Fujitsu, IBM, Supermicro, Intel, etc. They're not blade servers, more often 2U servers with 2 or 4 motherboards inside. I guess you could go for blade servers too. Search for "node server" and you'll find them.
PS. I see you have Dell servers. In the Dell world it's the PowerEdge C series servers that are their multinode machines.
-
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I'm wanting to use KVM on the nodes, or perhaps Failover Cluster Manager if I wanted to use Hyper-V on the nodes.
I don't think two nodes are enough if you want to play with clusters. Better to have more nodes with less RAM/CPU and storage. Like 4 or 6 or something.
Maybe try to find a used multi-node server. Many manufacturers make them - Dell, HPE, Fujitsu, IBM, Supermicro, Intel, etc. They're not blade servers, more often 2U servers with 2 or 4 motherboards inside. I guess you could go for blade servers too. Search for "node server" and you'll find them.
PS. I see you have Dell servers. In the Dell world it's the PowerEdge C series servers that are their multinode machines.
I'd love to have more, but two is what I have. I think my initial goal of just learning to build a cluster of more than one server can still be achieved.
-
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I'm wanting to use KVM on the nodes, or perhaps Failover Cluster Manager if I wanted to use Hyper-V on the nodes.
I don't think two nodes are enough if you want to play with clusters. Better to have more nodes with less RAM/CPU and storage. Like 4 or 6 or something.
Maybe try to find a used multi-node server. Many manufacturers make them - Dell, HPE, Fujitsu, IBM, Supermicro, Intel, etc. They're not blade servers, more often 2U servers with 2 or 4 motherboards inside. I guess you could go for blade servers too. Search for "node server" and you'll find them.
PS. I see you have Dell servers. In the Dell world it's the PowerEdge C series servers that are their multinode machines.
I'd love to have more, but two is what I have. I think my initial goal of just learning to build a cluster of more than one server can still be achieved.
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
But I guess you could even do it with just one server and use nested virtualization. The question is how realistic it's going to be compared to a real cluster. But maybe it will be enough.
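The three-node minimum mentioned above comes down to majority quorum: with two voters, losing either one drops the survivor below a majority, so a two-node cluster tolerates no failures without an external witness or tiebreaker. A rough sketch of the arithmetic (illustrative only, not any cluster manager's actual API):

```python
def quorum(nodes: int) -> int:
    """Votes needed for a strict majority of `nodes` cluster members."""
    return nodes // 2 + 1

def survivable_failures(nodes: int) -> int:
    """Nodes that can fail while the survivors still hold quorum."""
    return nodes - quorum(nodes)

for n in (2, 3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, "
          f"tolerates {survivable_failures(n)} failure(s)")
# 2 nodes tolerate 0 failures and 3 nodes tolerate 1, which is why
# "three nodes, or two plus a witness" is the common guidance.
```

Note that going from 3 to 4 nodes buys no extra failure tolerance (both tolerate one loss), which is why odd node counts, or even counts plus a witness vote, are the usual recommendation.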
-
If you wanted to purchase cheap servers (excluding xByte, as they are more for production in terms of cost and warranty), you might get more bang for your buck from a vendor like OrangeComputers.com.