Cannot decide between 1U servers for growing company
-
Define "At a small scale, local storage is generally the only possibly way to have HA."
In my eyes and logic. Seems centralized storage is the way to go.
I'm unsure how you can have HA cluster or setup when the storage needs are localized at the individual server level. Unless of-course, all the data is shared between all servers and replicated.
For instance.
NODE1 - has NFS shares on it
NODE2 & NODE2 - pull data off NODE1's NFS share.NODE1 suddenly goes down? Then what, as that data is localized to that server.
So just shoot me down as a noob on here. Completely changing what I know?
-
@ntoxicator said:
Essentially what I was looking to do was KVM / VM with complete HA.
Several options there: build your own, Proxmox (not a fan, for multiple reasons that I could go into but might not need to... it works but isn't ideal as a product or as a company), or Scale. Scale is the only one that handles HA for you. If you build your own or do Proxmox, you are pretty much limited to doing DRBD on your own, which is a bit of work and requires some expertise. Or you have to get HA storage and HA SAN networking, which means looking at vendors like EMC and 3PAR as starting points, and tons of money.
Scale does all of this, with HA in both the compute and the storage and everything. There are other vendors, like SimpliVity and Nutanix, but neither has the technical stack of Scale and neither focuses on the market that you are in like Scale does.
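To give a feel for the "bit of work" DRBD involves, here is a rough sketch of standing up a minimal two-node resource, driven from Python. The hostnames, IPs and backing disks are made up, and a real deployment has a lot more to it (fencing, the cluster layer on top, etc.):

    # Minimal sketch of a two-node DRBD resource. Hostnames (node1/node2),
    # IPs and backing disks are assumptions - substitute your own. Run on
    # both nodes; needs root and the DRBD 8.x userland installed.
    import subprocess

    # Protocol C is synchronous replication - "network RAID 1".
    R0_CONF = """\
    resource r0 {
        protocol C;
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        on node1 { address 10.0.0.1:7788; }
        on node2 { address 10.0.0.2:7788; }
    }
    """

    def run(cmd):
        """Run a command, echoing it so failures are easy to trace."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Write the resource file where drbdadm expects it.
    with open("/etc/drbd.d/r0.res", "w") as f:
        f.write(R0_CONF)

    run(["drbdadm", "create-md", "r0"])  # initialize metadata on this node
    run(["drbdadm", "up", "r0"])         # attach the disk, connect to the peer
    # Then, on ONE node only, force it primary to kick off the initial sync:
    # run(["drbdadm", "primary", "--force", "r0"])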
-
@ntoxicator said:
Shoot me for getting torn to shreds on here. Pissing contest.
IT gets taken a little too seriously around here, but it's only because we're all really passionate about it. That's part of the community's ... charm.
-
@scottalanmiller said:
@ntoxicator said:
Essentially what I was looking to do was KVM / VM with complete HA.
Several options there: build your own, Proxmox (not a fan, for multiple reasons that I could go into but might not need to... it works but isn't ideal as a product or as a company), or Scale. Scale is the only one that handles HA for you. If you build your own or do Proxmox, you are pretty much limited to doing DRBD on your own, which is a bit of work and requires some expertise. Or you have to get HA storage and HA SAN networking, which means looking at vendors like EMC and 3PAR as starting points, and tons of money.
Scale does all of this, with HA in both the compute and the storage and everything. There are other vendors, like SimpliVity and Nutanix, but neither has the technical stack of Scale and neither focuses on the market that you are in like Scale does.
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.
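Roughly what I had in mind, for the record - a sketch where the host names (kvm1/kvm2) and brick paths are just placeholders, run from kvm1 with Gluster installed on both hosts:

    # Sketch: two-host replica volume for VM images. kvm1/kvm2 and the
    # brick paths are placeholders; needs glusterd running on both hosts.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["gluster", "peer", "probe", "kvm2"])       # join the second host
    run(["gluster", "volume", "create", "vmstore",
         "replica", "2",                            # mirror every file on both
         "kvm1:/data/brick1", "kvm2:/data/brick1"])
    run(["gluster", "volume", "start", "vmstore"])
    # Each KVM host would then mount it, e.g.:
    #   mount -t glusterfs localhost:/vmstore /var/lib/libvirt/images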
-
@ntoxicator said:
Define "At a small scale, local storage is generally the only possibly way to have HA."
At three or fewer physical hosts, there is no reasonable option except for local storage - it is literally impossible for non-subsidized external storage to compete at all. Once you get to four or more physical hosts there start to be possible scenarios where specific situations like giant nodes, special storage needs might make very niche scenarios make sense but only in the most extreme circumstances.
Typically the number you assume is twelve. Until you have at least twelve physical virtualization nodes (means likely around 600+ VMs) you don't even think of looking at external storage. Even at that scale external storage is unlikely, but well worth considering.
-
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.
Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.
-
@ntoxicator said:
Shoot me for getting torn to shreds on here. Pissing contest.
Sorry, I don't mean to tear anyone to shreds, just giving advice and helping to fix some bad information... I've been told I can be a bit abrasive at times.
-
@ntoxicator said:
I'll look into HP servers before Dell. Dell's price/performance concerns me now, as I feel their quality has deteriorated over the years. Supermicro I know is always a good choice, as I've used them for years; it's just the time to configure and build the white-label machines, and then also the warranty/support. It comes at a cost.
I'm still catching up on older posts...
HP, Dell and Supermicro are all good. I've used them all and have been very happy with all of them.
-
@ntoxicator said:
No true experience with VMware ESXi.
Of all of the hypervisors, it is the one to avoid anyway, most of the time. Not that it is bad, it just fails to be meaningfully "as good" as any competitor.
http://mangolassi.it/topic/5082/is-the-time-for-vmware-in-the-smb-over
-
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.
Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable, so I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
-
@ntoxicator said:
Proxmox reason: It has straight KVM or OpenVZ support. It also has enterprise features. Why pay more for Citrix XenServer when it's complete BULLSHIT in my eyes? Sorry... I don't see the benefits of Citrix XenServer (KVM based).
XenServer is based on Xen. It's very mature (second only to ESXi), the only hypervisor that offers full PV, and the choice of the most enterprise of environments (the Amazon, Rackspace and IBM clouds). Xen is super fast, super stable and feature rich. And it is 100% free, even the Citrix packaging of it. I know people here in the community getting a 20% performance increase moving from ESXi to Xen, for example. I've been using Xen for over a decade; it's pretty awesome.
If you are building a platform on your own (nothing packaged), Xen / XenServer is where I would start. ESXi I would just ignore; it rarely makes any sense at all. Hyper-V can be good, but mostly for MS shops wanting to stick with a single vendor or people looking for specific features. All other things being equal, Xen is my go-to choice due to performance, stability, enterprise support and maturity (and the PV feature rocks).
KVM is much harder to deal with on your own but is great technology, especially good at Windows workloads (Xen is better at Linux ones), and is better for vendors to build automation around, which is why you often find it in other products.
These days, having used literally everything out there, at @ntg, where we've been virtualizing on x86 for a long time (more than a decade), we use a mix of Scale and XenServer - Hyper-V only for testing and ESXi only when customers request it.
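As a small taste of why KVM is so friendly for vendors to build automation around: the whole hypervisor is exposed through libvirt. A sketch, assuming the libvirt Python bindings on a local KVM/QEMU host:

    # Inventory the VMs on a KVM host via libvirt. Assumes the
    # libvirt-python bindings and a local qemu/KVM hypervisor.
    import libvirt

    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():
            # info() -> [state, maxMem KiB, mem KiB, vCPUs, CPU time ns]
            state, maxmem, mem, vcpus, cputime = dom.info()
            running = "running" if dom.isActive() else "stopped"
            print(f"{dom.name():20} {running:8} vCPUs={vcpus} mem={mem // 1024} MiB")
    finally:
        conn.close()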
-
@johnhooks said:
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.
Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable, so I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
Well yeah, nothing is going to touch DRBD mirroring. It's full-on network RAID 1. The difference is that Gluster can scale to massive numbers of nodes. Anything other than DRBD for two hosts would be purely for purposes of experimentation.
-
@scottalanmiller said:
@johnhooks said:
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.
Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable, so I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
Well yeah, nothing is going to touch DRBD mirroring. It's full-on network RAID 1. The difference is that Gluster can scale to massive numbers of nodes. Anything other than DRBD for two hosts would be purely for purposes of experimentation.
Have you used Ganeti at all?
-
@ntoxicator said:
Right now the data storage is piped through Citrix XenServer by means of an iSCSI LUN and mapped as a drive associated with the VM. This was not smart on my behalf years ago. I would have been better off just directly attaching a LUN to the Windows server using the iSCSI initiator. Everything was a blur two years ago when I was scrambling to put the build together.
Some thoughts on this bit, knowing that it is ancillary to the main topic (and about to be split to its own...)
- Best option would be to share out directly from the NAS and never get a SAN involved.
- Next best would be NFS or iSCSI to XenServer and then mapped to the VM. This is the "right way" to do it with a VM.
- Direct to the Windows VM is a "no-no", both in the virtual space (it should always go through the hypervisor, not the guest) and in the Windows world (Windows iSCSI is not the best).
NFS is always preferred over iSCSI here: from the hypervisor side (XenServer, ESXi and KVM are all NFS natives), from the NAS side (Synology, ReadyNAS, etc. are all NFS native, while iSCSI is a secondary function), and from a design complexity standpoint.
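For reference, pointing XenServer at an NFS export is a single xe call. A sketch, with a made-up NAS hostname and export path:

    # Attach an NFS export to XenServer as a shared SR via the xe CLI.
    # The NAS hostname (nas01) and export path are assumptions.
    import subprocess

    result = subprocess.run(
        ["xe", "sr-create",
         "name-label=NAS NFS storage",
         "shared=true",
         "content-type=user",
         "type=nfs",
         "device-config:server=nas01",             # your NAS
         "device-config:serverpath=/export/vms"],  # your NFS export
        check=True, capture_output=True, text=True)
    print("Created SR:", result.stdout.strip())    # xe prints the new SR UUID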
-
@johnhooks said:
Have you used Ganeti at all?
No
-
@ntoxicator - As a non-sales guy from Scale (office of the CTO), I would be happy to set up a WebEx and go over what we do with HC3 and see if it is a fit for you or not. We can dive to whatever level of technical depth you would like to go to.
-
Thank you everyone for all the information.
Still confused as to why local storage is being recommended over centralized storage on a NAS?
I suppose I just gave up on Citrix XenServer at the 6.1 (free) release. There were still bugs (Windows drivers). Also, for any Linux VMs I install there is no memory view, and it does not calculate total node memory usage correctly.
With NFS storage -- I'm unaware of a practice whereby I can attach a disk as LOCAL to a Windows server. Keep in mind, our primary domain controller has ALL network shares that are visible to the company. This data rides within an iSCSI LUN which is attached to Citrix XenServer and tied to the VM as a disk.
Might need to split this to a separate thread on network storage and layout....
-
@ntoxicator said:
Thank you everyone for all the information.
Still confused as to why local storage is being recommended over centralized storage on a NAS?
I suppose I just gave up on Citrix XenServer at the 6.1 (free) release. There were still bugs (Windows drivers). Also, for any Linux VMs I install there is no memory view, and it does not calculate total node memory usage correctly.
With NFS storage -- I'm unaware of a practice whereby I can attach a disk as LOCAL to a Windows server. Keep in mind, our primary domain controller has ALL network shares that are visible to the company. This data rides within an iSCSI LUN which is attached to Citrix XenServer and tied to the VM as a disk.
Might need to split this to a separate thread on network storage and layout....
Why would you want to mount NFS storage locally in Windows? Set it up as usable storage in XenServer (or whatever hypervisor you pick) and store the virtual hard disk on it. This will look like a local disk to Windows but have the pick-up-and-move-it-wherever-you-want advantage of just being a file (because it is).
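To make that concrete, roughly what it looks like with the xe CLI - the SR and VM UUIDs below are placeholders you would look up first with "xe sr-list" and "xe vm-list":

    # Sketch: carve a virtual disk out of an NFS SR and hand it to a VM
    # through the hypervisor. SR_UUID and VM_UUID are placeholders.
    import subprocess

    SR_UUID = "<sr-uuid>"  # from: xe sr-list
    VM_UUID = "<vm-uuid>"  # from: xe vm-list

    def xe(*args):
        """Run an xe command and return its stdout (usually a UUID)."""
        out = subprocess.run(["xe", *args], check=True,
                             capture_output=True, text=True)
        return out.stdout.strip()

    # Create the virtual disk as a file on the NFS SR...
    vdi = xe("vdi-create", f"sr-uuid={SR_UUID}",
             "name-label=fileserver-data", "virtual-size=500GiB", "type=user")
    # ...wire it to the VM as its next disk...
    vbd = xe("vbd-create", f"vm-uuid={VM_UUID}", f"vdi-uuid={vdi}",
             "device=1", "mode=RW", "type=Disk")
    # ...and plug it in. Windows just sees a new local disk, but it is
    # really a file on the NFS share that can move anywhere.
    xe("vbd-plug", f"uuid={vbd}")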
-
NOTE:
Just spoke with the folks at Oracle sales; had a conference call to discuss X5-2 servers and specs. Awaiting pricing.
Also noticed A LOT of IBM System x servers on eBay... newer ones at that. Not a good sign. It also relates back to how IBM didn't trust their own servers.
-
@ntoxicator said:
Still confused as to why local storage is being recommended over centralized storage on a NAS?
Because HA. If you need multiple servers for your VMs to fail over, you need multiple servers for your storage. Storage is more critical and more fragile than the host servers, so it is where you need to focus HA efforts even more. Your VMs are only as safe as your NAS is, and any NAS under $30K isn't as reliable as a cheap normal server. And there are more points of failure, not just riskier ones.
Check out this article:
http://www.smbitjournal.com/2013/06/the-inverted-pyramid-of-doom/