Cannot decide between 1U servers for growing company
-
@ntoxicator said:
No true experience with VMware ESXi.
Of all the hypervisors, it is the one to avoid anyway, most of the time. Not that it is bad; it just fails to be meaningfully "as good" as any competitor.
http://mangolassi.it/topic/5082/is-the-time-for-vmware-in-the-smb-over
-
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.
Not for the faint of heart. I've worked with some huge shops that did this and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable. So I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
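For reference before I shelve it, the two-node setup people describe is only a couple of CLI calls. A rough sketch driving the real gluster CLI from Python (the hostnames and brick paths are made up):

```python
import subprocess

# Made-up lab hostnames and brick paths; both nodes need glusterd
# running, and the bricks should be dedicated filesystems.
subprocess.run(["gluster", "peer", "probe", "kvm2.lab.local"], check=True)

# "replica 2" mirrors every file across both bricks. Newer Gluster
# releases warn that two-way replicas are prone to split-brain, which
# matches the caution above.
subprocess.run([
    "gluster", "volume", "create", "vmstore", "replica", "2",
    "kvm1.lab.local:/bricks/vmstore", "kvm2.lab.local:/bricks/vmstore",
], check=True)
subprocess.run(["gluster", "volume", "start", "vmstore"], check=True)
```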
-
@ntoxicator said:
Proxmox reason: it has straight KVM or OpenVZ support, and it also has enterprise features. Why pay more for Citrix XenServer when it's complete BULLSHIT in my eyes? Sorry, I don't see the benefits of Citrix XenServer (KVM based).
XenServer is based on Xen. It's very mature (second only to ESXi), it is the only hypervisor that offers full PV, and it is the choice of the most enterprise of environments (the Amazon, Rackspace and IBM clouds). Xen is super fast, super stable and feature rich. And it is 100% free, even the Citrix packaging of it. I know people here in the community getting a 20% performance increase moving from ESXi to Xen, for example. I've been using Xen for over a decade; it's pretty awesome.
If you are building a platform on your own (nothing packaged), Xen / XenServer is where I would start. ESXi I would just ignore; it rarely makes any sense at all. Hyper-V can be good, but mostly for MS shops wanting to stick with a single vendor or for people looking for specific features. All other things being equal, Xen is my go-to choice due to performance, stability, enterprise support and maturity (and the PV feature rocks).
KVM is much harder to deal with on your own but is great technology, especially good at Windows workloads (Xen is better at Linux ones), and easier for vendors to build automation around, which is why you often find it inside other products.
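That automation point is easy to see: a few lines of libvirt's Python bindings (a minimal sketch, assuming the libvirt-python package and a local KVM host) are enough to enumerate guests, and vendor tooling builds on exactly this kind of API:

```python
import libvirt  # pip install libvirt-python

# A read-only connection to the local KVM/QEMU host suffices to inspect.
conn = libvirt.openReadOnly("qemu:///system")

# List every defined guest and whether it is currently running.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    label = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
    print(f"{dom.name()}: {label}")

conn.close()
```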
These days, having used literally everything out there, at @ntg, where we've been virtualizing on x86 for a long time (more than a decade), we use a mix of Scale and XenServer: Hyper-V only for testing and ESXi only when customers request it.
-
@johnhooks said:
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.
Not for the faint of heart. I've worked with some huge shops that did this and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable. So I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
Well yeah, nothing is going to touch DRBD mirroring. It's full-on network RAID 1. The difference is that Gluster can scale to massive numbers of nodes. Anything other than DRBD for two hosts would be purely for purposes of experimentation.
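If you do test the two-node DRBD route, checking mirror health is trivial. A minimal sketch, assuming DRBD 8.x (which exposes state in /proc/drbd; 9.x uses drbdadm status instead):

```python
def drbd_healthy(proc_path="/proc/drbd"):
    """Return True if the two-node mirror is connected and current."""
    with open(proc_path) as f:
        status = f.read()
    # "cs:Connected" means the peers see each other; "ds:UpToDate/UpToDate"
    # means both replicas hold current data (i.e., true network RAID 1).
    return "cs:Connected" in status and "ds:UpToDate/UpToDate" in status

if __name__ == "__main__":
    print("healthy" if drbd_healthy() else "degraded - check drbdadm status")
```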
-
@scottalanmiller said:
@johnhooks said:
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.
Not for the faint of heart. I've worked with some huge shops that did this and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable. So I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
Well yeah, nothing is going to touch DRBD mirroring. It's full-on network RAID 1. The difference is that Gluster can scale to massive numbers of nodes. Anything other than DRBD for two hosts would be purely for purposes of experimentation.
Have you used Ganeti at all?
-
@ntoxicator said:
Right now the data storage is piped through Citrix XenServer by means of an iSCSI LUN and mapped as a drive associated with the VM. This was not smart on my behalf years ago. I would have been better off just directly attaching a LUN to the Windows server using the iSCSI initiator. Everything was a blur two years ago when I was scrambling to put the build together.
Some thoughts on this bit, knowing that it is ancillary to the main topic (and about to be split off to its own thread...)
- The best option would be to share out directly from the NAS and never get a SAN involved.
- Next best would be NFS or iSCSI to XenServer and then mapped to the VM. This is the "right way" to do it with a VM (sketched below).
- Direct to the Windows VM is a "no no", both in the virtual space (it should always go through the hypervisor, not the guest) and in the Windows world (Windows iSCSI is not the best).
NFS is always preferred over iSCSI here: from the hypervisor side (XenServer, ESXi and KVM all handle NFS natively), from the NAS side (Synology, ReadyNAS, etc. are NFS native, while iSCSI is a secondary function) and from a design complexity standpoint.
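To make the NFS path concrete: hooking a NAS export into XenServer is a single sr-create call. A sketch using the real xe CLI from Python (the NAS address and export path are placeholders):

```python
import subprocess

# Placeholder Synology export; substitute your NAS IP and path.
# sr-create registers the NFS export as a storage repository, and
# XenServer then keeps VM disks on it as plain VHD files.
result = subprocess.run([
    "xe", "sr-create",
    "type=nfs",
    "name-label=Synology NFS",
    "content-type=user",
    "shared=true",
    "device-config:server=10.0.0.50",
    "device-config:serverpath=/volume1/xenstore",
], capture_output=True, text=True, check=True)
print("New SR UUID:", result.stdout.strip())
```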
-
@johnhooks said:
Have you used Ganeti at all?
No
-
@ntoxicator - As a non-sales guy from Scale (office of the CTO), I would be happy to set up a WebEx and go over what we do with HC3 and see if it is a fit for you or not. We can dive to whatever level of technical depth you would like.
-
Thank you everyone for all the information.
Still confused as to why local storage is being recommended over centralized storage on a NAS?
I suppose I just gave up on Citrix XenServer at the 6.1 (free) release. There were still bugs (Windows drivers). Also, for any Linux VMs I install, there is no memory view and it does not calculate total node memory usage correctly.
With NFS storage -- I'm unaware of a practice where I can attach a disk as LOCAL to a Windows server. Keep in mind, our primary domain controller has ALL the network shares that are available to the company. This data rides within an iSCSI LUN which is attached to Citrix XenServer and tied to the VM as a disk.
Might need to split this to a separate thread on network storage and layout....
-
@ntoxicator said:
Thank you everyone for all the information.
Still confused as to why local storage is being recommended over centralized storage on a NAS?
I suppose I just gave up on Citrix XenServer at the 6.1 (free) release. There were still bugs (Windows drivers). Also, for any Linux VMs I install, there is no memory view and it does not calculate total node memory usage correctly.
With NFS storage -- I'm unaware of a practice where I can attach a disk as LOCAL to a Windows server. Keep in mind, our primary domain controller has ALL the network shares that are available to the company. This data rides within an iSCSI LUN which is attached to Citrix XenServer and tied to the VM as a disk.
Might need to split this to a separate thread on network storage and layout....
Why would you want to mount NFS storage locally in Windows? Set it up as usable storage in XenServer (or whatever hypervisor you pick) and store the virtual hard disk on it. This will look like a local disk to Windows but have the pick-up-and-move-wherever-you-want advantage of just being a file (because it is).
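Roughly, adding such a disk looks like this with the xe CLI (the UUIDs are placeholders; look yours up with xe sr-list and xe vm-list):

```python
import subprocess

def xe(*args):
    """Run an xe command and return its trimmed output."""
    return subprocess.run(["xe", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

# Placeholder UUIDs for the NFS SR and the Windows guest.
SR_UUID = "<nfs-sr-uuid>"
VM_UUID = "<windows-vm-uuid>"

# Create a 100 GiB virtual disk on the NFS SR...
vdi = xe("vdi-create", f"sr-uuid={SR_UUID}",
         "name-label=file-share-disk", "virtual-size=100GiB")
# ...attach it to the VM as the next device (0 is usually the OS disk)...
vbd = xe("vbd-create", f"vm-uuid={VM_UUID}", f"vdi-uuid={vdi}",
         "device=1", "mode=RW", "type=Disk")
# ...and hot-plug it. Windows then sees an ordinary local disk to format.
xe("vbd-plug", f"uuid={vbd}")
```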
-
NOTE:
Just spoke with folks at Oracle sales, had a conference call to discuss X5-2 servers and specs. Awaiting pricing.
Also noticed A LOT of IBM System x servers on eBay, newer ones at that. Not a good sign. Also relates back to how IBM didn't trust their own servers.
-
@ntoxicator said:
Still confused as to why local storage is being recommended over centralized storage on a NAS?
Because HA. If you need multiple servers for your VMs to fail over, you need multiples for your storage. Storage is more critical and more fragile than the host servers, so it is where you need to focus HA efforts even more. Your VMs are only as safe as your NAS is, and any NAS under $30K isn't as reliable as a cheap normal server. And there are more points of failure, not just riskier ones.
Check out these articles.
http://www.smbitjournal.com/2013/06/the-inverted-pyramid-of-doom/
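The math behind the inverted pyramid is simple: a VM that needs a host AND a switch AND a NAS to all be up is only as available as the product of the three. The uptime figures below are illustrative, not vendor numbers:

```python
# Illustrative per-component availabilities ("three nines" each).
host = 0.999    # a decent commodity server
switch = 0.999  # the switch between hosts and storage
nas = 0.999     # a low-end NAS is itself just a server

# Chained dependency: the VM is down if ANY layer is down.
print(f"host + switch + NAS: {host * switch * nas:.4%}")  # ~99.70%

# Local storage: the VM depends only on its own host.
print(f"local storage:       {host:.4%}")                 # 99.90%
```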
-
@ntoxicator said:
Thank you everyone for all the information.
Still confused as to why local storage is being recommended over centralized storage on a NAS?
Because it's faster, cheaper and more reliable. And with DRBD or StarWind all local storage is in sync, so if one server node goes down, your storage and remaining servers are still up. If your centralized NAS or SAN goes down, all server nodes are down.
-
@ntoxicator said:
With NFS storage -- I'm unaware of a practice where I can attach a disk as LOCAL to a Windows server. Keep in mind, our primary domain controller has ALL the network shares that are available to the company. This data rides within an iSCSI LUN which is attached to Citrix XenServer and tied to the VM as a disk.
All storage in your VM world should be attached to your host, not to guests. iSCSI lets you do things that you should not be doing here. This is an additional benefit of NFS in this case: it would prevent you from doing bad things.
But if you can present NFS to the VMs, you could present SMB directly to the network and bypass the extra layer, gaining speed, simplicity and reliability that way too. So while you would want to use NFS when talking to the VMs / VM host, in this case you would want to bypass that extra step completely.
It is the fact that it is on a LUN now that is limiting you. If it were on a NAS instead of a SAN, you'd have more options.
-
@ntoxicator said:
NOTE:
Just spoke with folks at Oracle sales, had a conference call to discuss X5-2 servers and specs. Awaiting pricing.
Also noticed A LOT of IBM System x servers on eBay, newer ones at that. Not a good sign. Also relates back to how IBM didn't trust their own servers.
I am not super surprised, as I was looking at the specs on the x3250 M5 yesterday and was floored by how outdated they are compared to Dell, HP, SM, etc.
-
I'm aware of this - and that is the point I was getting across.
With the iSCSI initiator, I COULD attach it as a local disk and connect directly, taking advantage of near-full network speed with less overhead.
In my opinion, there would be more overhead.
iSCSI LUN attached to the Xen hypervisor > VM > attached as a local disk. Unless pass-through?
Furthermore, the issue stands.
With the primary data being on the Citrix XenServer as a local disk (iSCSI LUN storage), if I were to migrate to NFS storage mounted to XenServer, I would attach it as a NEW disk to that virtual machine, mount it within Windows and format it. Then I'll be stuck 'xcopy'-ing the data & permissions over to this new storage drive.
This is an issue now, as the Citrix XenServer has storage ties to our original Synology 4-bay NAS.
I've been wanting to move ALL our LUNs and data to our newer, larger Synology NAS, and then use the original 4-bay for replication/backup.
-
@ntoxicator said:
Still confused as to why local storage is being recommended over centralized storage on a NAS?
Because a standard NAS isn't any more reliable than a standard server... mostly because they are standard servers with special software thrown on top. Why would you worry about a server node dying but not your storage node?
-
@ntoxicator said:
Also noticed A LOT of IBM System x servers on eBay, newer ones at that. Not a good sign. Also relates back to how IBM didn't trust their own servers.
Now that IBM doesn't make or support IBM servers even for customers... the one reason that people had for selecting them is gone.
-
@ntoxicator said:
I'm aware of this - and that is the point I was getting across.
With the iSCSI initiator, I COULD attach it as a local disk and connect directly, taking advantage of near-full network speed with less overhead.
In my opinion, there would be more overhead.
Oh absolutely, there is more overhead. But that overhead is trivial, and it gets handled in a more reliable way (Linux iSCSI is more reliable than Windows iSCSI, storage is better handled at the host than at the guest, and networking has less overhead at the host than at the guest), so this is generally considered not to be a factor at all. But more important are fragility and manageability.
What if you need to pause a VM... how will the VM know to tell the SAN to freeze in this way?
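That coordination gap is exactly why the hypervisor should own the storage: pause at the host, and the hypervisor-attached disks pause with the guest. A libvirt sketch of the host-side operation (the guest name is hypothetical):

```python
import libvirt  # pip install libvirt-python

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("file-server")  # hypothetical guest name

# Suspending at the host freezes the vCPUs and the hypervisor-attached
# disk activity together; a guest-owned iSCSI session gets no such
# signal and can time out or lose in-flight writes.
dom.suspend()
# ... snapshot, migrate, or inspect storage safely here ...
dom.resume()
conn.close()
```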
-
@ntoxicator said:
I've been wanting to move ALL our LUNs and data to our newer, larger Synology NAS, and then use the original 4-bay for replication/backup.
Synology is Supermicro gear. It's just a normal server. If you are okay with having a normal, lower-end enterprise server on which everything rests, why have the other servers at all? Why not go down to a single server for everything? What's the purpose of the additional servers?