HP Storage D2D4324 NFS slow XenServer
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
So basically what you're telling me... for what I am trying to do, I am shooting myself in the foot.
Correct, but don't feel bad; every salesperson in the world pushes this exact design, so you hear it so often that it sounds plausible. I have an article on that, too.
http://www.smbitjournal.com/2016/06/the-emperors-new-storage/
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
@scottalanmiller how would I set up what I am trying to achieve? I want to have cPanel servers with failover.
Let's start with scale. How much server capacity do you need if capacity alone is the factor at hand (does one server handle enough capacity for your needs?) Don't worry about numbers for failover or anything like that... we will tackle that next. How much storage, RAM and rough CPU do you need to get the job done today? And how much do you anticipate in 6 months and two years?
-
@scottalanmiller said in Hp storage D2d4324 nfs slow xenserver:
For XenServer, which has high availability shared storage built in out of the box, you literally need nothing. HA-Lizard is a script that sets things up for you. You can use Starwind for larger scale. HP VSA will work as well. You can use Gluster or CEPH, too.
+++
Also, it's a very common misconception that in a hyperconverged scenario ALL the nodes should participate in providing storage. They don't have to! There can be storage-only nodes, compute-only nodes, and hybrid nodes, in very different configurations. Pretty much all the HCI vendors can do that, especially if they support common uplinks (NFS, SMB3, etc.).
-
So in your expert opinion, what would be the proper setup for what I am trying to achieve? I am trying to create a failover cluster running web servers using cPanel.
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
So in your expert opinion, what would be the proper setup for what I am trying to achieve? I am trying to create a failover cluster running web servers using cPanel.
What you're running as a VM on the hypervisor doesn't affect the design of the hypervisor and fail-over capabilities. (generally)
If you only have the two hosts, you'll want to configure them in an HA pool.
Then the VMs would be able to live migrate between the two hosts without downtime.
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
So in your expert opinion, what would be the proper setup for what I am trying to achieve? I am trying to create a failover cluster running web servers using cPanel.
Well, that's a little complex so let's delve into it. What processes do we need to protect? In most cases we do high availability at the application layer, not at the platform layer. This is where you get the best reliability. What web servers are you running, what kind of applications are on them and what dependencies (like databases) do they have?
-
@DustinB3403 said in Hp storage D2d4324 nfs slow xenserver:
What you're running as a VM on the hypervisor doesn't affect the design of the hypervisor and fail-over capabilities. (generally)
Sort of... but only because more than 50% of the time, what you run tells you that HA has no function at the platform level at all. Web servers, file servers, Active Directory and such you generally let do their own HA, and you avoid hypervisor HA because it interferes with the HA that is already there.
-
We are running WordPress websites, nothing crazy, using MySQL. We are using XenServer, and that's basically it.
The server specs are:
HP DL360 G5, dual quad-core, I believe 32 or 64 GB of RAM.
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
We are running WordPress websites, nothing crazy, using MySQL. We are using XenServer, and that's basically it.
The server specs are:
HP DL360 G5, dual quad-core, I believe 32 or 64 GB of RAM.
MySQL / MariaDB has its own means of doing HA that is more powerful than what the hypervisor can do (the HV can only do crash-consistent failover, whereas the database itself can do true HA and fault tolerance with zero data loss), and making WordPress highly reliable is just a matter of load balancing the traffic.
So in a case like this, I would not have any HA at any level except for the applications. Just have two (or more) host nodes with zero shared infrastructure (no HA at any level, no shared storage, etc.) and let the applications (Apache and MySQL) do their jobs.
-
So are you saying... use HA-Lizard or something like that?
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
So are you saying... use HA-Lizard or something like that?
No, he's not.
If you need to create HA for Active Directory, you buy two servers and install an ADDS server on each. The HA is provided at the software level of Windows, not at the hardware level or hypervisor level.
Scott is telling you that Apache and MySQL can do exactly that - skip the hypervisor HA and only use application HA. You'll also probably want to buy a load balancer to sit in front of these two servers as well.
-
I am starting to get the idea now. Sorry, I am very green with this. Before finding out about this website, everyone I spoke to told me I needed the 3-2-1 concept, which is what I was starting to do. I am glad I was referred to this website. Let me see if I understand... I can take two web servers and have them link to each other, with no shared storage? Is that what I am understanding?
-
@Dashrender said in Hp storage D2d4324 nfs slow xenserver:
If you need to create HA for Active Directory, you buy two servers and install an ADDS server on each. The HA is provided at the software level of Windows, not at the hardware level or hypervisor level.
In the case of AD, even higher, actually. The HA is purely within the application, even Windows doesn't know it's HA.
-
@mroth911 said in Hp storage D2d4324 nfs slow xenserver:
Let me see if I understand... I can take two web servers and have them link to each other, with no shared storage? Is that what I am understanding?
So yes and no. Let's break it down into discrete parts (and tell me if there are more parts that I don't know about, I'm just talking vanilla WordPress right now.)
You have two things that need to be HA here, the application and the database. These are two different things with different needs so we need to talk about them completely separately.
-
WordPress Application High Availability
WordPress itself is a web application; it relies on a database for its data. WordPress itself is stateless (it does not change under normal operations.) Because of this, there is no need for "nodes" in a WordPress cluster to "talk" to each other. They don't even need to be similar. You can have three nodes, one with WordPress running on Windows, one with WordPress running on Apache on FreeBSD and one with WordPress running on Nginx on CentOS, and they all act the same and you can load balance between them. The stateless nodes have nothing to say to each other, so there is no need for them to be the same (other than the WP code itself.)
So making WordPress HA at the application layer is super easy. It is as simple as running two or more instances of it and having your load balancer put them into a pool, send traffic to each as needed, and remove any that stop responding from the pool. Easy peasy.
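To make the "pool with health checks" idea concrete, here is a minimal sketch in Python of what a load balancer is doing conceptually: check each stateless node, drop any that stop responding, and send traffic to whatever is left. The hostnames are made up for illustration; in practice you would use something like HAProxy, Nginx, or a hardware/cloud load balancer rather than a script like this.

```python
import random
import urllib.error
import urllib.request

# Hypothetical pool of identical, stateless WordPress nodes.
POOL = [
    "http://wp-node1.example.local",
    "http://wp-node2.example.local",
]

def healthy_nodes(pool, timeout=2):
    """Return only the nodes that still answer HTTP requests."""
    alive = []
    for url in pool:
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                alive.append(url)
        except (urllib.error.URLError, OSError):
            # Any HTTP error or connection failure means the node
            # simply drops out of the pool; nothing needs to fail over.
            pass
    return alive

def pick_node(pool):
    """Send the next request to any healthy node; they are interchangeable."""
    alive = healthy_nodes(pool)
    if not alive:
        raise RuntimeError("no WordPress nodes are responding")
    return random.choice(alive)

if __name__ == "__main__":
    print("sending this request to:", pick_node(POOL))
```

Because the nodes are stateless, it does not matter which one serves a given request, which is why this style of HA is so simple.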
Because it is stateless, you have a few choices for making the different copies of WP identical.
You can...
- Do it by hand
- Use a simple tool like Rsync to take a master and make the others identical to it
- Build each node pristine each time using a tool like Ansible, Chef or Puppet
- Automate using a custom script
And, being stateless, WordPress can safely be made highly available using the hypervisor-layer high availability tools as well, but doing so is silly in a case like this because you would give up load balancing, which is just throwing away your value. So in this case, that would not make sense.
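As a concrete illustration of option two in the list above (taking a master copy and making the others identical with rsync), here is a minimal sketch, assuming a hypothetical document root and node hostname. In practice this is often just a one-line rsync cron job on the master rather than a script.

```python
import subprocess

MASTER_PATH = "/var/www/wordpress/"           # assumed WordPress document root
SECONDARY_NODES = ["wp-node2.example.local"]  # made-up hostname for illustration

def sync_node(node):
    """Push the master's WordPress files to one secondary node over SSH."""
    cmd = [
        "rsync",
        "-az",        # archive mode, compressed transfer
        "--delete",   # remove files on the target that no longer exist on the master
        MASTER_PATH,
        f"{node}:{MASTER_PATH}",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for node in SECONDARY_NODES:
        sync_node(node)
```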
-
MySQL or MariaDB High Availability
The database portion of your WordPress stack is the critical one. Unlike the stateless application server, the database is stateful - which means that it is constantly in a state of change, is mutable and cannot be protected without knowing its current state. This means that tools like platform layer high availability cannot protect it well because they will treat the database as having crashed and could corrupt it or lose data during a failover. Not ideal. Nor will they allow for load balancing, which we often will not do anyway for the DB, but they eliminate that option.
For the database we need the database applications to speak to each other and keep the database nodes (two or more) synchronized with identical data in both places. We can do that in a master/slave way (aka active/passive) or we can do it in a multi-master way which is far more complex. But this has to be done in the database itself.
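As a rough illustration of the master/slave (active/passive) approach, here is a minimal sketch, assuming the pymysql driver and entirely hypothetical hostnames, credentials, and binlog coordinates. In practice you would normally run these statements by hand in the mysql client, after reading SHOW MASTER STATUS on the master; the point is only that replication lives inside the database itself, not in the hypervisor or storage layer.

```python
import pymysql

def configure_replica():
    """Point a fresh replica at the master and start replicating."""
    conn = pymysql.connect(
        host="db-replica.example.local",      # hypothetical replica host
        user="root",
        password="replica-root-password",     # placeholder credential
    )
    try:
        with conn.cursor() as cur:
            # Tell the replica where the master is and which account to use.
            # Log file and position come from SHOW MASTER STATUS on the master.
            cur.execute(
                "CHANGE MASTER TO "
                "MASTER_HOST='db-master.example.local', "
                "MASTER_USER='repl', "
                "MASTER_PASSWORD='replication-password', "
                "MASTER_LOG_FILE='mysql-bin.000001', "
                "MASTER_LOG_POS=4"
            )
            cur.execute("START SLAVE")
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    configure_replica()
```

If the master fails, you promote the replica and point WordPress at it; no shared storage is involved at any point.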
-
In both cases above, any shared storage would introduce a single point of failure that does not exist naturally. Without shared storage, each application copy and each database instance has a full copy of the entire application or dataset. So if one fails, another can take over. Zero data loss, zero shared points of failure.
With the 3-2-1 design (or ANY design with shared storage) any storage corruption OR any failure in the storage node causes the entire stack to be lost - there is no high availability or protection of any significance. The HA aspect is completely skipped in that case.
Shared storage also makes load balancing pointless, as the most critical component for performance is the piece that is shared, so "scaling up" doesn't really do very much; the database delays from the storage will remain the same no matter how many database nodes or application nodes you add. It's like hooking more cars together with tow ropes but still only engaging the engine in the first car. It doesn't make things go faster, it just makes the single engine work harder (in many cases.) This is because you are unlikely to be CPU bound in a case like this.
-
So, from a hardware perspective, you would just want two physical servers (or more if you need greater performance than two can provide, but if that is the case consider bigger servers rather than more servers.) If you feel that you need more than two servers, we should talk about scaling. This site, MangoLassi, handles over two million full thread loads per month and over 160 million resource requests (hits) per month on a small fraction of the resources that you are talking about using here. Just for a capacity perspective.
From an operating system perspective, each OS is completely independent and knows nothing about the others, either.
It is the two applications (Apache and MySQL) alone that need their respective layers to be replicated for fault tolerance. No other pieces need to be "cluster aware".
-
This D2D is a paperweight for me. Can I install FreeNAS or something like that on this device?