HP Storage D2D4324 NFS slow XenServer
-
WordPress Application High Availability
WordPress is a web application that relies on a database for its data. The WordPress application itself is stateless (it does not change under normal operations). Because of this, there is no need for "nodes" in a WordPress cluster to "talk" to each other. They do not even need to be similar. You could have three nodes, one running WordPress on Windows, one running WordPress on Apache on FreeBSD, and one running WordPress on Nginx on CentOS, and they all act the same and you can load balance between them. The stateless nodes have nothing to say to each other, so there is no need for them to be the same (other than the WP code itself).
So making the WordPress application layer highly available is super easy. It is as simple as running two or more instances, pointing your load balancer at them as a pool, sending traffic to each as needed, and removing any that stop responding from the pool. Easy peasy.
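To make that concrete, here is a minimal sketch of the kind of pool management a load balancer does for stateless nodes: probe each one, keep the responders, drop the rest. The hostnames and the plain HTTP probe are just assumptions for illustration, not part of any real deployment.

```python
# Minimal sketch of load balancer pool management for stateless WordPress
# nodes: probe each node, keep the responders, drop the rest.
# Hostnames are hypothetical; a real balancer would also re-add recovered nodes.
from urllib.request import urlopen
from urllib.error import URLError

NODES = [
    "http://wp-node1.example.com",   # e.g. WordPress on Windows
    "http://wp-node2.example.com",   # e.g. Apache on FreeBSD
    "http://wp-node3.example.com",   # e.g. Nginx on CentOS
]

def healthy_nodes(nodes, timeout=2):
    """Return only the nodes that answer an HTTP probe within the timeout."""
    alive = []
    for url in nodes:
        try:
            with urlopen(url, timeout=timeout) as resp:
                if resp.status < 500:
                    alive.append(url)
        except (URLError, OSError):
            pass  # node is down or unreachable, so leave it out of the pool
    return alive

if __name__ == "__main__":
    print("Sending traffic to:", healthy_nodes(NODES))
```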
Because it is stateless, you have a few choices for making the different copies of WP identical.
You can...
- Do it by hand
- Use a simple tool like rsync to take a master copy and make the others identical to it (sketch below)
- Build each node pristine each time using a tool like Ansible, Chef or Puppet
- Automate using a custom script
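As an illustration of the rsync option, here is a small sketch that pushes a master copy of the WordPress document root to the other nodes over SSH. The hostnames and paths are hypothetical; adjust them to whatever your nodes actually use.

```python
# Sketch of the rsync option: mirror the master's WordPress document root
# to every other node. Hostnames and paths are hypothetical.
import subprocess

MASTER_DOCROOT = "/var/www/wordpress/"   # trailing slash: sync the contents
TARGET_NODES = ["wp-node2.example.com", "wp-node3.example.com"]

def sync_node(host):
    """Mirror the master docroot to one node over SSH using rsync."""
    subprocess.run(
        ["rsync", "-az", "--delete",      # archive mode, compress, mirror deletions
         MASTER_DOCROOT, f"{host}:/var/www/wordpress/"],
        check=True,
    )

if __name__ == "__main__":
    for node in TARGET_NODES:
        sync_node(node)
```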
And, being stateless, WordPress could also safely be made highly available using hypervisor layer high availability tools, but that would be silly in a case like this: you would give up load balancing to do it, which is just throwing away your value. So here it would not make sense.
-
MySQL or MariaDB High Availability
The database portion of your WordPress stack is the critical one. Unlike the stateless application server, the database is stateful: it is constantly changing, is mutable, and cannot be protected without knowing its current state. This means that tools like platform layer high availability cannot protect it well, because they will treat the database as having crashed and could corrupt it or lose data during a failover. Not ideal. Nor will they allow for load balancing, which we often will not do for the DB anyway, but they eliminate that option.
For the database, we need the database instances to speak to each other and keep the database nodes (two or more) synchronized, with identical data on every node. We can do that in a master/slave way (aka active/passive), or we can do it in a multi-master way, which is far more complex. Either way, this has to be done in the database itself.
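For the master/slave (active/passive) case, the thing you end up watching is how far behind the slave is. Here is a rough sketch of that check; the hostname and credentials are hypothetical and it assumes the pymysql package is available.

```python
# Rough sketch of monitoring a master/slave (active/passive) MySQL or MariaDB
# pair: ask the slave how far behind the master it is.
# Hostname and credentials are hypothetical; assumes the pymysql package.
import pymysql

def replication_lag(host, user, password):
    """Return the slave's lag in seconds, or None if replication is not configured."""
    conn = pymysql.connect(host=host, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone()
            if not row:
                return None   # this node is not running as a slave
            return row.get("Seconds_Behind_Master")
    finally:
        conn.close()

if __name__ == "__main__":
    lag = replication_lag("db-slave.example.com", "monitor", "secret")
    print("Replication lag:", lag, "seconds")
```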
-
In both cases above, any shared storage would introduce a single point of failure that does not otherwise exist. Without shared storage, each application copy and each database instance has a full copy of the entire application or dataset, so if one fails, another can take over. Zero data loss, zero shared points of failure.
With the 3-2-1 design (or ANY design with shared storage) any storage corruption OR any failure in the storage node causes the entire stack to be lost - there is no high availability or protection of any significance. The HA aspect is completely skipped in that case.
Shared storage also makes load balancing pointless: the most critical component for performance is the piece that is shared, so "scaling out" doesn't really do very much, because the database delays from the storage remain the same no matter how many database or application nodes you add. It's like hooking more cars together with tow ropes but only engaging the engine in the first car. It doesn't make things go faster, it just makes the single engine work harder (in many cases). This is because you are unlikely to be CPU bound in a case like this.
-
So, from a hardware perspective, you would just want two physical servers (or more if you need greater performance than two can provide, but if that is the case, consider bigger servers rather than more servers). If you feel that you need more than two servers, we should talk about scaling. This site, MangoLassi, handles over two million full thread loads per month and over 160 million resource requests (hits) per month on a small fraction of the resources that you are talking about using here. That is just to give a capacity perspective.
From an operating system perspective, each OS is completely independent and knows nothing about the others, either.
It is the two applications (Apache and MySQL) alone that need their respective layers to be replicated for fault tolerance. No other pieces need to be "cluster aware".
-
This D2D is a paperweight for me. Can I install FreeNAS or something like that on this device?
-
@mroth911 said in HP Storage D2D4324 NFS slow XenServer:
This D2D is a paperweight for me. Can I install FreeNAS or something like that on this device?
Why hurt your situation with FreeNAS? Just put some flavor of Linux you like on it and share the space out!
-
@mroth911 said in HP Storage D2D4324 NFS slow XenServer:
This D2D is a paperweight for me. Can I install FreeNAS or something like that on this device?
Avoid FreeNAS, but "something" might be good: FreeBSD, openSUSE, CentOS, or Ubuntu. I would expect that you can, but you are in totally unsupported territory, trying to treat a device purchased as a black box as a white box. So you are left with a hobby class device, at best. Is there a good reason not to just scrap it? It's a spent device.
-
What do you mean by spent device? I need help setting up HA for my web hosting system. I want to be able to fill up the servers that I have for web hosting, and have automation as well: when clients purchase hosting, their domain/account auto-provisions. Right now I am using cPanel with WHM. So my original thought was to have a NAS/SAN house my VMs and connect the nodes to the NAS/SAN. Now, reading here, I am learning that is not a good idea. So I want to try to reuse the D2D and not throw it away completely.
-
@mroth911 What, exactly, is the make/model of this D2D device?
-
@mroth911 said in HP Storage D2D4324 NFS slow XenServer:
What do you mean by spent device? I need help setting up HA for my web hosting system.
It's a device that depends on its black box nature and support from the vendor to be useful. It no longer has that and is now a useless device in a business setting, at least for production use.
-
@mroth911 said in HP Storage D2D4324 NFS slow XenServer:
I want to be able to fill up the servers that I have for web hosting,
So that would be production use. Definitely don't let this device be considered for anything other than archival or backup usage. And even there, I'd be wary.
-
@mroth911 said in HP Storage D2D4324 NFS slow XenServer:
So I want to try to reuse the D2D and not throw it away completely.
Why not just throw it away? It's a spent device; time for it to be recycled as scrap. We regularly send out for recycling devices that are far more useful than this. It falls below what I would bother using even at home, although many would use it there. It's costly to run and problematic to maintain.
If the only intended usage is as a backup target, then perhaps FreeBSD would make sense. But think carefully about anything that has you depending on an unsupported device. What if it dies? What do you do then?
-
@travisdh1 HP Storage D2D4324
-
OK
-
As a non-inline backup or archival unit, I would trust this system if you get it working nicely. Likely FreeBSD or openSUSE will be ideal. If you get it running in that capacity, then sending backups to it will be a great use for it. As long as it is not a dependency for any running system, it would be viable.