New Infrastructure to Replace Scale Cluster
-
Bankruptcy is always an option...
Seriously, if the options are that starkly horrible.
-
@DustinB3403 said in New Infrastructure to Replace Scale Cluster:
Bankruptcy is always an option...
Seriously, if the options are that starkly horrible.
It is, but that's a bit extreme. If his contracts cover the costs, then you just ride it out. Then look for profit opportunities after that is done.
-
@DustinB3403 said in New Infrastructure to Replace Scale Cluster:
Bankruptcy is always an option...
Seriously, if the options are that starkly horrible.
Don’t, ever, think about running your own business. You are clueless.
-
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
@IRJ said in New Infrastructure to Replace Scale Cluster:
Right. Maybe a loan would make sense here since a contract is in place, and he could reduce his monthly cost and make more margin. Then sell the Scale once everything is migrated over.
Maybe, but honestly I doubt it. He's stuck with X amount of stuff, and it is a lot. He's got enough hardware to probably ride out his obligations at least until the fiber lines expire. If not, he can at least hold off a purchase for a few years until said purchase is way smaller than it would be today (and there's time to budget for it).
After he moves from Scale to the random R510s or whatever, he might sell the Scale and make a small amount on that, too. He just can't do that first. Then he could bank that small amount towards hardware upgrades in three years or whatever.
He can't liquidate the building, generators, HVAC, etc., I suspect. And he can't turn off the lines.
To do anything, he'd have to maintain the costs that he has today. To do anything other than using the hardware that he has, he'd have to either buy new hardware or lease cloud space or whatever. That's all more cost.
I think he is probably right, past decisions have left him needing to use what he has for the time being. There's no way to recoup that cost until the lines expire.
Right. With the new information, the best thing here is to get the Scale unit under a support contract for the remainder of the fiber contract term.
Or, if Scale offers it, look at the single-incident support costs to repair things as (if) they fail over the rest of the term of the fiber contract.
One of those options is your only real choice aside from spinning up new servers in a cluster mode designed for web hosting.
-
Here is my environment and what I would like to be able to do. I have a custom-made app, similar to WP Engine, so that when I get a new client on WordPress, I spin up a VM, set them up, and they're up and running. I am in the process of building my OWN WHMCS; I want to be able to spin up VMs from my website just like Linode does.
I am currently running over 45 VMs: cPanel, custom VMs for WordPress, DCs, CRM, Jira, billing, clients' custom EHR, and a PBX.
Correction: I have 42 VMs running.
Scale specs are 24 cores, 188 GB of RAM, and 10 TB of storage.
-
@mroth911 said in New Infrastructure to Replace Scale Cluster:
Here is my environment and what I would like to be able to do. I have a custom-made app, similar to WP Engine, so that when I get a new client on WordPress, I spin up a VM, set them up, and they're up and running.
So, a dedicated VM per client. Semi-standard model, makes sense. But that's different from what we were imagining with the mention of web hosting.
So in this design, each VM is a self-contained "per user" ecosystem of database, web server, etc.
-
@scottalanmiller Correct
-
@mroth911 said in New Infrastructure to Replace Scale Cluster:
...I am in the process of building my OWN WHMCS...
Like this?
-
@mroth911 said in New Infrastructure to Replace Scale Cluster:
Scale specs are 24 cores, 188 GB of RAM, and 10 TB of storage.
This helps a lot, thanks. Especially the drive capacity.
Are you thinking that you will yank the drives from the Scale and put them into the non-Scale hosts? Or are those well situated with drives already?
-
Yes. My billing system is all automated, so every cPanel server can suspend users, etc.
However, HTML websites are only on cPanel; WordPress sites are on a custom Linux build running HHVM on Ubuntu with Varnish.
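A minimal sketch of that kind of suspension hook, assuming the billing host can SSH to each cPanel/WHM node as root and call WHM's whmapi1 (host names and account names are placeholders, not the actual setup):

```python
# Hypothetical billing hook: suspend a cPanel account on a given WHM server.
# Assumes root SSH access from the billing system; all names are placeholders.
import subprocess

def suspend_cpanel_account(server: str, username: str, reason: str) -> None:
    """Suspend a cPanel account via WHM API 1 (whmapi1 suspendacct)."""
    subprocess.run(
        [
            "ssh", f"root@{server}",
            "whmapi1", "suspendacct",
            f"user={username}",
            f"reason={reason}",
        ],
        check=True,
    )

# Example: the billing system flags an overdue client on a specific node.
suspend_cpanel_account("cpanel01.example.com", "clientuser", "Overdue invoice")
```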
-
Your list looks like it is almost entirely Linux, I'm guessing. But I see DCs mentioned; are those Windows AD DCs? Are they the only Windows workloads, or are there more?
-
Yeah, I am mostly all Linux. Not a fan of Microsoft. The DCs were some of my old computers that I had on a domain; I just haven't had time to migrate stuff and demote the DCs. I also try to stay current in Windows, but there is not enough time. I am running Server 2012. I originally figured I would create 3 Hyper-V servers, but that turned into a cluster nightmare for me.
-
@mroth911 said in New Infrastructure to Replace Scale Cluster:
Yeah, I am mostly all Linux. Not a fan of Microsoft. The DCs were some of my old computers that I had on a domain; I just haven't had time to migrate stuff and demote the DCs. I also try to stay current in Windows, but there is not enough time. I am running Server 2012. I originally figured I would create 3 Hyper-V servers, but that turned into a cluster nightmare for me.
So if I can assume that in this move you can either... 1) finish the demotion and eliminate the Windows machines, or 2) leave them behind on the Scale HC3 to deal with later...
Then I'd recommend looking into LXC containers (with the LXD front end; it just makes things easy). It might be so fast and easy to automate that you want to go this route.
An oVirt / KVM / Gluster cluster could work here, but feels heavy. But it might be the simplest to set up (without throwing money at it.)
But long term, LXC will give you more capacity and what I feel is an easier time automating things.
But the oVirt path has built in failover if you go with Gluster, DRBD or CEPH. Whereas with LXC you are a bit more on your own for that. But doing really rapid recovery might be trivial to script. But still, a little "making it yourself."
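To give a sense of what automating the LXD route could look like, here is a minimal sketch of spinning up a per-client container and running setup inside it. The image alias, container naming, and install command are illustrative assumptions, not a tested recipe:

```python
# Hypothetical per-client provisioning using LXD's CLI. Image alias, naming
# scheme, and the install command are placeholder assumptions.
import subprocess

def provision_wordpress_container(client: str, image: str = "ubuntu:18.04") -> str:
    """Launch an LXC container for a client and run basic setup inside it."""
    name = f"wp-{client}"
    # Create and start the container from the chosen image.
    subprocess.run(["lxc", "launch", image, name], check=True)
    # Install whatever the per-client stack needs (web server, PHP/HHVM, Varnish, etc.).
    subprocess.run(
        ["lxc", "exec", name, "--", "sh", "-c",
         "apt-get update && apt-get install -y nginx"],
        check=True,
    )
    return name

# Example: a new WordPress client signs up and gets their own container.
provision_wordpress_container("acme")
```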
-
LXC is "Linux only", in case that wasn't clear as a limitation.
-
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
@mroth911 said in New Infrastructure to Replace Scale Cluster:
...I am in the process of building my OWN WHMCS...
Like this?
This bit makes me think that LXC might be a really good choice for you. You don't necessarily even need a cluster in the traditional sense. Each LXC node could be standalone, and you could build "simple" logic into your WHMCS clone that looks at average or peak loads and chooses the least-used node for the next deployment.
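A rough sketch of that "least used node" logic, assuming each node has already been added as an LXD remote and using the 1-minute load average as the metric (node names and the metric choice are placeholders):

```python
# Sketch: poll each standalone LXD host for its load average and launch the
# next client container on the quietest one. Nodes are assumed to be added as
# LXD remotes already (lxc remote add ...) and reachable over SSH.
import subprocess

NODES = ["node1", "node2", "node3"]  # hypothetical LXD remotes

def load_average(node: str) -> float:
    """Read a node's 1-minute load average over SSH."""
    out = subprocess.run(
        ["ssh", node, "cat", "/proc/loadavg"],
        check=True, capture_output=True, text=True,
    )
    return float(out.stdout.split()[0])

def deploy_to_least_used(client: str, image: str = "ubuntu:18.04") -> str:
    """Launch the client's container on whichever node is currently quietest."""
    node = min(NODES, key=load_average)
    name = f"wp-{client}"
    # "remote:container" targets the launch at that specific node.
    subprocess.run(["lxc", "launch", image, f"{node}:{name}"], check=True)
    return f"{node}:{name}"
```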
-
LXD does support clustering with failover, if you use CEPH, Gluster, etc.
-
Now using CEPH or Gluster might not prove to be worth it. Local RAID is normally faster and easier during operational times, just not as nice during a failure.
But sometimes "simple, well understood, and easy to support" matters more than automated failover. It is worth considering.
With solid local RAID and LXD management of the nodes, you could just have a good backup and restore system to get a failed node or single VM back up and running in the event of a big failure.
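For example, the backup and restore side could be little more than scheduled lxc export calls, with an lxc import on whatever node is still standing after a failure (a sketch only; the backup path is an assumption):

```python
# Sketch of the backup/restore idea: export each container to a tarball on a
# backup share, then re-import it on any surviving node after a failure.
# The backup mount point is a placeholder assumption.
import datetime
import subprocess

BACKUP_DIR = "/mnt/backups/lxd"  # hypothetical backup mount

def backup_container(name: str) -> str:
    """Write a full backup tarball of a container with 'lxc export'."""
    stamp = datetime.date.today().isoformat()
    target = f"{BACKUP_DIR}/{name}-{stamp}.tar.gz"
    subprocess.run(["lxc", "export", name, target], check=True)
    return target

def restore_container(tarball: str) -> None:
    """Recreate a container from an export tarball with 'lxc import'."""
    subprocess.run(["lxc", "import", tarball], check=True)
```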
-
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
@mroth911 said in New Infrastructure to Replace Scale Cluster:
Scale specs are 24 cores, 188 GB of RAM, and 10 TB of storage.
Sorry, not being familiar at all with Scale, what does that mean? Are the cores/RAM/storage in one node or spread across several? Is this the config for each node, or the total for the cluster?
-
@Pete-S From my understanding, this is bundled together. These are the total resources that you can use.
@scottalanmiller I was on the phone with Red Hat today about getting Red Hat subscriptions with their Virtualization Manager.
-
@Pete-S said in New Infrastructure to Replace Scale Cluster:
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
@mroth911 said in New Infrastructure to Replace Scale Cluster:
Scale specs are 24 cores, 188 GB of RAM, and 10 TB of storage.
Sorry, not being familiar at all with Scale, what does that mean? Are the cores/RAM/storage in one node or spread across several? Is this the config for each node, or the total for the cluster?
That's his "cluster spec". He has 1150 nodes if I remember. They are single CPU nodes.
So this is 3x Dell R310 servers with 1x 8 core Intel CPU, 64GB RAM, and 3.3TB of storage each.