VMware Community Homelabs
-
Where have I said that cloud (even something like Vultr) is cheaper than the alternatives, except where the scale is ridiculously small? I don't recall ever saying that.
-
Going back to 2013, I had this pretty well laid out. Yes, this is looking at private cloud, but it mostly applies to public cloud, too.
https://smbitjournal.com/2013/06/when-to-consider-a-private-cloud/
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on-site than in the public cloud.
Actually, the size of the load doesn't matter much. If it is 5 billion every day, cloud will be the worst option. If it is 5 billion one day, 1 billion the next, then back up, and all over the place, that alone is when public cloud competes. No workload becomes public-cloud viable based on size, ever. Only on elasticity. Elasticity is the sole benefit of cloud architecture. It's a huge one, but the only one.
Yeah, that's just not true. A) You're assuming that compute is the biggest factor here. B) It seems as though you're assuming they're building home-grown things, like just setting up NGINX proxies in front of everything.
Specifically, the company I was talking about said their biggest cost was data going back out. They mitigated it by reducing HTTP headers, reducing TLS handshakes, and switching to DigiCert, because that certificate was much smaller than their previous one, which in turn limited outgoing data transfer.
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server versus renting compute. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
Have they changed the landscape? How is the data going in and out affected? You have no reason to have ingress/egress issues there, cloud or non-cloud. Treat them equally; the problems are the same.
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on-site than in the public cloud.
Actually, the size of the load doesn't matter much. If it is 5 billion every day, cloud will be the worst option. If it is 5 billion one day, 1 billion the next, then back up, and all over the place, that alone is when public cloud competes. No workload becomes public-cloud viable based on size, ever. Only on elasticity. Elasticity is the sole benefit of cloud architecture. It's a huge one, but the only one.
Yeah, that's just not true. A) You're assuming that compute is the biggest factor here. B) It seems as though you're assuming they're building home-grown things, like just setting up NGINX proxies in front of everything.
Specifically, the company I was talking about said their biggest cost was data going back out. They mitigated it by reducing HTTP headers, reducing TLS handshakes, and switching to DigiCert, because that certificate was much smaller than their previous one, which in turn limited outgoing data transfer.
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server versus renting compute. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
Have they changed the landscape? How is the data going in and out affected? You have no reason to have ingress/egress issues there, cloud or non-cloud. Treat them equally; the problems are the same.
Eff, it did the sunglasses again.
I don't get what you mean. You're charged for data going out, but not for data coming in. The biggest cost they had was data going back to clients, so they minimized that data using the methods I mentioned.
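Back-of-the-envelope, that kind of per-response savings compounds fast at billions of requests. A rough sketch of the math (every rate and byte count below is a made-up illustrative number, not any provider's real pricing):

```python
# Back-of-the-envelope sketch of why shaving bytes off each response matters
# at this scale. All numbers below are made up for illustration only.

def monthly_egress_cost(requests_per_day, bytes_per_response, usd_per_gb):
    """Estimate a month of egress charges for a given per-response size."""
    gb_per_month = requests_per_day * 30 * bytes_per_response / 1e9
    return gb_per_month * usd_per_gb

REQUESTS_PER_DAY = 5_000_000_000  # the "5 billion requests" from the thread
RATE = 0.09                       # hypothetical $/GB egress rate

before = monthly_egress_cost(REQUESTS_PER_DAY, 2_000, RATE)  # ~2 KB responses
# Suppose trimmed headers and a smaller cert chain save ~500 bytes/response:
after = monthly_egress_cost(REQUESTS_PER_DAY, 1_500, RATE)

print(f"before: ${before:,.0f}/mo  after: ${after:,.0f}/mo  "
      f"saved: ${before - after:,.0f}/mo")
```

Even a few hundred bytes per response turns into hundreds of terabytes a month at that request volume, which is why trimming headers and certificate chains was worth their time.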
-
@stacksofplates said in VMware Community Homelabs:
I don't get what you mean. You're charged for data going out, but not for data coming in.
We aren't charged for either. I don't understand where you are getting this assumed cost.
-
@stacksofplates said in VMware Community Homelabs:
The biggest cost they had was data going back to clients, so they minimized that data using the methods I mentioned.
I get that, but couldn't they have done that equally well, cloud or not?
-
@stacksofplates said in VMware Community Homelabs:
Eff, it did the sunglasses again.
It loves doing those.
-
@stacksofplates said in VMware Community Homelabs:
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server versus renting compute. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
These are cool techs, for sure. But if you need these and you get beyond a basic scale (cloud is generally cheaper at the itty-bitty scales, before you reach the size of a single server), then you can implement your own at lower cost.
The "pre-built" value of most cloud providers means there is a barrier to entry for non-cloud options. But once you cross that barrier, cloud is rarely cost-effective. It just gets more and more expensive unless you need elastic capacity, and enough elastic capacity to overcome the cost difference, which is often pretty tough to do.
I've worked in some pretty large, pretty elastic environments, and while they went cloud in many cases, they ended up paying a premium for it once all was said and done.
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
I don't get what you mean. You're charged for data going out, but not for data coming in.
We aren't charged for either. I don't understand where you are getting this assumed cost.
AWS charges for egress through their gateways.
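Egress billing like that is also typically tiered, which is part of what makes it hard to compare against a flat hosting bill. A sketch of how tiered billing works in principle (the tiers and rates here are invented for illustration, not AWS's actual price sheet):

```python
# Sketch of tiered egress billing: the first N GB at one rate, the next
# slice at a lower rate, and so on. Tiers and rates are hypothetical.

TIERS = [  # (gb_in_tier, usd_per_gb) -- made-up numbers for illustration
    (10_000, 0.09),
    (40_000, 0.085),
    (float("inf"), 0.07),
]

def egress_bill(gb):
    """Walk the tiers, charging each slice of traffic at its tier's rate."""
    total, remaining = 0.0, gb
    for tier_gb, rate in TIERS:
        slice_gb = min(remaining, tier_gb)
        total += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return total

print(f"50 TB egress: ${egress_bill(50_000):,.2f}")
```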
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server versus renting compute. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
These are cool techs, for sure. But if you need these and you get beyond a basic scale (cloud is generally cheaper at the itty-bitty scales, before you reach the size of a single server), then you can implement your own at lower cost.
The "pre-built" value of most cloud providers means there is a barrier to entry for non-cloud options. But once you cross that barrier, cloud is rarely cost-effective. It just gets more and more expensive unless you need elastic capacity, and enough elastic capacity to overcome the cost difference, which is often pretty tough to do.
I've worked in some pretty large, pretty elastic environments, and while they went cloud in many cases, they ended up paying a premium for it once all was said and done.
So all of those things assume elastic capacity. No one buys 500 EC2 instances and runs them forever unless that's the minimum required to meet their need.
Everything has the possibility of being elastic; that's kind of my point. Small or large, you benefit from being elastic. But it also depends on the workload: some are much harder to make elastic, even though it's possible. That's why I began with "it depends on the workload."
Yeah, and that premium was most likely cheaper than trying to run self-hosted with just VMs.
-
Aside from @scottalanmiller, who else on here has a homelab?
P.S. Before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
-
@FATeknollogee said in VMware Community Homelabs:
Aside from @scottalanmiller, who else on here has a homelab?
P.S. Before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
I got rid of my DL380, but I have an R710 running. I also have a micro form factor OptiPlex that I run my containers on. Hopefully I'll replace the R710 with it.
I also run stuff on my laptop. KVM on your laptop is exactly the same as KVM on a rack server. It definitely counts as a home lab.
-
If you're using libvirt, the networking is exactly the same, too. It just doesn't work super well with wireless.
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server versus renting compute. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
These are cool techs, for sure. But if you need these and you get beyond a basic scale (cloud is generally cheaper at the itty-bitty scales, before you reach the size of a single server), then you can implement your own at lower cost.
The "pre-built" value of most cloud providers means there is a barrier to entry for non-cloud options. But once you cross that barrier, cloud is rarely cost-effective. It just gets more and more expensive unless you need elastic capacity, and enough elastic capacity to overcome the cost difference, which is often pretty tough to do.
I've worked in some pretty large, pretty elastic environments, and while they went cloud in many cases, they ended up paying a premium for it once all was said and done.
So all of those things assume elastic capacity. No one buys 500 EC2 instances and runs them forever unless that's the minimum required to meet their need.
Everything has the possibility of being elastic; that's kind of my point. Small or large, you benefit from being elastic. But it also depends on the workload: some are much harder to make elastic, even though it's possible. That's why I began with "it depends on the workload."
Yeah, and that premium was most likely cheaper than trying to run self-hosted with just VMs.
Everything has some elasticity, sure. I agree. But most workloads (let's just use email as an example), while elastic, are only "nominally elastic". Out of hundreds of clients, each with multiple workloads, you'll be lucky to find one or two with a single workload with enough elasticity to justify an elastic service, even if it were otherwise cost-effective. And once you mix it in with other workloads, the elasticity might not have any benefit.
What I mean is: take your nominally elastic workloads, the ones that can't justify cloud elasticity on their own. They create a paid-for infrastructure. Your genuinely elastic workloads will then, on average, fit onto the spare capacity that infrastructure creates, which makes the elastic services a loss from a cloud perspective.
From what I've seen, for elastic services to be beneficial, the elasticity has to be not only dramatic in amount, but also a large portion of your overall workload capacity.
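That threshold can be sketched as a toy cost model (the unit prices are hypothetical; only their ratio matters): owning means paying for peak capacity around the clock at a lower unit rate, while elastic cloud bills a premium rate for only the capacity actually consumed, so elasticity only wins once the peak-to-average ratio exceeds the premium.

```python
# Toy model: when does elasticity beat owning peak capacity outright?
# Unit costs are invented for illustration; only their ratio matters.

OWNED_RATE = 1.0   # cost per unit of capacity, provisioned 24/7 (hypothetical)
CLOUD_RATE = 3.0   # cost per unit actually consumed (hypothetical premium)

def owned_cost(peak):
    # On-prem you must provision for peak and pay for it whether used or not.
    return peak * OWNED_RATE

def cloud_cost(average):
    # Elastic cloud bills (roughly) for what you actually consume.
    return average * CLOUD_RATE

for peak_to_avg in (1.5, 3.0, 6.0):
    avg = 100.0
    peak = avg * peak_to_avg
    winner = "cloud" if cloud_cost(avg) < owned_cost(peak) else "owned"
    print(f"peak/avg = {peak_to_avg}: owned={owned_cost(peak):.0f}, "
          f"cloud={cloud_cost(avg):.0f} -> {winner} wins")
```

With a 3x premium, cloud only wins when peak demand is more than 3x the average, which is the "dramatic amount of elasticity" threshold described above.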
-
@FATeknollogee said in VMware Community Homelabs:
Aside from @scottalanmiller, who else on here has a homelab?
P.S. Before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
I have a Raspberry Pi running.
-
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Aside from @scottalanmiller, who else on here has a homelab?
P.S. Before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
I have a Raspberry Pi running.
That's cute!
-
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Aside from @scottalanmiller, who else on here has a homelab?
P.S. Before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
I have a Raspberry Pi running.
Better than a laptop.
-
@FATeknollogee said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Aside from @scottalanmiller, who else on here has a homelab?
P.S. Before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
I have a Raspberry Pi running.
That's cute!
I do too, as an actual server, in addition to a dozen big rack mounts.
-
@FATeknollogee said in VMware Community Homelabs:
Aside from @scottalanmiller, who else on here has a homelab?
P.S. Before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
I have an R710 server that I use as a homelab for long-term lab setups. I also use my home desktop, which has 32 GB of RAM, for some random VM setups.
-
@FATeknollogee said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Aside from @scottalanmiller, who else on here has a homelab?
P.S. Before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
I have a Raspberry Pi running.
That's cute!
I know, right!?