VMware Community Homelabs
-
I find it funny that for years @scottalanmiller has said that companies should be using cloud because of the multitude of benefits like security, cost, reliability, etc., and has recently seemed to be backpedaling.
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
However if you're running large data lakes and using ML you're not running your own infrastructure.
Actually we are testing moving ML to our own infrastructure because it'll be cheaper.
So clearly I meant machine learning because I was referencing a data lake. MangoLassi doesn't talk to any data lakes as far as I know.
Oh sorry, really thought you were using ML as an example case.
-
@stacksofplates said in VMware Community Homelabs:
I find it funny that for years @scottalanmiller has said that companies should be using cloud because of the multitude of benefits like security, cost, reliability, etc., and has recently seemed to be backpedaling.
Security, yes. Hosted, yes. But not cloud computing. Cost is higher, obviously. I've written about that a lot.
You are using my discussions about using SaaS to discuss an IaaS lab. Clearly out of context and the opposite of backpedaling.
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
I find it funny that for years @scottalanmiller has said that companies should be using cloud because of the multitude of benefits like security, cost, reliability, etc., and has recently seemed to be backpedaling.
Security, yes. Hosted, yes. But not cloud computing. Cost is higher, obviously. I've written about that a lot.
Umm you've clearly stated multiple times that places like Vultr are "cloud". Just because you treat it as a VPS doesn't mean it isn't cloud.
-
The primary value of something like Office 365 is Microsoft's expertise in their own products. That doesn't apply to what we are talking about here.
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
I find it funny that for years @scottalanmiller has said that companies should be using cloud because of the multitude of benefits like security, cost, reliability, etc., and has recently seemed to be backpedaling.
Security, yes. Hosted, yes. But not cloud computing. Cost is higher, obviously. I've written about that a lot.
Umm you've clearly stated multiple times that places like Vultr are "cloud". Just because you treat it as a VPS doesn't mean it isn't cloud.
It is, and I'm saying here that it costs more. I'm not arguing that Vultr isn't cloud; I'm pointing out what should be well known: for workloads of any scale, it's more costly.
Hence why we are looking to move some off of it.
-
Where have I said that cloud (like Vultr even) is cheaper than alternatives, except for where the scale is ridiculously small? I have no knowledge of ever having said this.
-
Going back to 2013, I had this pretty well laid out. Yes, this is looking at private cloud, but most of it applies to public, too.
https://smbitjournal.com/2013/06/when-to-consider-a-private-cloud/
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Actually, the size of the load doesn't matter much. If it is 5 billion every day, cloud will be the worst option. If it is 5 billion one day and 1 billion the next, then back up and all over the place, that alone is when public cloud competes. No workload becomes public cloud viable based on size, ever. Only on elasticity. Elasticity is the sole benefit to cloud architecture. It's a huge one, but the only one.
Yeah, that's just not true. A) You're assuming that compute is the biggest factor here, and B) it seems as though you're assuming they're building home-grown things like just setting up NGINX proxies for everything.
Specifically, the company I was talking about said their biggest cost was data going back out, which they could mitigate by reducing HTTP headers and cutting down TLS handshakes; they ended up switching to DigiCert because that cert was much smaller than the previous one, which in turn limited outgoing data transfer.
There's a ton that goes into this and you can't just say it's cheaper to buy your own server vs rent a compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc have completely changed the landscape.
Have they changed the landscape? How is the data going in and out affected? You have no reason to have ingress / egress issues cloud or non-cloud there. Treat them equally, the problems are the same.
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Actually, the size of the load doesn't matter much. If it is 5 billion every day, cloud will be the worst option. If it is 5 billion one day and 1 billion the next, then back up and all over the place, that alone is when public cloud competes. No workload becomes public cloud viable based on size, ever. Only on elasticity. Elasticity is the sole benefit to cloud architecture. It's a huge one, but the only one.
Yeah, that's just not true. A) You're assuming that compute is the biggest factor here, and B) it seems as though you're assuming they're building home-grown things like just setting up NGINX proxies for everything.
Specifically, the company I was talking about said their biggest cost was data going back out, which they could mitigate by reducing HTTP headers and cutting down TLS handshakes; they ended up switching to DigiCert because that cert was much smaller than the previous one, which in turn limited outgoing data transfer.
There's a ton that goes into this and you can't just say it's cheaper to buy your own server vs rent a compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc have completely changed the landscape.
Have they changed the landscape? How is the data going in and out affected? You have no reason to have ingress / egress issues cloud or non-cloud there. Treat them equally, the problems are the same.
Eff, it did the sunglasses again.
I don't get what you mean. You're charged for data going out but not coming in. The biggest cost they had was data going back to clients. So they minimized the data by the methods I mentioned.
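To put rough numbers on that egress argument, here's a back-of-envelope sketch in Python. Every figure in it is an assumption for illustration: the request volume is just the thread's "5 billion a day" example, and the ~$0.09/GB rate is only a ballpark public-cloud internet egress price, not anything quoted by the company being discussed. The point it shows is that a few hundred bytes shaved off every response (trimmed headers, a smaller certificate chain, fewer TLS handshakes per connection) multiplies out to real money at that volume.

```python
# Back-of-envelope egress estimate. All numbers are illustrative
# assumptions, not figures from the company discussed above.

REQUESTS_PER_DAY = 5_000_000_000   # the thread's "5 billion requests per day" example
EGRESS_PRICE_PER_GB = 0.09         # assumed public-cloud internet egress rate, USD/GB

def monthly_egress_cost(body_bytes, header_bytes, cert_chain_bytes, requests_per_conn):
    """Estimate monthly egress spend. The TLS certificate chain is only sent
    during a handshake, so its bytes are amortized over the requests that
    reuse the same connection."""
    per_request = body_bytes + header_bytes + cert_chain_bytes / requests_per_conn
    gb_per_month = REQUESTS_PER_DAY * 30 * per_request / 1e9
    return gb_per_month * EGRESS_PRICE_PER_GB

# Before: bulky headers, a large cert chain, little connection reuse.
before = monthly_egress_cost(body_bytes=2000, header_bytes=800,
                             cert_chain_bytes=5500, requests_per_conn=5)

# After: trimmed headers, a smaller chain, better connection reuse.
after = monthly_egress_cost(body_bytes=2000, header_bytes=400,
                            cert_chain_bytes=2500, requests_per_conn=20)

print(f"before: ${before:,.0f}/mo  after: ${after:,.0f}/mo  saved: ${before - after:,.0f}/mo")
```

Nothing in this math is specific to any one provider; the same byte savings apply wherever outbound bandwidth is metered.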
-
@stacksofplates said in VMware Community Homelabs:
I don't get what you mean. You're charged for data going out but not coming in.
We aren't charged for either. I don't understand where you are getting this assumed cost.
-
@stacksofplates said in VMware Community Homelabs:
The biggest cost they had was data going back to clients. So they minimized the data by the methods I mentioned.
I get that, but couldn't they have done that equally cloud or alternative?
-
@stacksofplates said in VMware Community Homelabs:
Eff, it did the sunglasses again.
It loves doing those.
-
@stacksofplates said in VMware Community Homelabs:
There's a ton that goes into this and you can't just say it's cheaper to buy your own server vs rent a compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc have completely changed the landscape.
These are cool techs, for sure. But if you need these, and you get beyond a basic scale (cloud is generally cheaper at the itty bitty scales, before you get to the size of a single server), then you can implement your own at lower cost.
The "pre-built" value of most cloud providers means that there is a barrier to entry for non-cloud options. But once you breach that limit, cloud is rarely cost effective. It just gets more and more expensive, unless you need elastic capacity, and enough of it to overcome the cost difference, which is often pretty tough to do.
I've worked in some pretty large, pretty elastic environments and while they went cloud in many cases, they ended up paying a premium for it once all was said and done.
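For what it's worth, that "enough elastic capacity to overcome the cost difference" point boils down to simple arithmetic. A minimal sketch, with made-up prices (the owned-capacity cost and the on-demand premium below are assumptions, not any provider's real rates):

```python
# Break-even sketch: at what utilization does renting on demand stop
# beating owning the peak capacity outright? Prices are illustrative
# assumptions, not any provider's real rates.

OWNED_COST_PER_UNIT_HOUR = 0.04   # amortized hardware, power, and admin for one unit of capacity
CLOUD_COST_PER_UNIT_HOUR = 0.12   # assumed on-demand price for the same unit (a ~3x premium)

def cheaper_to_rent(utilization):
    """True if paying on demand only while the capacity is needed beats
    owning that capacity 24/7, at the given fraction-of-time utilization."""
    owned = OWNED_COST_PER_UNIT_HOUR             # owned gear costs the same busy or idle
    rented = CLOUD_COST_PER_UNIT_HOUR * utilization
    return rented < owned

break_even = OWNED_COST_PER_UNIT_HOUR / CLOUD_COST_PER_UNIT_HOUR
print(f"renting wins only below ~{break_even:.0%} utilization")

for u in (0.10, 0.33, 0.80):
    print(f"utilization {u:.0%}: {'rent' if cheaper_to_rent(u) else 'own'}")
```

With an assumed 3x premium, the capacity has to sit idle roughly two-thirds of the time before on-demand pricing wins, which is the "often pretty tough to do" part.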
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
I don't get what you mean. You're charged for data going out but not coming in.
We aren't charged for either. I don't understand where you are getting this assumed cost.
AWS charges for egress through their gateways.
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
There's a ton that goes into this and you can't just say it's cheaper to buy your own server vs rent a compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc have completely changed the landscape.
These are cool techs, for sure. But if you need these, and you get beyond a basic scale (cloud is generally cheaper at the itty bitty scales, before you get to the size of a single server), then you can implement your own at lower cost.
The "pre-built" value of most cloud providers means that there is a barrier to entry for non-cloud options. But once you breach that limit, cloud is rarely cost effective. It just gets more and more expensive, unless you need elastic capacity, and enough of it to overcome the cost difference, which is often pretty tough to do.
I've worked in some pretty large, pretty elastic environments and while they went cloud in many cases, they ended up paying a premium for it once all was said and done.
So all of those things assume elastic capacity. No one buys 500 EC2 instances and runs them forever unless that's the minimum required to meet their need.
Everything has the possibility of being elastic. That's kind of my point. Small or large, you benefit from being elastic. But it also depends on the workload. Some are much harder to make elastic even though it's possible. So that's why I began with: it depends on the workload.
Yeah, and that premium was most likely cheaper than trying to run self-hosted with just VMs.
-
Aside from @scottalanmiller, who else on here has a homelab?
PS, before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
-
@FATeknollogee said in VMware Community Homelabs:
Aside from @scottalanmiller, who else on here has a homelab?
PS, before you run to your keyboard, spare me the response: running a hypervisor on your laptop is NOT a home lab!
I got rid of my DL380, but I have an R710 running. I also have a micro form factor OptiPlex that I run my containers on. Hopefully I'm going to replace the R710 with it.
I also run stuff on my laptop. KVM on your laptop is exactly the same as KVM on a rack server. It definitely counts as a home lab.
-
If you're using libvirt, the networking is exactly the same as well. It just doesn't work super well with wireless.
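For anyone who wants to see that in practice, here's a minimal sketch using the libvirt Python bindings to define a NAT network; the same calls work whether the connection points at a laptop or a rack server. The network name, bridge name, and subnet below are arbitrary examples, and NAT is the usual workaround when bridged networking over wireless misbehaves.

```python
# Minimal sketch: define and start a NAT network with the libvirt Python
# bindings. The same calls work against qemu:///system on a laptop or a
# rack server. Network name, bridge name, and subnet are arbitrary examples.
import libvirt

NET_XML = """
<network>
  <name>labnet</name>
  <forward mode='nat'/>
  <bridge name='virbr10' stp='on' delay='0'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.10' end='192.168.150.100'/>
    </dhcp>
  </ip>
</network>
"""

conn = libvirt.open("qemu:///system")
net = conn.networkDefineXML(NET_XML)   # persistent definition
net.setAutostart(1)                    # come up with libvirtd
net.create()                           # start it now
print("active networks:", conn.listNetworks())
conn.close()
```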
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
There's a ton that goes into this and you can't just say it's cheaper to buy your own server vs rent a compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc have completely changed the landscape.
These are cool techs, for sure. But if you need these, and you get beyond a basic scale (cloud is generally cheaper at the itty bitty scales, before you get to the size of a single server), then you can implement your own at lower cost.
The "pre-built" value of most cloud providers means that there is a barrier to entry for non-cloud options. But once you breach that limit, cloud is rarely cost effective. It just gets more and more expensive, unless you need elastic capacity, and enough of it to overcome the cost difference, which is often pretty tough to do.
I've worked in some pretty large, pretty elastic environments and while they went cloud in many cases, they ended up paying a premium for it once all was said and done.
So all of those things assume elastic capacity. No one buys 500 EC2 instances and runs them forever unless that's the minimum required to meet their need.
Everything has the possibility of being elastic. That's kind of my point. Small or large, you benefit from being elastic. But it also depends on the workload. Some are much harder to make elastic even though it's possible. So that's why I began with: it depends on the workload.
Yeah, and that premium was most likely cheaper than trying to run self-hosted with just VMs.
Everything has some elasticity, sure. I agree. But most workloads, let's just use email as an example, while elastic, are only "nominally elastic". Out of hundreds of clients, each with multiple workloads, you'll be lucky to find one or two with even a single workload with enough elasticity to bother justifying an elastic service, even if it were otherwise cost effective. And once you mix it in with other workloads, the elasticity might not have any benefit at all.
What I mean is... take your nominally elastic workloads that have no way to justify cloud computing elasticity. They create a paid-for infrastructure with spare capacity. Then your genuinely elastic workloads will, on average, fit onto that spare capacity, which makes the elastic services a loss from a cloud perspective.
From what I've seen, for elastic services to be beneficial, the elasticity not only has to be dramatic, it also has to be a large portion of your overall workload capacity.
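To illustrate that "fits onto the spare capacity" argument with invented numbers: size an always-on fleet for the steady workloads, then check whether a bursty workload's peak even exceeds the headroom that fleet already provides. Everything below (capacity units, workload sizes, the N+1 spare) is an assumption for illustration only.

```python
# Sketch of the "spare capacity absorbs the elastic workloads" argument.
# Capacity units, workload sizes, and the N+1 spare are invented numbers.
import math

STEADY_WORKLOADS = {"email": 6, "erp": 10, "file": 4, "monitoring": 2}  # steady demand, in capacity units
SERVER_UNITS = 8   # capacity of one owned server
N_PLUS_ONE = 1     # extra node carried for failure tolerance

baseline = sum(STEADY_WORKLOADS.values())
servers = math.ceil(baseline / SERVER_UNITS) + N_PLUS_ONE
headroom = servers * SERVER_UNITS - baseline

def burst_needs_cloud(peak_units):
    """True only if a bursty workload's peak exceeds the headroom the
    steady workloads already forced you to buy."""
    return peak_units > headroom

print(f"{servers} servers, {headroom} units of headroom")
for peak in (5, 10, 20):
    verdict = "exceeds headroom, elasticity might pay" if burst_needs_cloud(peak) else "fits in capacity you already own"
    print(f"burst peak {peak}: {verdict}")
```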