VMware Community Homelabs
-
@scottalanmiller said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
Right, but to them it is a private cloud, not public.
Yes, a private space of their public cloud platform.
-
@Obsolesce said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
Right, but to them it is a private cloud, not public.
Yes, a private space of their public cloud platform.
Right, it's their own dog food, but it doesn't act like a public cloud.
-
@Pete-S said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... you know, two birds, one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Even with CFD-type work where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of maintenance and utilities against just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like six months or so, then it might be cheaper on-site. It really depends on the workload.
I think you're wrong. 5 billion hits per day is Google-type traffic (as of a couple of years ago). And Google doesn't use the public cloud; they use their own servers. As do Facebook, Amazon, eBay, Microsoft, etc. People like Backblaze don't use the cloud either.
The one company I know of that I would expect to run their own servers but doesn't is Netflix; they are on AWS. LinkedIn is also moving away from its own servers, but that's not surprising since Microsoft owns them. I'm not sure they're actually running on Azure; it could be that they're just using Microsoft servers instead of their own.
Finance can calculate what's best, but just because you own your server park doesn't mean you have to pay for it up front. It doesn't mean you don't have geo-redundancy or that it's all in one place. It doesn't mean you have to employ people who swap hardware 24/7. And it doesn't mean you can't use cloud servers when you need to.
5 billion hits per day is Google type traffic
No, it's not. The specific company I'm referencing is a small company with around 35 employees. There's no way to do that cost-effectively with on-prem servers unless you have thousands of employees. That's the only reason places like eBay, Facebook, etc. are doing that. And Facebook's scale is just astronomical; they design and make their own racks, so you can't even compare them to a normal company.
Sure, Backblaze may not use them, but that's a completely different use case, as I said. However, if you're running large data lakes and using ML, you're not running your own infrastructure.
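To put rough numbers on the capex-vs-burst tradeoff in the CFD example quoted above, here's a minimal back-of-the-envelope sketch; every figure in it is a made-up placeholder, not a quote from any vendor:

```python
# Back-of-the-envelope comparison: owning a small solver cluster vs. renting
# burst nodes only while a solve is running. Every number here is invented
# for illustration -- plug in real quotes before drawing any conclusion.

CLUSTER_CAPEX = 120_000          # hypothetical: 10 owned nodes
CLUSTER_LIFETIME_YEARS = 3       # amortization period
CLUSTER_OPEX_PER_YEAR = 15_000   # power, cooling, maintenance contracts
CLOUD_NODE_PER_HOUR = 2.50       # hypothetical on-demand HPC instance rate
NODES_PER_SOLVE = 10

def owned_cost_per_year() -> float:
    return CLUSTER_CAPEX / CLUSTER_LIFETIME_YEARS + CLUSTER_OPEX_PER_YEAR

def cloud_cost_per_year(solve_hours: float) -> float:
    return solve_hours * NODES_PER_SOLVE * CLOUD_NODE_PER_HOUR

if __name__ == "__main__":
    for hours in (200, 1_000, 4_000):  # occasional bursts vs. near-continuous solving
        print(f"{hours:>5} solve-hours/yr: owned ${owned_cost_per_year():,.0f}, "
              f"cloud ${cloud_cost_per_year(hours):,.0f}")
```

With these invented figures the owned cluster only pulls ahead once solves run for a few thousand node-hours a year, which lines up with the "long-running solves might be cheaper on-site" caveat.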
-
@stacksofplates said in VMware Community Homelabs:
However if you're running large data lakes and using ML you're not running your own infrastructure.
Actually we are testing moving ML to our own infrastructure because it'll be cheaper.
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Actually, the size of the load doesn't matter much. If it is 5 billion every day, cloud will be the worst option. If it is 5 billion one day and 1 billion the next, then back up and all over the place, that alone is when public cloud competes. No workload becomes viable for public cloud based on its size, ever; only on its elasticity. Elasticity is the sole benefit of cloud architecture. It's a huge one, but the only one.
Yeah, that's just not true. A) You're assuming that compute is the biggest factor here; B) it seems as though you're assuming they're building home-grown things, like just setting up NGINX proxies in front of everything.
Specifically, the company I was talking about said their biggest cost was data going back out, which they could easily mitigate by reducing HTTP headers and TLS handshakes; they also ended up using a DigiCert certificate because it was much smaller than the previous one they had, which in turn reduced outgoing data transfer.
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server vs. rent compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
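As a rough illustration of how those mitigations cut billable egress, here's a sketch of the overhead math; the byte counts are ballpark guesses for illustration, not measurements from that company:

```python
# Rough estimate of the TLS/header overhead egress per client that gets trimmed
# by slimming response headers, using a smaller certificate chain, and avoiding
# repeated full TLS handshakes (e.g. via session resumption or longer
# keep-alives). All sizes are ballpark guesses, not real measurements.

OLD_CERT_CHAIN_BYTES = 5_500   # hypothetical: bloated chain with extra intermediates
NEW_CERT_CHAIN_BYTES = 3_000   # hypothetical: leaner chain
HEADER_BYTES_TRIMMED = 400     # hypothetical: redundant response headers removed

def overhead_egress(full_handshakes: int, responses: int,
                    cert_bytes: int, extra_header_bytes: int) -> int:
    """Server-to-client bytes spent on cert chains plus removable headers."""
    return full_handshakes * cert_bytes + responses * extra_header_bytes

if __name__ == "__main__":
    # Before: every request opens a fresh TLS connection and carries fat headers.
    before = overhead_egress(full_handshakes=100, responses=100,
                             cert_bytes=OLD_CERT_CHAIN_BYTES,
                             extra_header_bytes=HEADER_BYTES_TRIMMED)
    # After: a handful of full handshakes thanks to resumption, slim headers.
    after = overhead_egress(full_handshakes=5, responses=100,
                            cert_bytes=NEW_CERT_CHAIN_BYTES,
                            extra_header_bytes=0)
    print(f"per client: {before:,} B -> {after:,} B of overhead "
          f"({before - after:,} B saved)")
```

Multiplied across billions of requests a day, even a few hundred bytes of avoidable overhead per connection shows up on the egress bill.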
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
However if you're running large data lakes and using ML you're not running your own infrastructure.
Actually we are testing moving ML to our own infrastructure because it'll be cheaper.
So clearly I meant machine learning because I was referencing a data lake. MangoLassi doesn't talk to any data lakes as far as I know.
-
I find it funny that for years @scottalanmiller has said companies should be using cloud because of the multitude of benefits like security, cost, reliability, etc., and has recently seemed to be backpedaling.
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
However if you're running large data lakes and using ML you're not running your own infrastructure.
Actually we are testing moving ML to our own infrastructure because it'll be cheaper.
So clearly I meant machine learning because I was referencing a data lake. MangoLassi doesn't talk to any data lakes as far as I know.
Oh sorry, really thought you were using ML as an example case.
-
@stacksofplates said in VMware Community Homelabs:
I find it funny for years @scottalanmiller has said that companies should be using cloud because of the multitude of benefits like security, cost, reliability, etc and has recently seemed to be backpedaling.
Security, yes. Hosted, yes. But not cloud computing. Cost is higher, obviously. I've written about that a lot.
You are using my discussions about using SaaS to discuss an IaaS lab. Clearly out of context, and the opposite of backpedaling.
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
I find it funny for years @scottalanmiller has said that companies should be using cloud because of the multitude of benefits like security, cost, reliability, etc and has recently seemed to be backpedaling.
Security, yes. Hosted, yes. But not cloud computing. Cost is higher, obviously. I've written about that a lot.
Umm you've clearly stated multiple times that places like Vultr are "cloud". Just because you treat it as a VPS doesn't mean it isn't cloud.
-
The primary value to something like Office 365 is Microsoft's expertise in their own products. That doesn't apply to what we are talking about here.
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
I find it funny for years @scottalanmiller has said that companies should be using cloud because of the multitude of benefits like security, cost, reliability, etc and has recently seemed to be backpedaling.
Security, yes. Hosted, yes. But not cloud computing. Cost is higher, obviously. I've written about that a lot.
Umm you've clearly stated multiple times that places like Vultr are "cloud". Just because you treat it as a VPS doesn't mean it isn't cloud.
It is, and I'm saying here that it costs more. I'm not arguing that Vultr isn't cloud; I'm pointing out what should be well known: for workloads of any scale, it's more costly.
Hence why we are looking to move some off of it.
-
Where have I said that cloud (like Vultr even) is cheaper than alternatives, except for where the scale is ridiculously small? I have no knowledge of ever having said this.
-
Going back to 2013, I had this pretty well laid out. Yes, this one is looking at private cloud, but it mostly applies to public, too.
https://smbitjournal.com/2013/06/when-to-consider-a-private-cloud/
-
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Actually, the size of the load doesn't matter much. If it is 5 billion every day, cloud will be the worst option. If it is 5 billion one day and 1 billion the next, then back up and all over the place, that alone is when public cloud competes. No workload becomes viable for public cloud based on its size, ever; only on its elasticity. Elasticity is the sole benefit of cloud architecture. It's a huge one, but the only one.
Yeah, that's just not true. A) You're assuming that compute is the biggest factor here; B) it seems as though you're assuming they're building home-grown things, like just setting up NGINX proxies in front of everything.
Specifically, the company I was talking about said their biggest cost was data going back out, which they could easily mitigate by reducing HTTP headers and TLS handshakes; they also ended up using a DigiCert certificate because it was much smaller than the previous one they had, which in turn reduced outgoing data transfer.
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server vs. rent compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
Have they changed the landscape? How is the data going in and out affected? You have no reason to have ingress/egress issues there, cloud or non-cloud. Treat them equally; the problems are the same.
-
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
@scottalanmiller said in VMware Community Homelabs:
@stacksofplates said in VMware Community Homelabs:
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Actually, the size of the load doesn't matter much. If it is 5 billion every day, cloud will be the worst option. If it is 5 billion one day and 1 billion the next, then back up and all over the place, that alone is when public cloud competes. No workload becomes viable for public cloud based on its size, ever; only on its elasticity. Elasticity is the sole benefit of cloud architecture. It's a huge one, but the only one.
Yeah, that's just not true. A) You're assuming that compute is the biggest factor here; B) it seems as though you're assuming they're building home-grown things, like just setting up NGINX proxies in front of everything.
Specifically, the company I was talking about said their biggest cost was data going back out, which they could easily mitigate by reducing HTTP headers and TLS handshakes; they also ended up using a DigiCert certificate because it was much smaller than the previous one they had, which in turn reduced outgoing data transfer.
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server vs. rent compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
Have they changed the landscape? How is the data going in and out affected? You have no reason to have ingress/egress issues there, cloud or non-cloud. Treat them equally; the problems are the same.
Eff it did the sunglasses again.
I don't get what you mean. You're charged for data going out but not coming in. The biggest cost they had was data going back to clients. So they minimized the data by the methods I mentioned.
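For context on the billing model being assumed here, a tiny sketch; the per-GB rate is a made-up placeholder, and the only point is that inbound transfer is typically free while outbound is metered:

```python
# Illustration of the ingress-free / egress-metered transfer billing that the
# big public clouds typically use. The per-GB rate is a made-up placeholder.

EGRESS_PER_GB = 0.09    # hypothetical $/GB out to the internet
INGRESS_PER_GB = 0.00   # inbound traffic is typically not billed

def monthly_transfer_bill(gb_in: float, gb_out: float) -> float:
    return gb_in * INGRESS_PER_GB + gb_out * EGRESS_PER_GB

if __name__ == "__main__":
    # e.g. 50 TB of requests coming in, 200 TB of responses going back out
    print(f"${monthly_transfer_bill(50_000, 200_000):,.2f} per month, all of it egress")
```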
-
@stacksofplates said in VMware Community Homelabs:
I don't get what you mean. You're charged for data going out but not coming in.
We aren't charged for either. I don't understand where you are getting this assumed cost.
-
@stacksofplates said in VMware Community Homelabs:
The biggest cost they had was data going back to clients. So they minimized the data by the methods I mentioned.
I get that, but couldn't they have done that equally well, cloud or otherwise?
-
@stacksofplates said in VMware Community Homelabs:
Eff it did the sunglasses again.
It loves doing those.
-
@stacksofplates said in VMware Community Homelabs:
There's a ton that goes into this, and you can't just say it's cheaper to buy your own server vs. rent compute space. Things like K8s, "serverless" (whether you like that term or not), hosted ML, etc. have completely changed the landscape.
These are cool techs, for sure. But if you need these, and you get beyond a basic scale (cloud is generally cheaper at the itty-bitty scales, before you get to the size of a single server), then you can implement your own at lower cost.
The "pre-built" value of most cloud providers means that there is a barrier to entry for non-cloud options. But once you breach that limit, cloud is rarely cost-effective. It just gets more and more expensive, unless you need elastic capacity, and enough of it to overcome the cost difference, which is often pretty tough to do.
I've worked in some pretty large, pretty elastic environments, and while they went cloud in many cases, they ended up paying a premium for it once all was said and done.
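To make the "enough elastic capacity to overcome the cost difference" point concrete, here's a minimal sketch with invented prices:

```python
# Minimal sketch of the elasticity break-even argument: an owned server costs
# the same whether it's busy or idle, while cloud capacity can be scaled to
# demand but carries a per-unit premium. All prices are invented.

OWNED_MONTHLY = 400.0      # amortized hardware + colo + power for one server
CLOUD_ALWAYS_ON = 700.0    # equivalent cloud capacity running 24/7

def cloud_monthly(avg_utilization: float) -> float:
    """Cloud bill if capacity is scaled down to average utilization (0.0-1.0)."""
    return CLOUD_ALWAYS_ON * avg_utilization

if __name__ == "__main__":
    breakeven = OWNED_MONTHLY / CLOUD_ALWAYS_ON
    print(f"cloud only wins below ~{breakeven:.0%} average utilization")
    for util in (0.2, 0.5, 0.9):
        print(f"  {util:.0%} utilized: owned ${OWNED_MONTHLY:,.0f} "
              f"vs cloud ${cloud_monthly(util):,.0f}")
```

With a premium like that, the load has to be bursty enough that average utilization stays well under the break-even point before elasticity actually pays for itself.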