VMware Community Homelabs
-
@Obsolesce said in VMware Community Homelabs:
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
I get that. I think, at least in the SMB space, it's the opposite, though. Azure and AWS are both rarely needed and easy to pick up, but hardware and platforms are things you need all of the time.
If you're in the enterprise, then your way makes way more sense.
-
@Obsolesce said in VMware Community Homelabs:
Generally, I lab things out for a little and trash it. My costs typically never exceed a few bucks a month in Azure, and on AWS the free tier and costs are even better when buying those $300 credits for $20 or whatever it is. Mix and match. You get the most experience and bang for your buck.
I've found for me, that some of the best lab stuff is not setting up and tearing down, but setting up to keep operating. You get a whole different level of experience when you keep it running, patch it, maintain it, etc.
-
@scottalanmiller said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
Generally, I lab things out for a little and trash it. My costs typically never exceed a few bucks a month in Azure, and on AWS the free tier and costs are even better when buying those $300 credits for $20 or whatever it is. Mix and match. You get the most experience and bang for your buck.
I've found for me, that some of the best lab stuff is not setting up and tearing down, but setting up to keep operating. You get a whole different level of experience when you keep it running, patch it, maintain it, etc.
For my labs, I normally keep mine running, especially if I don’t use them for work.
-
So @Obsolesce is talking about cloud - AWS/Azure, but what about other VPS providers like Vultr? Compared to owning your own hardware, unless you have some fairly large workloads, these are generally pretty cheap - and this isn't even considering power/cooling, etc.
Of course, if your goal is to learn ESXi or KVM, etc., yeah, you're going to need some hardware for that.
-
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710s at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
-
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710s at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost-effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
-
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710s at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost-effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
Depends on how many VMs you have. An R710 can easily handle twenty of those $5 Vultr VMs. That's $1200 per year for VMs in a VPS. No way you can rack up that amount of money in electricity on one server.
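The arithmetic behind that claim can be sketched quickly. All the figures below are illustrative assumptions (entry-level VPS price, average server draw, residential electricity rate), not quotes from any provider or utility:

```python
# Back-of-the-envelope comparison: 20 small VPS instances vs. the
# electricity to run one R710 at home, per year.

VPS_MONTHLY_COST = 5       # assumed $/month per small VPS plan
VM_COUNT = 20              # VMs the R710 is assumed to host
POWER_DRAW_WATTS = 250     # assumed average draw for a loaded R710
PRICE_PER_KWH = 0.13       # assumed residential electricity rate, $/kWh

vps_annual = VPS_MONTHLY_COST * VM_COUNT * 12
kwh_per_year = POWER_DRAW_WATTS / 1000 * 24 * 365
electricity_annual = kwh_per_year * PRICE_PER_KWH

print(f"VPS fleet:   ${vps_annual:,.0f}/year")        # $1,200/year
print(f"Electricity: ${electricity_annual:,.0f}/year")
```

Even doubling the assumed wattage or power rate, the electricity stays a fraction of the VPS bill, which is the point being made above.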
-
@Dashrender said in VMware Community Homelabs:
AWS/Azure, but what about other VPS providers like Vultr? Compared to owning your own hardware, unless you have some fairly large workloads, these are generally pretty cheap
Cost is similar to AWS or Azure. It's surprisingly not as cheap as it seems. If you are only talking about two temporary VMs, yeah, it's cheap. If you're talking about some number of long-term workloads, it gets costly quickly.
-
@Pete-S said in VMware Community Homelabs:
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710s at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost-effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
Depends on how many VMs you have. An R710 can easily handle twenty of those $5 Vultr VMs. That's $1200 per year for VMs in a VPS.
No way you can rack up that in electricity.
Plus the needs of a lab VM are often very different from the needs of a production one. Prod needs fast disks and fast CPU, and "just enough" RAM. Labs need very little CPU and disk performance, but lots of RAM.
And just one workload like NextCloud could cost a fortune even on Vultr, but be nearly free on an R710.
We have old R510 units that could run 30+ VMs, easily. A good 50% more than @Pete-S is estimating. And adding RAM alone would allow us to up that number significantly.
-
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on-site than in the public cloud.
Even with CFD-type work where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of maintenance and utilities vs. just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like 6 months or so, then it might be cheaper on-site. It really depends on the workload.
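That buy-vs-rent trade-off can be framed as a simple break-even sketch. Every number below is an assumption for illustration (cluster price, monthly running cost, cloud node rate), not a real quote:

```python
# Rough break-even: buy a small CFD cluster outright vs. rent
# comparable cloud nodes around the clock for the solve duration.

CLUSTER_CAPEX = 60_000        # assumed purchase price of a 16-node cluster
CLUSTER_OPEX_MONTHLY = 1_000  # assumed power, cooling, maintenance per month
CLOUD_NODE_HOURLY = 1.50      # assumed $/hour per comparable cloud node
NODES = 16

def owned_cost(months):
    """Total cost of owning the cluster for a given number of months."""
    return CLUSTER_CAPEX + CLUSTER_OPEX_MONTHly * months if False else \
        CLUSTER_CAPEX + CLUSTER_OPEX_MONTHLY * months

def cloud_cost(months):
    """Cost of renting the same node count 24/7 for that many months."""
    hours = months * 30 * 24
    return CLOUD_NODE_HOURLY * NODES * hours

# A 6-month continuous solve, as mentioned above:
print(f"Own for 6 months:  ${owned_cost(6):,.0f}")   # $66,000
print(f"Rent for 6 months: ${cloud_cost(6):,.0f}")   # $103,680
```

With these assumed numbers a 6-month continuous solve already favors owning, while a job that only needs the nodes for a few weeks favors renting, which matches the "it depends on the workload" conclusion.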
-
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
There's a ton of stuff out there on IRC, Reddit, Slack, Telegram, and other mediums for the other types of servers.
https://www.reddit.com/r/homelab/ I mean this is literally people just posting their home labs and specs. I'm not sure what else you want?
-
@stacksofplates said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on-site than in the public cloud.
Even with CFD-type work where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of maintenance and utilities vs. just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like 6 months or so, then it might be cheaper on-site. It really depends on the workload.
What home lab is going to be serving 5 billion requests per day? You're talking production, not home lab.
-
@scottalanmiller said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
Generally, I lab things out for a little and trash it. My costs typically never exceed a few bucks a month in Azure, and on AWS the free tier and costs are even better when buying those $300 credits for $20 or whatever it is. Mix and match. You get the most experience and bang for your buck.
I've found for me, that some of the best lab stuff is not setting up and tearing down, but setting up to keep operating. You get a whole different level of experience when you keep it running, patch it, maintain it, etc.
I do that when it's something I'm using past the testing/labbing experience. But then at that point it's not so much a test lab anymore.
It's hard to keep something going that you never really use... you typically forget about it because patching can be automatic, and when it isn't, maintaining something you don't use much is kind of... I don't know, wasteful IMO. Those resources could go toward something you will be actively maintaining and using while learning. (Given you are talking about a platform test lab, which means that hardware is dedicated to that purpose.) Perhaps it makes sense if it's a platform like OpenStack you want to run to get experience, since many large companies use it (not sure about SMB).
I do get the other side too. There are many things in SMB you can lab or experience better on your own hardware, because that's where most SMBs are coming from, and many either lack the need to move away from it or lack the competence and culture to move to the cloud.
Either way, it depends on where you want to go with your career and what environments you want to work with.
-
@Dashrender said in VMware Community Homelabs:
So @Obsolesce is talking about cloud - AWS/Azure, but what about other VPS providers like Vultr?
Those other VPS providers are irrelevant where I work. It's all AWS, Azure, GCP. If it's not one of those, they are running their own private cloud with OpenStack. Therefore, I'm not going to waste time learning a service I would never use outside of personal use (and yes, I have used it personally, but not as a lab). There's always more to learn in AWS or Azure, for example. Time spent labbing in Vultr for career development, I feel, would be better used elsewhere.
That's just me though... it's because of where I am currently working, and also because of any future employer I would choose. YMMV.
-
@scottalanmiller said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710s at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost-effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
Depends on how many VMs you have. An R710 can easily handle twenty of those $5 Vultr VMs. That's $1200 per year for VMs in a VPS.
No way you can rack up that in electricity.
Plus the needs of a lab VM are often very different from the needs of a production one. Prod needs fast disks and fast CPU, and "just enough" RAM. Labs need very little CPU and disk performance, but lots of RAM.
And just one workload like NextCloud could cost a fortune even on Vultr, but be nearly free on an R710.
We have old R510 units that could run 30+ VMs, easily. A good 50% more than @Pete-S is estimating. And adding RAM alone would allow us to up that number significantly.
If I need a bunch of VMs to test/lab things, I'll use Hyper-V on my laptop (shouldn't have to mention this, but I'm sure it'll be pointed out - not talking about platform labs here). Lots of RAM in PCs is much more doable now, and can take you pretty far. Some business laptops give you 64 GB of RAM... that's more than enough to set up some labs.
-
@stacksofplates said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, two birds, one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on-site than in the public cloud.
Even with CFD-type work where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of maintenance and utilities vs. just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like 6 months or so, then it might be cheaper on-site. It really depends on the workload.
I think you're wrong. 5 billion hits per day is Google-type traffic (from a couple of years ago). And Google doesn't use the public cloud; they use their own servers. As do Facebook, Amazon, eBay, Microsoft, etc. People like Backblaze don't use the cloud either.
The one company I know of that I would expect to run their own servers but doesn't is Netflix. They are on AWS. LinkedIn is also moving away from their own servers, but that's not surprising since Microsoft owns them. I'm not sure they are actually running on Azure. It could be that they are just using Microsoft servers instead of their own.
Finance can calculate what's best, but just because you own your server park doesn't mean you have to pay for it up front. It doesn't mean you don't have geo-redundancy or that it's all in one place. It doesn't mean you have to employ people who swap hardware 24/7. And it doesn't mean you can't use cloud servers when you need to.
-
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
-
@Obsolesce said in VMware Community Homelabs:
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
I'm not sure you can say that Microsoft is on the public cloud when it's their servers and they own the hardware.
If the public cloud were cheaper than running their own hardware, Microsoft should move O365 and all their services to AWS. They would make a lot of money, and not having to buy their own servers would be a great benefit.
-
@Pete-S said in VMware Community Homelabs:
I'm not sure you can say that Microsoft is on the public cloud
Yes, I can say that:
https://uk.pcmag.com/windows-10/118132/microsofts-cloud-how-the-company-eats-its-own-dog-food