Testing oVirt...
-
@dyasny said in Testing oVirt...:
@scottalanmiller said in Testing oVirt...:
Containers != Enterprise software deployments. And it's a fad.
Yeah, only 10 years ago people said that about VMs
Well no, VMs have been the enterprise standard since 1964. Quite different. And containers aren't new, treating them like magic is the new fad.
Like ZFS. We had it a decade, then it became a fad that no one could live without, now no one remembers it.
VMs are tried and true, as are true containers. But the Docker craze... that's a fad.
-
@dyasny said in Testing oVirt...:
If containers are the current standard for "enterprise", then I'm, again, in the Fedora camp. I've seen the problems and instability with containers (presuming you mean Docker, not LXC), and yeah, that's what the kids trying to get jobs based on resume words do, but for enterprise workloads that actually matter, they're anything but the norm.
That hasn't been my experience. Just like when VMs were new, before clouds became a thing, and everyone was after talent who knew how to build virtualized DCs, it is all about containers now. And containers are the craze because they are so easy to automate. And guess which kind of distro is easier to automate and keep automated: one with stable APIs and handles, or one which changes things on the fly without really caring what your particular code does?
That's the problem with stability issues. People who don't have issues aren't good references. It's finding the people for whom they aren't stable that tells the story. Unless containers are universally stable for everyone, they aren't stable.
Containers claim to be so easy to automate, but again, not our experience. Automation is already so easy. Most people raving about containers seem to be doing so without understanding other automation options. I'm not saying that Docker is bad, just that it is overblown and really so full of hype at this point that it's ridiculous. Great idea, useful, has its place, but not the place it has been elevated to.
-
@dyasny said in Testing oVirt...:
Containers are great for testing things, and sometimes great for internally controlled software. But for deploying other peoples' code... see all the threads where we've discussed how they aren't reliable because Docker just doesn't address compatibility issues in the real world.
Plenty of problems there, and of course containers don't fit every workload pattern (though there are advances there, for example persistent storage and KubeVirt, to name a couple), but this is where the industry is not just going, but has been at for a while now.
Right, but if they aren't the answer to every workload, then we are back to needing the OS to support a range of things, not just containers
-
@dyasny said in Testing oVirt...:
@travisdh1 said in Testing oVirt...:
But nobody does this for every package included in a repository for every release that I know of. That would mean billions of tests for a modern distribution!
The Red Hat motto has always been "if we ship it, we support it", and to support something they have to test it. This is why the EL repos aren't as full of stuff as the upstream; you are correct, it is impossible to test the whole world.
Yes, and their support is excellent. One of the best in the business.
-
I like this take on Docker from an important database vendor: "Running Scylla in Docker is the simplest way to experiment with Scylla and we highly recommend it. However, running stateful containers is complex and tuning is needed to maximize the performance. We recommend that you use packages..."
I agree with this. Docker is great for testing, absolutely excellent. And some workloads, it's great for deploying (especially when it is internal code that you control and know it will be compatible.)
-
@scottalanmiller said in Testing oVirt...:
Well no, VMs have been the enterprise standard since 1964. Quite different. And containers aren't new, treating them like magic is the new fad.
Actually, mainframe partitions are much closer to containers than to VMs. Containers became possible on x86 only with the feature completeness of cgroups and kernel namespaces; before that, OVZ wasn't too bad (but Parallels was and is). Besides, scaling that entire kitchen was a problem before the advent of SDN.
Like ZFS. We had it a decade, then it became a fad that no one could live without, now no one remembers it.
No. ZoL, IMO, is still a piece of dung. And Solaris is a no-go OS nowadays, so I just keep away. VDO is nice if you need dedupe; for all the other features, there are solutions available too.
VMs are tried and true, as are true containers. But the Docker craze... that's a fad.
VMs were not tried and true before ~2010-ish, when the tech became more or less commonplace and "boring".
Docker itself isn't great; it was just the first of the emerging systems utilising cgroups and namespaces properly. I used CRI-O for a few months and it was much faster and more stable. The point here is, containers aren't a fad: just like VMs, they are here to stay and will get used everywhere. And just as some VM technologies went away (Xen in the enterprise, VirtualBox becoming a desktop niche) while others became the default choice, like VMware and KVM (well, some Hyper-V for the folks who are stuck on Windows, of course), Docker might not stick around in the end, but containerization will.
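The cgroups-and-namespaces point above can be seen directly on any Linux box, no container runtime required: a process's namespace memberships are exposed by the kernel as symlinks under `/proc/<pid>/ns`. A minimal sketch (Linux-only; the inode numbers in the output will vary per system):

```python
# Sketch: Linux namespaces are kernel objects. Every process belongs to one
# namespace of each type, visible as symlinks under /proc/<pid>/ns.
# Containers work by giving a process its own set of these while still
# sharing the host kernel -- which is exactly why they are not VMs.
import os

for ns in ("pid", "net", "mnt", "uts"):
    target = os.readlink(f"/proc/self/ns/{ns}")
    print(ns, "->", target)   # e.g. "pid -> pid:[4026531836]"
```

Two processes in the same container print identical targets; a containerized process prints different ones from the host.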
-
@dyasny said in Testing oVirt...:
Like ZFS. We had it a decade, then it became a fad that no one could live without, now no one remembers it.
No. ZoL, IMO, is still a piece of dung. And Solaris is a no-go OS nowadays, so I just keep away. VDO is nice if you need dedupe; for all the other features, there are solutions available too.
Yeah, but the frenzy around it was crazy. Seriously nuts. People were out of their minds in love with ZFS to the point that they based whole infrastructure decisions around getting it (and on FreeBSD no less.)
-
@scottalanmiller said in Testing oVirt...:
I like this take on Docker from an important database vendor: "Running Scylla in Docker is the simplest way to experiment with Scylla and we highly recommend it. However, running stateful containers is complex and tuning is needed to maximize the performance. We recommend that you use packages..."
And yet, it is now possible to run a stateful database, very close to the metal, in Docker, with no performance loss. Moreover, if the container dies, you do not reinstall; you simply respawn the container, and if the storage survived, simply attach it when you spawn.
I agree with this. Docker is great for testing, absolutely excellent. And some workloads, it's great for deploying (especially when it is internal code that you control and know it will be compatible.)
Microservices. When all components are independent daemons, talking over a common message bus or API, keeping them containerized (note how I don't mention docker specifically) makes keeping the system up very easy.
There's a good reason even a monster like OpenStack is moving towards containerizing all the various services it runs.
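The respawn-and-reattach pattern described above can be sketched as a compose file. This is a hypothetical fragment for illustration; the volume name and mount path are assumptions (the path matches Scylla's documented data directory), not anyone's actual deployment:

```yaml
# Hypothetical sketch: the named volume "dbdata" is owned by the engine,
# not the container, so it outlives any single container instance.
services:
  db:
    image: scylladb/scylla
    volumes:
      - dbdata:/var/lib/scylla   # state lives here, outside the container
    restart: unless-stopped      # a dead container is respawned, not reinstalled
volumes:
  dbdata:                        # survives container death; reattached on respawn
```

If the `db` container dies, the runtime spawns a fresh one and remounts `dbdata`, which is exactly the "if the storage survived, simply attach it" workflow.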
-
@dyasny said in Testing oVirt...:
VMs were not tried and true before ~2010-ish, when the tech became more or less commonplace and "boring".
They were. You are thinking x86 commodity space. But in the enterprise, we were using them heavily for a very, very long time.
-
@dyasny said in Testing oVirt...:
@scottalanmiller said in Testing oVirt...:
I like this take on Docker from an important database vendor: "Running Scylla in Docker is the simplest way to experiment with Scylla and we highly recommend it. However, running stateful containers is complex and tuning is needed to maximize the performance. We recommend that you use packages..."
And yet, it is now possible to run a stateful database, very close to the metal, in Docker, with no performance loss. Moreover, if the container dies, you do not reinstall; you simply respawn the container, and if the storage survived, simply attach it when you spawn.
How's that different than not using Docker, though? I've had that capability for basically forever. That's not new or unique to Docker or containerization.
-
@scottalanmiller said in Testing oVirt...:
Yeah, but the frenzy around it was crazy. Seriously nuts. People were out of their minds in love with ZFS to the point that they based whole infrastructure decisions around getting it (and on FreeBSD no less.)
I've seen ZoL break way too many times to even consider it
-
@dyasny said in Testing oVirt...:
I agree with this. Docker is great for testing, absolutely excellent. And some workloads, it's great for deploying (especially when it is internal code that you control and know it will be compatible.)
Microservices. When all components are independent daemons, talking over a common message bus or API, keeping them containerized (note how I don't mention docker specifically) makes keeping the system up very easy.
There's a good reason even a monster like OpenStack is moving towards containerizing all the various services it runs.
Yes, if you have microservices, which is getting traction but will be a long time before most workloads are that way, it can be very good to have minuscule containers to handle them individually.
-
@dyasny said in Testing oVirt...:
@scottalanmiller said in Testing oVirt...:
Yeah, but the frenzy around it was crazy. Seriously nuts. People were out of their minds in love with ZFS to the point that they based whole infrastructure decisions around getting it (and on FreeBSD no less.)
I've seen ZoL break way too many times to even consider it
ZoL isn't where the frenzy was.
-
@scottalanmiller said in Testing oVirt...:
They were. You are thinking x86 commodity space. But in the enterprise, we were using them heavily for a very, very long time.
Like I said, LPARs and similar tech from other vendors (don't even remember the names now) were much closer to containers than to proper VMs.
-
@scottalanmiller said in Testing oVirt...:
Yes, if you have microservices, which is getting traction but will be a long time before most workloads are that way, it can be very good to have minuscule containers to handle them individually.
It's pretty much the default for all new software that gets developed. New versions of existing legacy stuff are not included, of course.
-
@dyasny said in Testing oVirt...:
@scottalanmiller said in Testing oVirt...:
They were. You are thinking x86 commodity space. But in the enterprise, we were using them heavily for a very, very long time.
Like I said, LPARs and similar tech from other vendors (don't even remember the names now) were much closer to containers than to proper VMs.
LPARs are traditionally considered the "most proper" VMs; they are the heaviest weight. A full VM, like ESXi produces, is the closest thing to them in the commodity x86 space today. LPARs were nothing like containers: containers share a kernel, LPARs shared nothing.
-
@scottalanmiller said in Testing oVirt...:
ZoL isn't where the frenzy was.
I missed the FBSD frenzy; in fact, I haven't seen anything resembling a frenzy around that old thing for 10-12 years now. I wish there were one - moving companies to Linux from a pre-existing Unix setup is the easiest sell ever.
-
@dyasny said in Testing oVirt...:
@scottalanmiller said in Testing oVirt...:
Yes, if you have microservices, which is getting traction but will be a long time before most workloads are that way, it can be very good to have minuscule containers to handle them individually.
It's pretty much the default for all new software that gets developed. New versions of existing legacy stuff are not included, of course.
Yes, but there is a lot of legacy stuff that isn't going anywhere. Most people have to deal with legacy stuff indefinitely.
-
@dyasny said in Testing oVirt...:
@scottalanmiller said in Testing oVirt...:
ZoL isn't where the frenzy was.
I missed the FBSD frenzy; in fact, I haven't seen anything resembling a frenzy around that old thing for 10-12 years now. I wish there were one - moving companies to Linux from a pre-existing Unix setup is the easiest sell ever.
No one cared that it was FreeBSD, it was 100% about ZFS. In fact, companies packaged FreeBSD to hide it and touted only ZFS as the reason to use their stuff.
-
@scottalanmiller said in Testing oVirt...:
Yes, but there is a lot of legacy stuff that isn't going anywhere. Most people have to deal with legacy stuff indefinitely.
I get recruiter calls all the time, and they all want the new shiny tech, not old legacy knowledge - at least, all the recruiters who have a decent offer on hand. The ones who want old-school sysadmins to work on old systems that aren't going anywhere are offering minuscule wages.
And like I mentioned above, there are means of dealing with legacy stuff in containers, just like when VMware was starting to become prominent: a lot of effort was invested in supporting older OSes inside a VM, so that people would be able to move away from old hardware.