Testing oVirt...
-
@obsolesce How many opensource OS and related products have you overseen from upstream ingestion to release?
Windows, actually, doesn't have much QA done; they release as soon as they stabilize, which is why no Windows is usable until at least SP1. That's how they killed Netware, where the QA cycle was around 18 months, but the OS came out absolutely solid and bulletproof. And nothing has changed since, except the adoption of DevOps methodologies, which speeds releases up even more.
-
@dyasny said in Testing oVirt...:
@obsolesce How many opensource OS and related products have you overseen from upstream ingestion to release?
Windows, actually, doesn't have much QA done; they release as soon as they stabilize, which is why no Windows is usable until at least SP1. That's how they killed Netware, where the QA cycle was around 18 months, but the OS came out absolutely solid and bulletproof. And nothing has changed since, except the adoption of DevOps methodologies, which speeds releases up even more.
Ya obviously there's no good QA at Microsoft or their stuff wouldn't be broken every month. I get that lol.
But that wasn't my point at all.
-
@obsolesce it just seems to me (and I might be wrong here of course) that you are speaking as an end user, a consumer of an opensource solution, and you do not know what is happening before the solution is released to the end user. Hence my question, which you ignored.
I'll make it clear though - for an enterprise level product to be released, even if it pulls in upstream code, everything has to be retested, both the functionality and the integration with the downstream stack. It would not be an enterprise product otherwise. And yes, this is no theory, I've been a part of this process in various capacities for over a decade now.
-
@dyasny said in Testing oVirt...:
@travisdh1 dnf or yum or whatever, it's just a utility name, you know what I'm talking about, that's what's important
:thumbs_up:
Frankly, knowing how often this (fedup) procedure breaks things or leaves muddy tracks all over the carpets, if you will, I'd hate to be responsible for a production setup where this is common practice. In disposable VMs - sure, who cares, just spawn some more if it dies. But on production platform machines - I like to be able to have weekends and see my family sometimes too much.
I actually agree with you that fedup was bad, which is why it was only around for a short time. The dnf tooling has been rock solid since they moved to it.
-
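For readers following along: the tooling that replaced fedup is dnf's system-upgrade plugin. A hedged sketch of the flow, wrapped in a function so nothing runs on paste (the release number is only an example; run this on a Fedora host you actually intend to upgrade):

```shell
# Sketch of the dnf system-upgrade flow that replaced fedup.
# Assumes a Fedora host with root access; releasever 29 is illustrative.
upgrade_fedora() {
    sudo dnf upgrade --refresh                        # get current on the running release first
    sudo dnf install dnf-plugin-system-upgrade        # provides the system-upgrade subcommand
    sudo dnf system-upgrade download --releasever=29  # stage packages for the target release
    sudo dnf system-upgrade reboot                    # reboot into the offline upgrade step
}
```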
@dyasny said in Testing oVirt...:
...everything has to be retested, both the functionality and the integration with the downstream stack.
That is exactly how I "summed it up" here in children's terms:
@obsolesce said in Testing oVirt...:
Fedora verifies it's working smoothly together, picking stable releases of every package that goes into it, and making sure they are happy together.
-
@dyasny said in Testing oVirt...:
I'll make it clear though - for an enterprise level product to be released, even if it pulls in upstream code, everything has to be retested, both the functionality and the integration with the downstream stack. It would not be an enterprise product otherwise. And yes, this is no theory, I've been a part of this process in various capacities for over a decade now.
But nobody I know of does this for every package included in a repository for every release. That would mean billions of tests for a modern distribution!
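A quick back-of-the-envelope check of that claim: even testing just the pairs of packages in a roughly 20,000-package distribution (an assumed count for illustration) already lands in the hundreds of millions, before you even consider triples.

```shell
# Rough scale of "test every combination": pairwise combinations alone
# for an assumed 20,000-package repository, using n*(n-1)/2 pairs.
pkgs=20000
pairs=$(( pkgs * (pkgs - 1) / 2 ))
echo "pairs to test: $pairs"   # ~2x10^8 for pairs alone
```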
-
@travisdh1 said in Testing oVirt...:
I actually agree with you that fedup was bad, which is why it was only around for a short time. The dnf tooling has been rock solid since they moved to it.
Which is why dnf is going to be in EL8 (afaik)
-
@dyasny said in Testing oVirt...:
@travisdh1 said in Testing oVirt...:
I actually agree with you that fedup was bad, which is why it was only around for a short time. The dnf tooling has been rock solid since they moved to it.
Which is why dnf is going to be in EL8 (afaik)
So then CentOS 8 too? Is that a reasonable assumption?
-
@travisdh1 said in Testing oVirt...:
But nobody I know of does this for every package included in a repository for every release. That would mean billions of tests for a modern distribution!
The Red Hat motto has always been "if we ship it, we support it", and to support something they have to test it. This is why the EL repos aren't as full of stuff as the upstream; you are correct that it is impossible to test the whole world.
-
@obsolesce said in Testing oVirt...:
So then CentOS 8 too? Is that a reasonable assumption?
If dnf is in RHEL8, it will also be in CentOS8, no doubt.
-
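If dnf does land in EL8, an easy way to see what the yum command actually invokes is to follow the symlink; in EL8-family systems yum ships as a thin compatibility shim over dnf. A hedged check, wrapped in a function (the exact symlink target may vary by build; verify on your own box):

```shell
# On an EL8-family system, yum is expected to be a thin shim over dnf.
show_yum_backend() {
    command -v yum                    # where the yum entry point lives
    readlink -f "$(command -v yum)"   # on EL8 this resolves to the dnf binary
}
```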
@dyasny said in Testing oVirt...:
@travisdh1 said in Testing oVirt...:
But nobody I know of does this for every package included in a repository for every release. That would mean billions of tests for a modern distribution!
The Red Hat motto has always been "if we ship it, we support it", and to support something they have to test it. This is why the EL repos aren't as full of stuff as the upstream; you are correct that it is impossible to test the whole world.
Even Red Hat can't test every combination of every package in their repository. Which is what brought on my previous statement.
-
@travisdh1 said in Testing oVirt...:
Even Red Hat can't test every combination of every package in their repository. Which is what brought on my previous statement.
There is hardly a need to test every package separately; they usually constitute part of a product stack, which is tested extensively.
-
@obsolesce said in Testing oVirt...:
@dyasny said in Testing oVirt...:
@travisdh1 said in Testing oVirt...:
I actually agree with you that fedup was bad, which is why it was only around for a short time. The dnf tooling has been rock solid since they moved to it.
Which is why dnf is going to be in EL8 (afaik)
So then CentOS 8 too? Is that a reasonable assumption?
They are one and the same; the packages are no different.
-
@dyasny said in Testing oVirt...:
@scottalanmiller EL is a platform, with the current container craze, all it really needs to be good at is running containers and supporting hardware well.
Containers != Enterprise software deployments. And it's a fad. If this is the basis for RHEL being seen as enterprise and Fedora not, that makes me feel more confident, rather than less.
Fedora is rock solid on containers too, but with later tech. If we don't care about the packages that come with the OS, and only the most basic pieces, Fedora blows CentOS out of the water.
If containers are the current standard for "enterprise", then I'm again, in the Fedora camp. I've seen the problems and instability with containers (presuming you mean Docker, not LXC) and yeah, that's what the kids trying to get jobs based on resume words do, but for enterprise workloads that actually matter, that's anything but the norm.
Containers are great for testing things, and sometimes great for internally controlled software. But for deploying other peoples' code... see all the threads where we've discussed how they aren't reliable because Docker just doesn't address compatibility issues in the real world. They put on a good marketing blitz, but it doesn't hold up in practice. Maybe someday, but "someday" comes several years earlier on Fedora than on CentOS / RHEL.
-
@scottalanmiller said in Testing oVirt...:
Containers != Enterprise software deployments. And it's a fad.
Yeah, only 10 years ago people said that about VMs
Fedora is rock solid on containers too, but with later tech. If we don't care about the packages that come with the OS, and only the most basic pieces, Fedora blows CentOS out of the water.
And again, you say "rock solid" but provide no proof. Can you show any research, benchmarks, stats, anything that shows Fedora is actually better and more stable than an EL distribution? And if you cannot, how about a man-hour comparison of engineering and QA effort that went into either? You know full well Fedora and any other non-enterprise distro can't compare, not even close.
If containers are the current standard for "enterprise", then I'm again, in the Fedora camp. I've seen the problems and instability with containers (presuming you mean Docker, not LXC) and yeah, that's what the kids trying to get jobs based on resume words do, but for enterprise workloads that actually matter, that's anything but the norm.
That hasn't been my experience. Just like before clouds became a thing and VMs were new, everyone was after talent who knew how to build virtualized DCs; it is all about containers now. And containers are the craze because they are so easy to automate. And guess which kind of distro is easier to automate and keep automated - one with stable APIs and handles, or one which changes things on the fly without really caring what your particular code does?
Containers are great for testing things, and sometimes great for internally controlled software. But for deploying other peoples' code... see all the threads where we've discussed how they aren't reliable because Docker just doesn't address compatibility issues in the real world.
Plenty of problems there, and of course containers don't fit every workload pattern (though there are advances there, for example persistent storage and kubevirt, to name a couple), but this is where the industry is not just going, but has been at for a while now. It's a large industry, and I know in some parts of it things are still in the dark ages (especially in SMBs, who are still running Windows SBS 2011 and don't really need anything else), but if you look at where the large corporations are, containers are in production everywhere.
They put on a good marketing blitz, but it doesn't hold up in practice. Maybe someday, but "someday" comes several years earlier on Fedora than on CentOS / RHEL.
This "someday" is already here, and has been for a while. And for anything that becomes interesting to the enterprise, EL (and the product portfolio based on it) is on exactly the same page as Fedora; that's how RHT became a multi-billion dollar open-source company.
-
@dyasny said in Testing oVirt...:
@scottalanmiller said in Testing oVirt...:
Containers != Enterprise software deployments. And it's a fad.
Yeah, only 10 years ago people said that about VMs
Well no, VMs have been the enterprise standard since 1964. Quite different. And containers aren't new, treating them like magic is the new fad.
Like ZFS. We had it for a decade, then it became a fad that no one could live without; now no one remembers it.
VMs are tried and true, as are true containers. But the Docker craze... that's a fad.
-
@dyasny said in Testing oVirt...:
If containers are the current standard for "enterprise", then I'm again, in the Fedora camp. I've seen the problems and instability with containers (presuming you mean Docker, not LXC) and yeah, that's what the kids trying to get jobs based on resume words do, but for enterprise workloads that actually matter, that's anything but the norm.
That hasn't been my experience. Just like before clouds became a thing and VMs were new, everyone was after talent who knew how to build virtualized DCs; it is all about containers now. And containers are the craze because they are so easy to automate. And guess which kind of distro is easier to automate and keep automated - one with stable APIs and handles, or one which changes things on the fly without really caring what your particular code does?
That's the problem with stability issues. People who don't have issues aren't good references. It's finding people for whom they aren't stable that tells the story. Unless containers are universally stable for everyone, they aren't stable.
Containers claim to be so easy to automate, but again, that's not our experience. Automation is already so easy. Most people raving about containers seem to be doing so without understanding other automation options. I'm not saying that Docker is bad, just that it is overblown and really so full of hype at this point that it's ridiculous. Great idea, useful, has its place, but not the place it has been elevated to.
-
@dyasny said in Testing oVirt...:
Containers are great for testing things, and sometimes great for internally controlled software. But for deploying other peoples' code... see all the threads where we've discussed how they aren't reliable because Docker just doesn't address compatibility issues in the real world.
Plenty of problems there, and of course containers don't fit every workload pattern (though there are advances there, for example persistent storage and kubevirt, to name a couple), but this is where the industry is not just going, but has been at for a while now.
Right, but if they aren't the answer to every workload, then we are back to needing the OS to support a range of things, not just containers.
-
@dyasny said in Testing oVirt...:
@travisdh1 said in Testing oVirt...:
But nobody I know of does this for every package included in a repository for every release. That would mean billions of tests for a modern distribution!
The Red Hat motto has always been "if we ship it, we support it", and to support something they have to test it. This is why the EL repos aren't as full of stuff as the upstream; you are correct that it is impossible to test the whole world.
Yes, and their support is excellent. One of the best in the business.
-
I like this take on Docker from an important database vendor: "Running Scylla in Docker is the simplest way to experiment with Scylla and we highly recommend it. However, running stateful containers is complex and tuning is needed to maximize the performance. We recommend that you use packages..."
I agree with this. Docker is great for testing, absolutely excellent. And for some workloads it's great for deploying (especially when it is internal code that you control and know will be compatible).
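For anyone who wants to try the vendor recommendation quoted above, a hedged sketch using Scylla's public Docker Hub image, wrapped in a function so nothing runs on paste (container name is arbitrary; this is a throwaway experiment, not a tuned production deployment):

```shell
# Quick Scylla experiment in Docker, per the vendor's own "simplest way
# to experiment" advice. Requires a working Docker daemon.
run_scylla_test() {
    docker run --name scylla-test -d scylladb/scylla   # start a single node
    sleep 30                                           # give the node time to boot
    docker exec scylla-test nodetool status            # confirm the node is up
}
```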