Virtual appliances?
-
I prefer using an iso. Almost never is a virtual image easy to use with my preferred virtualization host platform (KVM or Xen), and if using an appliance I really don't want to d*** around with a base OS first.
-
@travisdh1 said in Virtual appliances?:
I prefer using an iso. Almost never is a virtual image easy to use with my preferred virtualization host platform (KVM or Xen), and if using an appliance I really don't want to d*** around with a base OS first.
I like iso myself too as I've had some bad luck with ready-to-run images in the past. There always seems to be some kind of issue with different network drivers or installed guest additions.
I don't know the current status of OVA files though. Is that the current standard for distributing virtual appliances that are supposed to run on every common virtualization platform? Or is that just in theory?
-
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
-
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
-
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
-
I think it's more about the application now. If it was something designed for Windows 2003 and you put it in a container it would be terrible, but it would also be terrible installed normally. K8s is so easy to set up now that it's trivial to get things going. Even if you just use podman and systemd, I think it's a step above installing the application in a VM.
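For example, roughly like this (container name, port, and image are just placeholders):
sudo podman run -d --name myapp -p 8080:80 docker.io/library/nginx:latest
sudo podman generate systemd --new --files --name myapp
sudo cp container-myapp.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now container-myapp.service
That gives you a systemd-managed container that restarts with the host, no orchestrator needed.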
-
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
Btw not trying to argue with you. People def do it wrong. I'm just saying I've seen them do it wrong with VMs too.
-
@stacksofplates said in Virtual appliances?:
I think it's more about the application now. If it was something designed for Windows 2003 and you put it in a container it would be terrible, but it would also be terrible installed normally. K8s is so easy to set up now that it's trivial to get things going. Even if you just use podman and systemd, I think it's a step above installing the application in a VM.
I need to try out K8s again. The first time I tried using it, it was early days and a pain. From what you're saying it's a lot better/easier now.
-
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
Btw not trying to argue with you. People def do it wrong. I'm just saying I've seen them do it wrong with VMs too.
Oh I completely understand. Docker is super abused though.
-
@travisdh1 said in Virtual appliances?:
I prefer using an iso. Almost never is a virtual image easy to use with my preferred virtualization host platform (KVM or Xen), and if using an appliance I really don't want to d*** around with a base OS first.
I agree. If you're going to do this, I need an ISO.
-
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
Btw not trying to argue with you. People def do it wrong. I'm just saying I've seen them do it wrong with VMs too.
True, anything can be screwed up.
-
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
True. I think the problem is that Docker feels like it's never set up correctly for third party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.
-
@Pete-S said in Virtual appliances?:
@travisdh1 said in Virtual appliances?:
I prefer using an iso. Almost never is a virtual image easy to use with my preferred virtualization host platform (KVM or Xen), and if using an appliance I really don't want to d*** around with a base OS first.
I like iso myself too as I've had some bad luck with ready-to-run images in the past. There always seems to be some kind of issue with different network drivers or installed guest additions.
I don't know the current status of OVA files though. Is that the current standard for distributing virtual appliances that are supposed to run on every common virtualization platform? Or is that just in theory?
Yup, exactly. I consistently find I still need OS control. Whether VM or container, it never works without that.
-
@scottalanmiller said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
True. I think the problem is that Docker feels like it's never set up correctly for third party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.
What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise and writing micro services.
There isn't any need for operational oversight of devs because it's all done through things like merge/pull requests. Then tools like Flux/Argo/whatever deploy it for you.
I'm not sure what you mean about no production way to deploy. Automated pipelines are more production-ready than just installing packages on systems. You have easier rollback, easier ways to apply seccomp profiles, resources, etc. It's very production ready.
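Rolling back, for example, is a single command (the deployment name here is just a placeholder):
kubectl rollout undo deployment/myapp
kubectl rollout history deployment/myapp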
-
@travisdh1 said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
I think it's more about the application now. If it was something designed for Windows 2003 and you put it in a container it would be terrible, but it would also be terrible installed normally. K8s is so easy to set up now that it's trivial to get things going. Even if you just use podman and systemd, I think it's a step above installing the application in a VM.
I need to try out K8s again. The first time I tried using it, it was early days and a pain. From what you're saying it's a lot better/easier now.
For remote systems, k3s is probably easiest.
curl -sfL https://get.k3s.io | sh -
Run that and you have k8s. For local work, kind is probably easiest. It runs Docker containers as k8s nodes so you can deploy to them. It works really well.
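For kind it's about the same amount of effort (the cluster name is arbitrary):
kind create cluster --name dev
kubectl cluster-info --context kind-dev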
-
@stacksofplates said in Virtual appliances?:
@scottalanmiller said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
True. I think the problem is that Docker feels like it's never set up correctly for third party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.
What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise and writing micro services.
There isn't any need for operational oversight of devs because it's all done through things like merge/pull requests. Then tools like Flux/Argo/whatever deploy it for you.
I'm not sure what you mean about no production way to deploy. Automated pipelines are more production-ready than just installing packages on systems. You have easier rollback, easier ways to apply seccomp profiles, resources, etc. It's very production ready.
I think there is a big difference between the production environment of, say, a SaaS company and the rest of the companies that are not in the software business.
CI/CD pipelines seem highly unlikely in a company that doesn't develop software or provide software services. Why would they have that?
If you have enough workloads you need automation tools to deploy patches and administer your environment, but that is a different thing and something all environments of size need.
-
@Pete-S said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@scottalanmiller said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
True. I think the problem is that Docker feels like it's never set up correctly for third party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.
What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise and writing micro services.
There isn't any need for operational oversight of devs because it's all done through things like merge/pull requests. Then tools like Flux/Argo/whatever deploy it for you.
I'm not sure what you mean about no production way to deploy. Automated pipelines are more production-ready than just installing packages on systems. You have easier rollback, easier ways to apply seccomp profiles, resources, etc. It's very production ready.
I think there is a big difference between the production environment of, say, a SaaS company and the rest of the companies that are not in the software business.
CI/CD pipelines seem highly unlikely in a company that doesn't develop software or provide software services. Why would they have that?
If you have enough workloads you need automation tools to deploy patches and administer your environment, but that is a different thing and something all environments of size need.
SaaS companies aren't the only ones with internal development. Pretty much any Fortune 1000 and up has that.
But yes, pipelines are mostly for internal development. But you can also just deploy containers the same way. If you aren't using a CD tool to deploy the updated containers automatically, you would have a merge/pull request with the new container tag. The same idea applies, just not with the CI part.
It's not about having enough workloads to justify automating deployment. It takes almost no effort to automate container deployments. You run a helm install command against your cluster to set Flux up and then have it read a couple of YAML files. It's less work to do that than to update software the old way.
-
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
Btw not trying to argue with you. People def do it wrong. I'm just saying I've seen them do it wrong with VMs too.
Oh I completely understand. Docker is super abused though.
What do you mean by abused?
-
Here's an example. To set up Flux you run these few commands:
helm repo add fluxcd https://charts.fluxcd.io
kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml
kubectl create namespace flux
helm upgrade -i flux fluxcd/flux \
  --set git.url=git@github.com:user/some-repo \
  --namespace flux
That sets up Flux. Flux is now watching the repo you pointed it at in the last command.
If you don't use a predefined key, you just grab the SSH key Flux created and add it to your repo.
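If you have fluxctl installed, grabbing that key should just be:
fluxctl identity --k8s-fwd-ns flux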
Then to deploy something like NextCloud, you need these two files. The first creates a namespace for nextcloud. Not a requirement, but makes sense. The second is a HelmRelease file that the Flux Helm Operator uses to read the Helm chart for NextCloud.
apiVersion: v1
kind: Namespace
metadata:
  name: nextcloud
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: nextcloud
  namespace: nextcloud
  annotations:
    fluxcd.io/automated: "true"
    filter.fluxcd.io/chart-image: "glob:*"
spec:
  releaseName: nextcloud
  chart:
    repository: https://nextcloud.github.io/helm/
    name: nextcloud
  values:
    replicaCount: 2
    # any other values here to override in the chart
That's it. You now have a fully automated system that will automatically deploy the new updates to your NextCloud pods. You can disable the auto updates by removing the annotations and then manually update the container versions by adding the version in the HelmRelease. Once it's approved, Flux will update the containers.
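Pinning a version manually would roughly be an entry under values in the HelmRelease like this (the exact key depends on the chart, and the tag is just a placeholder):
    image:
      tag: "22.2.0"  # hypothetical pinned version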
You also have a Deployment that created a ReplicaSet of your pod because you set replicaCount to 2. So any traffic entering your cluster will be split between both replicas (or more if you define more). By default, k8s does a rolling update, so pods aren't all killed at once. The first pod will be terminated and a new one spun up with the updates. When it's live, the second will be terminated and recreated with the updates. So your service stays live during updates.
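You can watch that happen with something like this (namespace and deployment name assume the example above and may differ depending on the chart):
kubectl -n nextcloud rollout status deployment/nextcloud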
It's that easy. It shouldn't take you more than 10 minutes to set Flux up. Then the rest is just the specific things you need the apps to do, like with NextCloud: the type of database, whether you want ingress or not, those kinds of options.
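Those end up as a few more entries under values, roughly like this (check the chart's values file for the exact keys, these are just illustrative):
    ingress:
      enabled: true
    internalDatabase:
      enabled: false
    mariadb:
      enabled: true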
Containers and container orchestrators help literally every business, from small shops to giant enterprises developing hundreds or thousands of internal microservices.
I don't even have some things installed on my system anymore. I'll just run a container to use a specific tool and kill the container when I'm done. You can even have full dev environments packaged up in a container and have VSCode deploy itself in the container so you have a consistent development environment across different users. And that happens literally with the push of a button in VSCode.
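For example, instead of installing a toolchain locally, something like this (the image is just an example):
podman run --rm -it docker.io/library/golang:latest go version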
-
@stacksofplates What the what?
- Install Fedora
- sudo dnf install -y kubernetes
- systemctl enable --now podman
That's all it takes.