KVM Backing and Support
-
@scottalanmiller said in KVM Backing and Support:
@obsolesce said in KVM Backing and Support:
But it's so convenient and easy to be able to back up (agentless) VMs at the hypervisor level with the ability to restore files within VMs like you can with Hyper-V backup solutions.
- Is it? How is "so convenient" really important in IT? Unless you can put a dollar value on that convenience, it's not relevant.
- It comes at a cost, a cost of reliability and performance. I see loads of shops getting useless backups because they thought convenience trumped "working". It encourages lazy, bad backups and processes.
- Once you do all the due diligence and effort to get good backups, the difference in effort between agentless and agent-based is generally nominal.
Veeam does what I'm talking about and does reliable backups... as one example.
The time you save and what you get is worth it for an SMB, and it doesn't cost much in that case. Essentially, Essentials (Veeam's SMB bundle). If the cost is higher, then other options should be evaluated.
-
@obsolesce because Scott has decided that this is his new shiny thing and you will never dissuade him.
-
There are merits to both sides. For example, we do have a lot "backed up" in Git. Things like DHCP servers, DNS servers, web servers, etc. that don't have stateful data are stored in Git. Then that Git server is obviously backed up, and you get a little extra redundancy since Git is distributed by nature. We do "agent" based, but only because everything is under some type of CM, so it's easy to just make sure a system has the backup agent role applied to it, and that's done automatically.
But I can also see how small shops without much help would rather spend a small amount of money and be able to do agentless with not much extra work.
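For illustration, the Git side can be as simple as a cron-driven commit-and-push. A minimal sketch (the repo path, remote, and branch are hypothetical):

```python
# Hypothetical sketch of the "config in Git" approach for stateless servers:
# a cron-driven script that commits drift in a tracked config directory and
# pushes to a central (and separately backed up) Git remote.
import subprocess

REPO = "/etc/dnsmasq.d"  # assumed: a config dir already initialized as a Git repo

def commit_and_push(repo: str) -> None:
    # Stage everything, commit only if something actually changed, then push.
    subprocess.run(["git", "-C", repo, "add", "-A"], check=True)
    staged = subprocess.run(["git", "-C", repo, "diff", "--cached", "--quiet"])
    if staged.returncode != 0:  # non-zero exit means staged changes exist
        subprocess.run(["git", "-C", repo, "commit", "-m", "config snapshot"], check=True)
        subprocess.run(["git", "-C", repo, "push", "origin", "master"], check=True)

commit_and_push(REPO)
```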
-
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
-
@stacksofplates said in KVM Backing and Support:
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
This is also why I like Elasticsearch so much. Clustering is super easy and so are snapshots/backups.
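The snapshot side really is just a couple of REST calls. A minimal sketch against a hypothetical local cluster (the repository name and path are made up, and the "fs" repository type requires the path to be listed under path.repo in elasticsearch.yml):

```python
# Minimal sketch of Elasticsearch's snapshot API via plain REST calls.
# Host, repo name, and filesystem path are hypothetical.
import requests

ES = "http://localhost:9200"

# 1. Register a shared-filesystem snapshot repository.
requests.put(f"{ES}/_snapshot/my_backups", json={
    "type": "fs",
    "settings": {"location": "/mnt/es_backups"},
}).raise_for_status()

# 2. Take a snapshot of all indices and block until it completes.
requests.put(
    f"{ES}/_snapshot/my_backups/snap_1",
    params={"wait_for_completion": "true"},
).raise_for_status()
```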
-
@scottalanmiller said in KVM Backing and Support:
They might take more effort and skill
They tend to require more care and feeding, and by structure tend to use a lot more space. Just using application-level backup tools produces fulls on every backup job and does a ton more IO, and layering this with LVM snapshot shipping and volume shipping leads to tons of redundant copies of data vs. using something like Commvault that will dedupe everything in a pool. Because of the overhead and costs you often don't see very granular RPOs vs. something that has a journal log and can do DVR-style replay (like TimeFinder or RecoverPoint).
https://about.gitlab.com/2017/02/10/postmortem-of-database-outage-of-january-31/
Saying anyone that has a problem with recovery "isn't doing them right" is kind of a no true Scotsman argument.
While plenty of large shops "do it right" (Google etc.), I think pointing to the processes of people who run 100K server instances as the model for how an SMB should operate can quickly turn into the "cargo cult of the cloud". Large shops (like my employer) tend to employ a mix. Data that can be recreated, lacks compliance requirements, or is large analytic cloud data, I can see going down that route. Some web server and SQL VMs? Those are going to get a traditional backup tool.
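For anyone unfamiliar, the "LVM snapshot shipping" above usually amounts to something like this rough sketch (VG/LV names, mount point, and NAS target are hypothetical). Note that every run ships whole file copies, which is exactly where the redundant data vs. a deduped pool comes from:

```python
# Rough sketch of "LVM snapshot shipping": snapshot a volume, rsync the
# mounted snapshot to a NAS, then drop the snapshot. Nothing here dedupes
# across runs the way a Commvault-style pool does.
import subprocess

VG, LV = "vg0", "data"            # assumed volume group / logical volume
SNAP = f"{LV}_snap"
MNT = "/mnt/backup_snap"          # assumed existing mount point
NAS = "nas01:/backups/data/"      # assumed rsync-reachable NAS export

# Create a copy-on-write snapshot with 5G of changed-block headroom.
subprocess.run(["lvcreate", "-s", "-L", "5G", "-n", SNAP, f"/dev/{VG}/{LV}"], check=True)
try:
    subprocess.run(["mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MNT], check=True)
    try:
        # Ship the point-in-time view; whole changed files go over the wire.
        subprocess.run(["rsync", "-a", "--delete", f"{MNT}/", NAS], check=True)
    finally:
        subprocess.run(["umount", MNT], check=True)
finally:
    subprocess.run(["lvremove", "-f", f"/dev/{VG}/{SNAP}"], check=True)
```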
-
@stacksofplates said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
This is also why I like Elasticsearch so much. Clustering is super easy and so are snapshots/backups.
It's a bit unfair to compare a cloud-native NoSQL application that can play fast and loose with ACID consistency in its native capabilities against a relational database whose core engine was designed in the 1980s and whose mission is to "never lose a transaction at any cost". I do think more data goes into RDBMSs than needs to. Even if I'm going to use something like Cassandra, I'd consider running a packaged build with added tools for backup/recovery operations (DataStax?) just as it simplifies the admin overhead.
If every application requires bespoke skills to back up, DR, and recover, this is going to lead to crazy opex overhead. In many enterprises you could have hundreds or thousands of applications. At a certain point you start shoving everything into square holes (Java or .NET, backed by Oracle or MS SQL) so you can manage lifecycle. It's the same reason people run Hadoop in VMs. The average Hadoop instance is only 12TB, and operationalizing a new bare metal environment and creating another management silo is far worse than the overhead of putting that in a VM, even if running 1:1. There is a balance, but I find people tend to overcorrect. Large enterprises shove everything into few platforms, and SMBs often start sprawling sooner than they should.
-
@storageninja said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
This is also why I like Elasticsearch so much. Clustering is super easy and so are snapshots/backups.
It's a bit unfair to compare a cloud-native NoSQL application that can play fast and loose with ACID consistency in its native capabilities against a relational database whose core engine was designed in the 1980s and whose mission is to "never lose a transaction at any cost". I do think more data goes into RDBMSs than needs to. Even if I'm going to use something like Cassandra, I'd consider running a packaged build with added tools for backup/recovery operations (DataStax?) just as it simplifies the admin overhead.
If every application requires bespoke skills to back up, DR, and recover, this is going to lead to crazy opex overhead. In many enterprises you could have hundreds or thousands of applications. At a certain point you start shoving everything into square holes (Java or .NET, backed by Oracle or MS SQL) so you can manage lifecycle. It's the same reason people run Hadoop in VMs. The average Hadoop instance is only 12TB, and operationalizing a new bare metal environment and creating another management silo is far worse than the overhead of putting that in a VM, even if running 1:1. There is a balance, but I find people tend to overcorrect. Large enterprises shove everything into few platforms, and SMBs often start sprawling sooner than they should.
True, but I was more saying that we try to choose solutions that use things like Elasticsearch vs. using something else. Obviously that doesn't always work, but we do it for the exact same reason you mentioned. We can use Graylog and other tools that build on Elasticsearch and get nice-looking graphs in Grafana from the same data (just a simple example). I was meaning to make this point:
shove everything into few platforms
but did a bad job of it I guess.
-
@stacksofplates said in KVM Backing and Support:
There are merits to both sides. For example, we do have a lot "backed up" in Git. Things like DHCP servers, DNS servers, web servers, etc. that don't have stateful data are stored in Git. Then that Git server is obviously backed up, and you get a little extra redundancy since Git is distributed by nature. We do "agent" based, but only because everything is under some type of CM, so it's easy to just make sure a system has the backup agent role applied to it, and that's done automatically.
But I can also see how small shops without much help would rather spend a small amount of money and be able to do agentless with not much extra work.
The other thing that I think people lose track of in their "war on state sprawl" is that most companies don't control the code they have deployed; 75% of the code in large enterprises, they don't own. You can do platform migrations to open source and hire developers to do this, but if the alternative is $1000 a host for a Veeam license, you will get laughed out of the meeting by anyone who's done an ERP migration.
Realistically the easiest way to get rid of backup headaches is to make them someone else's problem. Use SaaS applications, and if it makes sense use SaaS Backup products (Spanning). If the person who owns the code is delivering it, ideally they should be able to achieve enough scale to make custom protection work, or aggregate enough demand to have more leverage with the backup vendors they purchase from.
-
@stacksofplates said in KVM Backing and Support:
shove everything into few platforms
but did a bad job of it I guess.
Anyone who uses Microsoft SQL for a log analytic platform did a bad job of it
Hilarious cost scaling issues, and backups with a high change rate get fun.
For log analytic situations where data sovereignty isn't a concern, rather than an SMB learning Elasticsearch (which isn't bad, to be fair), they could also just use a SaaS provider: SumoLogic, Log Intelligence (we just launched), Splunk (if they have lots of gold-pressed latinum), etc.
-
@storageninja said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
shove everything into few platforms
but did a bad job of it I guess.
Anyone who uses Microsoft SQL for a log analytic platform did a bad job of it
Hilarious cost scaling issues, and backups with a high change rate get fun.
For log analytic situations where data sovereignty isn't a concern, rather than an SMB learning Elasticsearch (which isn't bad, to be fair), they could also just use a SaaS provider: SumoLogic, Log Intelligence (we just launched), Splunk (if they have lots of gold-pressed latinum), etc.
Sadly I know some people that do just that. And have nothing but problems.
The nice thing about Graylog is there isn't much to learn: install the components and start the services. But yeah, it's definitely simpler to ship it off if you have that ability.
-
@storageninja said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
There are merits to both sides. For example, we do have a lot "backed up" in Git. Things like DHCP servers, DNS servers, web servers, etc. that don't have stateful data are stored in Git. Then that Git server is obviously backed up, and you get a little extra redundancy since Git is distributed by nature. We do "agent" based, but only because everything is under some type of CM, so it's easy to just make sure a system has the backup agent role applied to it, and that's done automatically.
But I can also see how small shops without much help would rather spend a small amount of money and be able to do agentless with not much extra work.
The other thing that I think people lose track of in their "war on state sprawl" is that most companies don't control the code they have deployed; 75% of the code in large enterprises, they don't own. You can do platform migrations to open source and hire developers to do this, but if the alternative is $1000 a host for a Veeam license, you will get laughed out of the meeting by anyone who's done an ERP migration.
Realistically the easiest way to get rid of backup headaches is to make them someone else's problem. Use SaaS applications, and if it makes sense use SaaS Backup products (Spanning). If the person who owns the code is delivering it, ideally they should be able to achieve enough scale to make custom protection work, or aggregate enough demand to have more leverage with the backup vendors they purchase from.
10000%. If you have the option to use someone else's systems, do it. However, while most things in our group are open source, our ERP is all tied into Oracle and a lot of that is delivered with APEX. To get out of that mess would cost astronomical amounts, so it's still there.
-
@storageninja said in KVM Backing and Support:
@scottalanmiller said in KVM Backing and Support:
They might take more effort and skill
They tend to require more care and feeding, and by structure tend to use a lot more space. Just using application-level backup tools produces fulls on every backup job and does a ton more IO, and layering this with LVM snapshot shipping and volume shipping leads to tons of redundant copies of data vs. using something like Commvault that will dedupe everything in a pool. Because of the overhead and costs you often don't see very granular RPOs vs. something that has a journal log and can do DVR-style replay (like TimeFinder or RecoverPoint).
https://about.gitlab.com/2017/02/10/postmortem-of-database-outage-of-january-31/
Saying anyone that has a problem with recovery "isn't doing them right" is kind of a no true Scotsman argument.
While plenty of large shops "do it right" (Google etc.), I think pointing to the processes of people who run 100K server instances as the model for how an SMB should operate can quickly turn into the "cargo cult of the cloud". Large shops (like my employer) tend to employ a mix. Data that can be recreated, lacks compliance requirements, or is large analytic cloud data, I can see going down that route. Some web server and SQL VMs? Those are going to get a traditional backup tool.
Also, there isn't an "enterprise solution", as if huge companies have one team of ops people that works on everything. If you look at larger companies, there are tons of smaller teams writing their own solutions. Places like Netflix encourage that: you can write whatever solution you want as long as it fits and meets API requirements, backup strategies, health checks, etc. There isn't one specific enterprise way of doing things.
-
@storageninja said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
This is also why I like Elasticsearch so much. Clustering is super easy and so are snapshots/backups.
It's a bit unfair to compare a cloud-native NoSQL application that can play fast and loose with ACID consistency in its native capabilities against a relational database whose core engine was designed in the 1980s and whose mission is to "never lose a transaction at any cost". I do think more data goes into RDBMSs than needs to. Even if I'm going to use something like Cassandra, I'd consider running a packaged build with added tools for backup/recovery operations (DataStax?) just as it simplifies the admin overhead.
That was kind of what I meant to point out. Those systems have been around for so long that they've had that much time to build in a native replication system (not just things like Galera). Postgres has something but I've never tried it. It just seems that if you've been around for 30 years, you could have an easier replication setup than currently exists.
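For reference, the "something" Postgres has is streaming replication. A rough sketch of standing up a standby, assuming a 'replicator' role already exists on the primary (host names and paths are made up):

```python
# Hypothetical sketch of Postgres' built-in streaming replication setup,
# driven from Python via subprocess. Hosts, the 'replicator' role, and
# paths are illustrative only.
import subprocess

PRIMARY = "db-primary.example.com"   # assumed primary host
DATADIR = "/var/lib/pgsql/data"      # assumed (empty) standby data directory

# One-time prep on the primary, via psql:
#   CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';
# and a pg_hba.conf entry:
#   host replication replicator <standby_ip>/32 md5

# On the standby: clone the primary. -X stream ships WAL during the copy,
# and -R writes the replication config (standby.signal plus primary_conninfo
# on PostgreSQL 12+), so the clone starts up as a live replica.
subprocess.run(
    ["pg_basebackup",
     "-h", PRIMARY,
     "-U", "replicator",
     "-D", DATADIR,
     "-X", "stream",
     "-R",
     "-P"],
    check=True,
)
# Then start the standby service, e.g.: systemctl start postgresql
```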
-
@stacksofplates said in KVM Backing and Support:
@storageninja said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
This is also why I like Elasticsearch so much. Clustering is super easy and so are snapshots/backups.
It's a bit unfair to compare a cloud-native NoSQL application that can play fast and loose with ACID consistency in its native capabilities against a relational database whose core engine was designed in the 1980s and whose mission is to "never lose a transaction at any cost". I do think more data goes into RDBMSs than needs to. Even if I'm going to use something like Cassandra, I'd consider running a packaged build with added tools for backup/recovery operations (DataStax?) just as it simplifies the admin overhead.
That was kind of what I meant to point out. Those systems have been around for so long that they've had that much time to build in a native replication system (not just things like Galera). Postgres has something but I've never tried it. It just seems that if you've been around for 30 years, you could have an easier replication setup than currently exists.
The other problem with systems like this is that their testing is very basic: often simply checksums or unit testing, not testing of the group of applications and VMs that has to function together to restore and hit an RPO point. If I'm using SRM or Veeam, I can easily do an automated test and spin up a group of 10 VMs that make up the full dependency chain and make sure that a test can be done.
If I'm just scripting backups of a Postgres DB, I'm at the mercy of my entire build toolchain to do a full-stack test (which is a massive, non-trivial amount of IO and time vs. SureBackup labs or linked clones triggered by SRM).
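To be concrete, here's a hypothetical version of the kind of script I mean, with the only "test" most of them ever get bolted on (database names and paths are made up). It proves the dump restores, not that the application stack comes up:

```python
# Minimal sketch of a scripted Postgres backup with a crude restore check:
# dump, restore into a scratch database, drop the scratch database.
import subprocess
from datetime import date

DB = "appdb"                                  # assumed database name
DUMP = f"/backups/{DB}-{date.today()}.dump"   # assumed backup target

# Take a compressed custom-format dump.
subprocess.run(["pg_dump", "-Fc", "-f", DUMP, DB], check=True)

# "Test" the backup: restore it into a throwaway database.
subprocess.run(["createdb", f"{DB}_restoretest"], check=True)
subprocess.run(["pg_restore", "-d", f"{DB}_restoretest", DUMP], check=True)
subprocess.run(["dropdb", f"{DB}_restoretest"], check=True)
# Anything beyond this (app servers, the full dependency chain) needs the
# kind of orchestrated lab that SureBackup or SRM provides.
```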
-
@scottalanmiller said in KVM Backing and Support:
@jaredbusch said in KVM Backing and Support:
@scottalanmiller said in KVM Backing and Support:
Use agent based,
Screw that shit. Let's just jump back to 1999 shall we?
It's not a jump back, it's sticking with the more enterprise solution. Agentless is limited in scope and requires support at the hypervisor, OS, and application level. Essentially no enterprise shop can use it, as there is no agentless system that supports the range of apps that shops use. So no enterprise has moved to agentless. Many use it as an "extra" piece, making backups more complex and more expensive, rather than less.
Really, for the time being, agentless is mostly just marketing hype. So jumping to "tried and true" rather than "sounds impressive and is rarely thought through" is exactly what we should want.
-
My biggest concerns w/ agent-based are:
- A NAS is cheap, so don't worry too much about space. Just back up.
- Is there a cheap solution with centralized management of backups? Cross platform?
Any hints?!
-
@matteo-nunziati said in KVM Backing and Support:
A NAS is cheap, so don't worry too much about space. Just back up.
Is there a cheap solution with centralized management of backups? Cross platform?
As a primary backup target, I'm becoming less in love with cheap NASes at scale:
- Restore performance is terrible if it's anything but a few files or a small VM.
- At large scale I've seen data integrity errors (especially on units that use non-MDRAID implementations).
-
@matteo-nunziati said in KVM Backing and Support:
My biggest concerns w/ agent-based are:
- A NAS is cheap, so don't worry too much about space. Just back up.
- Is there a cheap solution with centralized management of backups? Cross platform?
Any hints?!
-
The storage component is not related to agent vs. agentless; I'm not sure what you are asking here. You need a place to store the backups either way, and that's identical between the different backup approaches.
-
Agent-based is the norm; agentless is the niche. There are 10-100 agent-based options for every agentless one. And the big players, like Veeam, Unitrends, etc., offer both. It's "how you deploy the product", not which product you choose, in many cases. And yes, there are free options.
-
What does cross platform mean in this context?
-
Why do you worry about these things with agent-based and not with agentless, even though both are affected by them just the same?
-
@storageninja said in KVM Backing and Support:
For log analytic situations where data sovereignty isn't a concern, rather than an SMB learning Elasticsearch (which isn't bad, to be fair), they could also just use a SaaS provider: SumoLogic, Log Intelligence (we just launched), Splunk (if they have lots of gold-pressed latinum), etc.
Yeah, but to be fair, roughly two months of the cost of something like Splunk pays for someone competent to learn Elasticsearch. It's very hard to justify SaaS in that space.
-
@stacksofplates said in KVM Backing and Support:
@storageninja said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
There are merits to both sides. For example, we do have a lot "backed up" in Git. Things like DHCP servers, DNS servers, web servers, etc. that don't have stateful data are stored in Git. Then that Git server is obviously backed up, and you get a little extra redundancy since Git is distributed by nature. We do "agent" based, but only because everything is under some type of CM, so it's easy to just make sure a system has the backup agent role applied to it, and that's done automatically.
But I can also see how small shops without much help would rather spend a small amount of money and be able to do agentless with not much extra work.
The other thing that I think people lose track of in their "war on state sprawl" is that most companies don't control the code they have deployed; 75% of the code in large enterprises, they don't own. You can do platform migrations to open source and hire developers to do this, but if the alternative is $1000 a host for a Veeam license, you will get laughed out of the meeting by anyone who's done an ERP migration.
Realistically the easiest way to get rid of backup headaches is to make them someone else's problem. Use SaaS applications, and if it makes sense use SaaS Backup products (Spanning). If the person who owns the code is delivering it, ideally they should be able to achieve enough scale to make custom protection work, or aggregate enough demand to have more leverage with the backup vendors they purchase from.
10000%. If you have the option to use someone else's systems, do it. However, while most things in our group are open source, our ERP is all tied into Oracle and a lot of that is delivered with APEX.
APEX is just Access for people with deep pockets and no clue.