KVM Backing and Support
-
@storageninja said in KVM Backing and Support:
@scottalanmiller said in KVM Backing and Support:
Actually, no one says that. No one.
The point is smaller, faster, more focused backups of relevant data. Not loads of fluff to sell more hardware. I've had this conversation with a shocking number of developers who thought Cloud = automated unlimited backup and DR.
Well, DevOps is never developers. Any developer using that term is confused, and that could apply to any lay person. Talking to any end user about backups makes no sense; that's not in their scope. In IT, DevOps always has backups.
-
@scottalanmiller said in KVM Backing and Support:
there is no agentless system that supports the range of apps that shops use
Veeam can hit Exchange/SQL/AD/Oracle/Windows/Linux (OS and file). For a lot of SMBs that's their estate.
-
@storageninja said in KVM Backing and Support:
@scottalanmiller said in KVM Backing and Support:
there is no agentless system that supports the range of apps that shops use
Veeam can hit Exchange/SQL/AD/Oracle/Windows/Linux (OS and file). For a lot of SMBs that's their estate.
Not as many as you'd think. I know almost zero. Someone will find an example case, but it will be so small that it's just silly to mention. In the real world, even tiny SMBs deal with things like QuickBooks, Sage, SAP, MySQL, etc. that aren't covered by Veeam.
-
@scottalanmiller said in KVM Backing and Support:
@storageninja said in KVM Backing and Support:
@scottalanmiller said in KVM Backing and Support:
Actually, no one says that. No one.
The point is smaller, faster, more focused backups of relevant data. Not loads of fluff to sell more hardware. I've had this conversation with a shocking number of developers who thought Cloud = automated unlimited backup and DR.
Well, DevOps is never developers. Any developer using that term is confused, and that could apply to any lay person. Talking to any end user about backups makes no sense; that's not in their scope. In IT, DevOps always has backups.
These tend to be in shops or departments where you have "developers gone wild" and they don't have an IT function. Just a credit card and a public cloud.
-
@storageninja said in KVM Backing and Support:
@scottalanmiller said in KVM Backing and Support:
@storageninja said in KVM Backing and Support:
@scottalanmiller said in KVM Backing and Support:
Actually, no one says that. No one.
The point is smaller, faster, more focused backups of relevant data. Not loads of fluff to sell more hardware. I've had this conversation with a shocking number of developers who thought Cloud = automated unlimited backup and DR.
Well, DevOps is never developers. Any developer using that term is confused, and that could apply to any lay person. Talking to any end user about backups makes no sense; that's not in their scope. In IT, DevOps always has backups.
These tend to be in shops or departments where you have "developers gone wild" and they don't have an IT function. Just a credit card and a public cloud.
Well, that's fine, but a shop without IT isn't one you really talk to about backups. That's like saying "I asked my grandparents and they didn't feel that they needed backups." No one is doubting that non-technical people or end users don't understand data protection. But that's not related to IT issues of believing backups aren't needed.
Any shop that thinks that they don't need IT is going to have a lot of crazy notions. They probably don't believe in passwords, either.
-
@scottalanmiller said in KVM Backing and Support:
QuickBooks, Sage, SAP, MySQL
QuickBooks' hosted offering has gotten a lot better (my last employer had migrated to it).
Sage uses Microsoft SQL for some of their apps, or they have apps that dump a local copy on a schedule, so a block-consistent backup of the file is good enough.
MySQL (or Maria) I used to see more of as part of LAMP stacks, but honestly where I saw it used in SMBs no one was managing the MySQL in any meaningful way, and it was generally crash consistent (or it had a rotation of a local dump that was captured in a crash-consistent copy of the VM). Veeam can run scripts in a VM before and after backups if you want to put a database in hot standby mode (what we did for some weirder databases) so when you recover the VM you know the database will be consistent. Worst case you can do a stop-service-before-backup and resume-afterwards script.
SAP I generally see hosted for SMB, although more SAP runs on Microsoft SQL than Oracle (Veeam can back up both). HANA is a different beast, but if you're running HANA in an SMB you are an outlier (HANA is only certified on appliances, so I suspect most SMBs consuming it would do so as a service from somewhere else).
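The worst-case pre/post script pair described above can be sketched roughly like this. The service name and the use of systemctl are assumptions; Veeam just runs whatever pre-freeze and post-thaw scripts you point it at.

```python
# Sketch of the "stop service before backup, resume afterwards" worst case.
# SERVICE and systemctl are assumptions -- substitute whatever quiesces
# your database; the backup tool only cares that the scripts exit cleanly.
import subprocess

SERVICE = "mysqld"  # hypothetical database service name

def freeze_cmd(service=SERVICE):
    """Command to quiesce the database before the snapshot is taken."""
    return ["systemctl", "stop", service]

def thaw_cmd(service=SERVICE):
    """Command to resume the database once the snapshot exists."""
    return ["systemctl", "start", service]

def run_phase(cmd, dry_run=True):
    """Execute one phase; dry_run just reports what would be done."""
    if not dry_run:
        subprocess.run(cmd, check=True)  # raise if the phase fails
    return " ".join(cmd)
```

The point of keeping the freeze and thaw as separate, trivially simple commands is that a failed freeze aborts the backup instead of silently producing an inconsistent one.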
-
@scottalanmiller said in KVM Backing and Support:
Any shop that thinks that they don't need IT is going to have a lot of crazy notions. They probably don't believe in passwords, either.
Nah, you just use API keys that hopefully Bill didn't check into GitHub and rack up a $50K bill in an hour.
-
@storageninja said in KVM Backing and Support:
SAP I generally see hosted for SMB, although more SAP runs on Microsoft SQL than Oracle (Veeam can back up both). HANA is a different beast, but if you're running HANA in an SMB you are an outlier (HANA is only certified on appliances, so I suspect most SMBs consuming it would do so as a service from somewhere else).
A lot of small SMBs still operate in places without reliable internet, so much of it is not hosted.
SAP primarily runs on HANA, not SQL Server or Oracle; those are legacy deployments that haven't updated yet. SAP makes their own database, and it only runs on Linux. HANA isn't appliance-only; in our latest SAP talk (two weeks ago) SAP didn't even offer appliances, only local installs to Linux servers.
HANA is more affordable than any other SAP offering, so it's a pretty big deal for SMBs. Only larger shops can afford SAP without HANA, and since HANA is the high-performance, recommended path, it makes the least sense for larger shops to choose the less supported, lower-performing option.
-
@storageninja said in KVM Backing and Support:
Veeam can run scripts in a VM before and after backups if you want to put a database in hot standby mode (what we did for some weirder databases) so when you recover the VM you know the database will be consistent. Worst case you can do a stop-service-before-backup and resume-afterwards script.
Sure, but at some point we've lost the benefits versus agent-based or just scripted, and we are only deploying the backup infrastructure in that way to prove a point - which is not IT's job to do.
-
I can't argue the merits of DevOps Backups, but this article seems to agree with @JaredBusch and @StorageNinja that "DevOps backups" are a horrible idea.
-
@scottalanmiller said in KVM Backing and Support:
@storageninja said in KVM Backing and Support:
Veeam can run scripts in a VM before and after backups if you want to put a database in hot standby mode (what we did for some weirder databases) so when you recover the VM you know the database will be consistent. Worst case you can do a stop-service-before-backup and resume-afterwards script.
Sure, but at some point we've lost the benefits versus agent-based or just scripted, and we are only deploying the backup infrastructure in that way to prove a point - which is not IT's job to do.
Isn't it exclusively IT's job to deploy the backup in a way that proves that what is being backed up is consistent and usable?
I mean, to sound like a fool here, "we" have audits to meet, and certainly one of those annoying questions will be "do you produce consistent backups?"
-
@dustinb3403 said in KVM Backing and Support:
I can't argue the merits of DevOps Backups, but this article seems to agree with @JaredBusch and @StorageNinja that "DevOps backups" are a horrible idea.
That article says nothing of the sort. In fact, it highlights the opposite. It even points out that DevOps approaches will work where traditional backups have less or no chance.
There's no risk to DevOps-style backups. If there appears to be, they are misunderstood. They might take more effort and skill, and they don't allow for lazy "just click a button and hope for the best" approaches commonly promoted as the purpose of other approaches, but when you do them, nothing is more reliable.
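A minimal sketch of what a scripted, DevOps-style backup with built-in verification might look like. The file copy stands in for a real database dump, and the paths are placeholders; the point is that the script checks its own output instead of hoping for the best.

```python
# Self-verifying backup sketch: copy the artifact, then compare checksums
# so a silent copy failure aborts loudly instead of producing a bad backup.
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    """Content hash of a file, used to verify the copy byte-for-byte."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup(source, dest_dir):
    """Copy source into dest_dir and verify the result; return the copy."""
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(source).name
    shutil.copy2(source, dest)
    if sha256(source) != sha256(dest):
        raise RuntimeError(f"backup of {source} failed verification")
    return dest
```

In a real pipeline the verification step would be a test restore, not just a checksum, but the shape is the same: the backup job itself proves the backup is good.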
-
@dustinb3403 said in KVM Backing and Support:
@scottalanmiller said in KVM Backing and Support:
@storageninja said in KVM Backing and Support:
Veeam can run scripts in a VM before and after backups if you want to put a database in hot standby mode (what we did for some weirder databases) so when you recover the VM you know the database will be consistent. Worst case you can do a stop-service-before-backup and resume-afterwards script.
Sure, but at some point we've lost the benefits versus agent-based or just scripted, and we are only deploying the backup infrastructure in that way to prove a point - which is not IT's job to do.
Isn't it exclusively IT's job to deploy the backup in a way that proves that what is being backed up is consistent and usable?
Once you are dedicated to doing backups well, why not take the trivial additional effort to do them "really well"? That's kind of the point. Modern DevOps-style backups aren't that much harder. Agentless only gets really easy when you use it to avoid real testing.
-
@scottalanmiller said in KVM Backing and Support:
@obsolesce said in KVM Backing and Support:
But it's so convenient and easy to be able to back up (agentless) VMs at the hypervisor level with the ability to restore files within VMs like you can with Hyper-V backup solutions.
- Is it? How is "so convenient" really important in IT? Unless you can put a dollar value on that convenience, it's not relevant.
- It comes at a cost, a cost of reliability and performance. I see loads of shops getting useless backups because they thought convenience trumped "working". It encourages lazy, bad backups and processes.
- Once you do all the due diligence and effort to get good backups the difference in effort between agentless and agent is generally nominal.
Veeam does what I'm talking about and does reliable backups... as one example.
The time you save and what you get is worth it for an SMB. It doesn't cost much in that case. Essentially, Essentials. If the cost is higher, then other options should be evaluated.
-
@obsolesce because Scott has decided that this is his new shiny thing and you will never dissuade him.
-
There are merits to both sides. For example we do have a lot "backed up" in Git. Things like DHCP servers, DNS servers, web servers, etc that don't have stateful data are stored in Git. Then that Git server is obviously backed up. And you get a little extra redundancy since Git is distributed by nature. We do "agent" based but only because everything is under some type of CM. So it's easy to just make sure that system has the agent's backup role applied to it and that's done automatically.
But I can also see how small shops without much help would want to spend a small amount of money and be able to do agentless without much extra work.
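The "stateless configs in Git" approach described above can be sketched as a drift check: fingerprint the config tree, and anything added or changed since the last snapshot is what gets committed. The hash comparison stands in for what `git status` would report, and the file names are examples.

```python
# Drift detection for config files tracked in version control.
# Configs for stateless services (DHCP, DNS, web servers) are fingerprinted;
# the diff against the last fingerprint is the set of files to commit.
import hashlib
from pathlib import Path

def fingerprint(directory):
    """Map each file under directory to a hash of its contents."""
    directory = Path(directory)
    return {
        str(p.relative_to(directory)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in directory.rglob("*") if p.is_file()
    }

def drifted(old, new):
    """Files added or changed since the last snapshot -- these get committed."""
    return sorted(f for f, h in new.items() if old.get(f) != h)
```

Because the source of truth is the repo, "restoring" a DHCP or DNS server is just a redeploy from Git, which is why only the Git server itself needs a traditional backup.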
-
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
-
@stacksofplates said in KVM Backing and Support:
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
This is also why I like Elasticsearch so much. Clustering is super easy and so are snapshots/backups.
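Elasticsearch snapshots really are just a couple of REST calls: register a repository once, then create snapshots in it. A sketch that builds the requests without sending them; the repository name and filesystem path are examples.

```python
# Builds the two REST requests behind Elasticsearch's snapshot feature:
# register a shared-filesystem repository, then snapshot into it.
# Returns (method, path, body) tuples rather than sending them anywhere.

def register_repo(name, location):
    """PUT /_snapshot/<name> -- register a shared-fs snapshot repository."""
    return ("PUT", f"/_snapshot/{name}",
            {"type": "fs", "settings": {"location": location}})

def create_snapshot(repo, snapshot):
    """PUT /_snapshot/<repo>/<snapshot>, blocking until it completes."""
    return ("PUT",
            f"/_snapshot/{repo}/{snapshot}?wait_for_completion=true",
            {})
```

Restores are the symmetric call against `/_restore`, which is a big part of why clustering plus snapshots feels so much lighter than traditional RDBMS backup tooling.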
-
@scottalanmiller said in KVM Backing and Support:
They might take more effort and skill
They tend to require more care and feeding, and by structure tend to use a lot more space. Just using application-level backup tools produces fulls on every backup job and does a ton more IO, and layering this with LVM snapshot shipping and volume shipping leads to tons of redundant copies of data vs. using something like Commvault that will dedupe everything in a pool. Because of the overhead and costs you often don't see very granular RPOs vs. something that has a journal log and can do DVR-style replay (like TimeFinder or RecoverPoint).
https://about.gitlab.com/2017/02/10/postmortem-of-database-outage-of-january-31/
Saying anyone that has a problem with recovery "isn't doing them right" is kind of a no true Scotsman argument.
While plenty of large shops "do it right" (Google, etc.), I think pointing to the processes of people who have 100K server instances as how an SMB should run can quickly turn into the "cargo cult of the cloud". Large shops (like my employer) tend to employ a mix. Data that can be recreated, lacks compliance requirements, and is large analytic cloud data I can see going down that route. Some web server and SQL VMs? Those are going to get a traditional backup tool.
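The dedupe point above is easy to demonstrate: repeated full backups of mostly-unchanged data collapse into a single pool of unique blocks. A toy model with a tiny block size for illustration:

```python
# Toy block-level dedupe: each full backup is split into fixed-size blocks
# keyed by content hash, so identical blocks across fulls are stored once.
import hashlib

def dedupe_store(fulls, block_size=4):
    """Return (deduped_bytes, raw_bytes) for a list of full backups."""
    pool = {}   # content hash -> block, shared across all fulls
    raw = 0     # what repeated fulls would cost without dedupe
    for data in fulls:
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            raw += len(block)
            pool[hashlib.sha256(block).hexdigest()] = block
    stored = sum(len(b) for b in pool.values())
    return stored, raw
```

Three identical 12-byte "fulls" cost 36 bytes raw but only 12 deduped; application-level tools that write independent fulls pay the raw cost every job, which is exactly the space overhead being described.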
-
@stacksofplates said in KVM Backing and Support:
@stacksofplates said in KVM Backing and Support:
It also bothers me to no end that the systems we use to store our most important data (databases) have the least backup (and redundancy) options. I try to use solutions that rely on them as little as possible (that's why I use things like Grav).
This is also why I like Elasticsearch so much. Clustering is super easy and so are snapshots/backups.
It's a bit unfair to compare a cloud-native NoSQL application that can play fast and loose with ACID consistency on its native capabilities against a relational database whose core engine was designed in the 1980s with a mission to "never lose a transaction at any cost". I do think more data goes into RDBMSs than needs to be. Even if I'm going to use something like Cassandra, I'd consider running a packaged build with added tools for backup/recovery operations (DataStax?) just as it simplifies the admin overhead.
If every application requires bespoke skills to back up, DR, and recover, this is going to lead to crazy opex overhead. In many enterprises you could have hundreds or thousands of applications. At a certain point you start shoving everything into square holes (Java or .NET, backed by Oracle or MS SQL) so you can manage lifecycle. It's the same reason people run Hadoop in VMs. The average Hadoop instance is only 12TB, and operationalizing a new bare-metal environment and creating another management silo is far worse than the overhead of putting that in a VM, even if running 1:1. There is a balance, but I find people tend to over-correct. Large enterprises shove everything into a few platforms, and SMBs often start sprawling sooner than they should.