XenServer Export Performance Seems Poor
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
Oh well sure - I can definitely make it roll back on reboots - though, now that I think about it... I can't do that either, because while I want the data itself to be read only, I need access logs. Those logs are part of the system itself. Now, those logs are in the SQL DB, and I could just back up the SQL DB, or find a way to export them and only back up that part... then worry less about the rest...
So it is NOT read only, hence the problem.
The continuing discussion brought this to the surface... I wasn't intentionally leaving it out earlier... so yeah, that part at least is not read only.
Can the logs just go elsewhere? ELK for example?
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
Oh well sure - I can definitely make it roll back on reboots - though, now that I think about it... I can't do that either, because while I want the data itself to be read only, I need access logs. Those logs are part of the system itself. Now, those logs are in the SQL DB, and I could just back up the SQL DB, or find a way to export them and only back up that part... then worry less about the rest...
So it is NOT read only, hence the problem.
The continuing discussion brought this to the surface... I wasn't intentionally leaving it out earlier... so yeah, that part at least is not read only.
Can the logs just go elsewhere? ELK for example?
If I pay a developer to learn how it works - sure it could.
-
Where are the logs going now?
-
If I recall correctly, @Dashrender said the files on this server aren't the critical point for it, i.e. they are used when they are created and then put away into storage on the VM.
If that's the case, why not limit the size of the VM, allowing for a faster recovery, and then piecemeal-restore the data as it's needed from something like a Synology NAS?
My point is, the VM as described is only 700 GB because it was allowed to grow to this size, but it could be a meager 150 GB.
-
@scottalanmiller said:
Where are the logs going now?
Into the SQL DB on the server.
Same place where the EHR data lives.
-
@DustinB3403 said:
If I recall correctly, @Dashrender said the files on this server aren't the critical point for it, i.e. they are used when they are created and then put away into storage on the VM.
If that's the case, why not limit the size of the VM, allowing for a faster recovery, and then piecemeal-restore the data as it's needed from something like a Synology NAS?
My point is, the VM as described is only 700 GB because it was allowed to grow to this size, but it could be a meager 150 GB.
This is not correct - I guess there was a misunderstanding somewhere.
-
@Dashrender said:
@scottalanmiller said:
Where are the logs going now?
Into the SQL DB on the server.
Same place where the EHR data lives.
A developer could very quickly make a little component that takes those logs and outputs them to a text file. Realistically, you could do this with a one-line script - just one SQL query going out to a file. ELK will grab the file and boom, all done.
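Something along these lines, purely as a sketch - the table and column names (dbo.AccessLog, LogId, etc.) are hypothetical and would have to be matched to the real EHR schema:

```python
# Sketch only: dump new access-log rows from the SQL DB to a flat file
# so a shipper like Filebeat can pick them up and forward them to ELK.
# The table and column names (dbo.AccessLog, LogId, LogTime, UserName,
# Action) are hypothetical and must be adapted to the real EHR schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=EHR;Trusted_Connection=yes;"
)

# Remember the highest log ID exported last run, so each run only
# appends new rows.
try:
    with open("last_id.txt") as f:
        last_id = int(f.read().strip())
except FileNotFoundError:
    last_id = 0

rows = conn.execute(
    "SELECT LogId, LogTime, UserName, Action "
    "FROM dbo.AccessLog WHERE LogId > ? ORDER BY LogId",
    last_id,
).fetchall()

# Filebeat (or any log shipper) tails this file.
with open("access_log.txt", "a", encoding="utf-8") as out:
    for log_id, log_time, user, action in rows:
        out.write(f"{log_time}\t{user}\t{action}\n")
        last_id = log_id

with open("last_id.txt", "w") as f:
    f.write(str(last_id))
```

Schedule that to run every few minutes, point Filebeat at access_log.txt, and the ELK side needs nothing custom.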
-
This server has a 60 GB SQL DB, 500+ GB of TIFs (scanned-in paper documents), and another 100+ GB of application and other files associated with the old EHR.
At this point in time, the only thing changing on this system should be the access logs - who's logging in, who they're searching for, etc. The data in the DB, the TIF files, etc. should all remain static.
The system (other than the log growth) should not be growing. It has around 50 GB of free space currently, which should be a lifetime of space since the main data isn't growing anymore.
-
So @Dashrender, do you need all of the static data on the VM, everything that makes up the 700 GB, for it to function?
Or can all of the extra stuff get pushed off to something else?
If the goal is to ensure the VM boots and the database is accessible, then you should reduce the size of the VM as much as possible.
Anything that is static and can be moved out of it should be, I would imagine, so you could recover from a faulty OS update that much more quickly.
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
Where are the logs going now?
Into the SQL DB on the server.
Same place where the EHR data lives.
A developer could very quickly make a little component that takes those logs and outputs them to a text file. Realistically, you could do this with a one-line script - just one SQL query going out to a file. ELK will grab the file and boom, all done.
I'm guessing you're assuming that all of the logs are in a single table - and if that's true, then I agree with you.
-
@DustinB3403 said:
So @Dashrender, do you need all of the static data on the VM, everything that makes up the 700 GB, for it to function?
Yes - if anything on there is removed (or not mapped into it), the whole thing doesn't function as it should.
-
I should also add - 30 hours of downtime on this system would not be a huge deal.
-
@Dashrender said:
I should also add - 30 hours of downtime on this system would not be a huge deal.
If we have to go to a paper chart (yes, we still have tens of thousands of them in storage), it would take at least 24 hours to get it... this "old" system is now in that ballpark.
-
@Dashrender said:
I should also add - 30 hours of downtime on this system would not be a huge deal.
But again, that assumes the import and your backup are in good working condition. If either fails, it could be down for multiple days.
-
@DustinB3403 said:
@Dashrender said:
I should also add - 30 hours of downtime on this system would not be a huge deal.
But again, that assumes the import and your backup are in good working condition. If either fails, it could be down for multiple days.
And it would be down for multiple days if the data VM dies and doesn't restore correctly, either.
-
But with the data split out you could have multiple known-good copies; with the VM alone you only have your individual backups.
Those all need to be tested on a regular basis to confirm they function, and a test import would take ~30 hours at best.
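Just to illustrate what that test cycle looks like - a rough sketch using the standard xe vm-export/vm-import commands, where the UUID and paths are placeholders, and the import should run against a test host, not production:

```python
# Rough sketch of a scripted restore test for a XenServer VM backup.
# xe vm-export / vm-import are the standard XenServer CLI commands; the
# UUID and path below are placeholders. Run the import on a test host,
# not production. On a ~700 GB VM, expect this to take many hours.
import subprocess

VM_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder UUID
XVA_PATH = "/mnt/backup/ehr-test.xva"

# Export the VM to an XVA file (the VM should be halted, or export a
# snapshot instead, so the image is consistent).
subprocess.run(
    ["xe", "vm-export", f"vm={VM_UUID}", f"filename={XVA_PATH}"],
    check=True,
)

# Import it back as a new VM to prove the backup actually restores.
subprocess.run(
    ["xe", "vm-import", f"filename={XVA_PATH}"],
    check=True,
)
```

Booting the imported copy and confirming the application and DB come up is the part that actually proves the backup is good.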
-
@DustinB3403 said:
But with the data split out you could have multiple known-good copies; with the VM alone you only have your individual backups.
Those all need to be tested on a regular basis to confirm they function, and a test import would take ~30 hours at best.
Multiple known-good copies? Huh? Why would I have multiple copies of that non-changing data?
-
The very same reason you keep multiple copies of anything critical... so you have another to recover from.
Even if all 700 GB are in this VM, you don't keep just one backup of it.
-
@DustinB3403 said:
The very same reason you keep multiple copies of anything critical... so you have another to recover from.
Even if all 700 GB are in this VM, you don't keep just one backup of it.
You have a point here.
-
Dustin, you still haven't told me what makes my application VM more vulnerable than a data Samba share or a NAS, though, to warrant splitting it.