XenServer Export Performance Seems Poor
-
@olivier That looks to be a driver bug, not a network performance bug.
-
See the Gzip process sucking up all the CPUs in XenServer. I'm betting on far better performance without it.
-
@olivier said:
As you can see, you are not the first: https://bugs.xenserver.org/browse/XSO-44
Import/Export speed is a nightmare
Disabling compression is a good first step to avoid GZIP in XenServer (which is known to be slow).
I was wondering if this was the problem. Running a streaming GZIP during the copy process would definitely add overhead, but I would have expected to see the CPU or RAM being hit harder if that was the bottleneck.
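For anyone following along, the "disable compression" step from olivier's quote can be done straight from the host CLI. A minimal sketch, assuming the xe vm-export compress flag; the VM name, UUID, and target path here are placeholders:

```sh
# Look up the VM's UUID (the name-label is a placeholder)
xe vm-list name-label="my-big-vm" params=uuid

# Export without gzip; compress=false skips the single-threaded gzip stage
xe vm-export vm=<vm-uuid> filename=/mnt/backup/my-big-vm.xva compress=false
```

The resulting file is larger on disk, but it avoids pinning one core on gzip for the entire transfer.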
-
@olivier said:
See the Gzip process sucking up all the CPUs in XenServer. I'm betting on far better performance without it.
Doh - I somehow completely missed that GZIP was using 84% of the CPU. Thanks for pointing that out.
Can I assume that GZIP is single-threaded, and that's why the rest of the system is running fine but this one process is so slow? This is a hexacore single-processor system.
-
In any case, Gzip is extremely slow but gives a good compression ratio. I already asked the XAPI guys to add a flag to use LZ4, which is much faster but a little less efficient on the compression ratio side.
Anyway, disable it and redo a test.
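Until XAPI gets an LZ4 option, one possible workaround is to export uncompressed and compress the file afterwards. Just a sketch, assuming lz4 or pigz is installed somewhere with access to the file (dom0 won't have them by default), and using placeholder names/paths:

```sh
# Export uncompressed (compress=false is the xe vm-export default)
xe vm-export vm=<vm-uuid> filename=/mnt/backup/my-big-vm.xva

# Then compress with lz4 (much faster than gzip, somewhat lower ratio)...
lz4 /mnt/backup/my-big-vm.xva /mnt/backup/my-big-vm.xva.lz4

# ...or keep gzip's ratio but spread the work across cores with pigz
pigz -p 6 /mnt/backup/my-big-vm.xva
```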
-
@olivier said:
In any case, Gzip is extremely slow but gives a good compression ratio. I already asked the XAPI guys to add a flag to use LZ4, which is much faster but a little less efficient on the compression ratio side.
Anyway, disable it and redo a test.
I'm going to let this finish first. I'm at 25+ hours currently and 450 GB of 700 GB done.
What's odd: I left last night at 5 PM with 120 GB done (but my memory could be bad), arrived this morning to 400 GB done, and now two hours later it's up to 450 GB. It seems to have either gone to sleep overnight or just gotten faster this morning.
-
It just finished downloading.
700 GB compressed down to 478 GB.
30 hours to download.
-
@Dashrender said:
It just finished downloading.
700 GB compressed down to 478 GB.
30 hours to download.
Doesn't seem horrible... it could've been worse; it could have almost completed and then resulted in "file corrupt".
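If the worry is pulling a corrupt file after a 30-hour transfer, one small habit that helps (just a sketch; nothing XenServer does for you automatically, and the path is a placeholder) is recording a checksum as soon as the export finishes, so the file can be verified before any future import:

```sh
# Record a checksum right after the export/download completes
sha256sum /mnt/backup/my-big-vm.xva > /mnt/backup/my-big-vm.xva.sha256

# Later, before attempting an import, confirm the file still matches
sha256sum -c /mnt/backup/my-big-vm.xva.sha256
```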
-
But this doesn't answer why you have a 700GB VM. I'd move any data you can off of it and onto a protected network share.
This way you can restore the VM more rapidly rather than trying to import 700GB back into Xen.
-
@DustinB3403 said:
But this doesn't answer why you have a 700GB VM. I'd move any data you can off of it and onto a protected network share.
This way you can restore the VM more rapidly rather than trying to import 700GB back into Xen.
That data is what's important - so other than perhaps having multiple drives and exporting them individually, I'm not sure what I gain?
So let's say I have two VMs, one for the actual application and another for the data. Sure, if the small application VM dies, I can restore that quickly, but what about when the data VM dies? Then I'm still left in a long-haul restore process.
I suppose you might say - well, you could break the application VM via an update, which is much more likely than breaking the data VM. OK, that's true. Then I could restore my application VM quickly, connect to my data, and be back online faster.
But that means either buying another Windows Server license or making my data accessible to the Windows application server from a free *nix box, which for all intents and purposes should be possible.
-
@Dashrender said:
But that means either buying another Windows Server license or making my data accessible to the Windows application server from a free *nix box, which for all intents and purposes should be possible.
That's just setting up Samba for file sharing. Super standard.
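For context, the "super standard" part is roughly a share definition like this. A minimal smb.conf sketch; the share name, path, and user are placeholders, and the exact service name varies by distro:

```
# /etc/samba/smb.conf (minimal sketch; names and paths are placeholders)
[appdata]
    path = /srv/appdata
    browseable = yes
    read only = no
    valid users = appsvc

# Then, on the Linux box:
#   useradd appsvc && smbpasswd -a appsvc    (create the share user)
#   systemctl restart smb                    (may be "smbd" on Debian/Ubuntu)
```

The Windows application server then just maps \\<fileserver>\appdata like any other share.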
-
@scottalanmiller said:
@Dashrender said:
But that means either buying another Windows Server license or making my data accessible to the Windows application server from a free *nix box, which for all intents and purposes should be possible.
That's just setting up Samba for file sharing. Super standard.
Yeah, I know. And assuming I use all internal networking, I should be at nearly 1 Gb/s between the VMs.
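If it ever matters, that inter-VM throughput is easy to sanity-check with iperf3 (a quick sketch, assuming iperf3 is installed in both VMs; the IP is a placeholder). Traffic between VMs on the same host's internal network usually isn't limited by the physical NIC at all:

```sh
# On the file server VM
iperf3 -s

# On the application VM (10.0.0.5 stands in for the file server's address)
iperf3 -c 10.0.0.5 -t 30
```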
-
I would say that putting the data onto a protected NAS or Samba Share which is appropriately backed up would provide a higher level of protection for your Production VM.
As I see it the main purpose of the VM (in terms of recovery) is "how quickly can I recover this VM".
If it's a 700GB VM, you'd be there for 30 hours with the system down until it completed its import (assuming nothing goes wrong).
So by moving as much data off of the VM, you're offering a better level of protection to the business if you need to recover the VM.
The data can easily be protected between a Samba Share and a Data storage provider.
-
Right - but you missed what I was saying.
What makes the application VM any more vulnerable than the NAS/Samba share? Nothing really. Hardware-wise, the VM is probably better off than the NAS. And a Samba share should be inside a VM assuming it's not a NAS, so the Samba share is in exactly the same position as the application VM.
As I mentioned, the main thing that puts the application at greater risk is application/OS updates, of which the Samba share VM would only have OS updates.
I'm seeing you trying to say it's better to not have all of your eggs in one basket - which Scott has shown definitely isn't always true.
As for using a VM mainly because of "how quickly can I recover this VM" - yeah, there may be something to that, but that's not the main reason for me. For me it's ease of recoverability and portability - meaning I can stand the VM up on pretty much any hardware easily, because it's a VM, not a bare-metal restore that will require drivers, etc.
-
Add to all that this is nearly a read only system. Sadly I can't truly make it a read only system, but I don't have to worry about backups once I have a good working backup in place. If someone makes changes to it, I don't care about those changes.
-
@Dashrender said:
Add to all that this is nearly a read only system. Sadly I can't truly make it a read only system...
Why one and not the other? Meaning, why is it read only but you can't make it read only?
-
@scottalanmiller said:
@Dashrender said:
Add to all that this is nearly a read only system. Sadly I can't truly make it a read only system...
Why one and not the other? Meaning, why is it read only but you can't make it read only?
As it was explained to me, there aren't user permissions in the system that allow full reading without also allowing for some level of writing.
The vendor EOL'ed it in 2013 and we jumped off as close to the date as possible. There are no devs around for it that we have access to.
-
@Dashrender said:
As it was explained to me, there aren't user permissions in the system that allow full reading without also allowing for some level of writing.
So this is a database that the vendor does not control? How does such a weird situation arise?
-
If you have it running on a VM, surely you can make it read only from a higher level so that there is no need to write?
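One way that "read only from a higher level" could look at the hypervisor layer (just a sketch with placeholder names): keep a known-good snapshot and revert to it whenever you want to throw away any writes made inside the VM:

```sh
# Take a known-good snapshot once
xe vm-snapshot vm=<vm-uuid> new-name-label=known-good

# Later, discard any changes by rolling back to that snapshot
# (rolls the VM's disks back; the VM is left halted and needs starting again)
xe snapshot-revert snapshot-uuid=<snapshot-uuid>
```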
-
@scottalanmiller said:
@Dashrender said:
As it was explained to me, there aren't user permissions in the system that allow full reading without also allowing for some level of writing.
So this is a database that the vendor does not control? How does such a weird situation arise?
I don't know what you mean?
We bought a product called Clinician. It was bought/sold three times while we were using it. The last company that bought it killed it, of course in the hopes that we (and the rest) would just jump onto the new owner's main product - which we did not do.