As the title suggests, I'm at a client's where, for some reason, transferring large files to their file server has slowed to a crawl. What typically happens is that the transfer begins normally with throughput around 350-400 MB/s (yes, the host is connected to a 10Gb port on a Ubiquiti ES switch, and the primary video dev workstation is connected to a second 10Gb port), but after 1 or 2 seconds the transfer speed drops to 0 for a couple of minutes, and when the transfer resumes it hovers between 500 KB/s and 2 MB/s. The real kicker is that when their IT guy transfers the same file from the workstation to the Hyper-V host instead of the file server VM, the transfer speeds along at 500-600 MB/s.
The host is running Windows Server 2012 R2 with the Hyper-V role (their resident IT guy isn't comfortable without the GUI). The host and the file server VM are both fully patched. Jumbo frames are enabled throughout, including on the switch. The VM is running Windows Server 2012 R2 and its sole purpose is to act as a file server. The workstation is running Windows 10 Pro (fully updated). The file server VM has a 40TB data disk with >7TB of free space, and the host has >15TB free.
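For reference, the jumbo frame settings can be sanity-checked from PowerShell with something like the following (run on the host and again inside the guest; the display name of the jumbo property varies by NIC driver, and the switch/adapter names will be whatever your environment uses):

    # List the jumbo frame setting of every adapter; the display name
    # differs by driver ("Jumbo Packet", "Jumbo Frame", "Jumbo MTU"),
    # hence the wildcard match.
    Get-NetAdapterAdvancedProperty |
        Where-Object { $_.DisplayName -like "*Jumbo*" } |
        Format-Table Name, DisplayName, DisplayValue

    # Confirm which physical NIC the external vSwitch is bound to
    Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription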
As a second test, I tried transferring the same file from the Hyper-V host that runs the file server VM directly to the file server VM, and the transfer shows the same garbage performance as from the workstation to the file server VM.
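In case anyone wants numbers rather than watching the transfer dialog, a copy can be timed from PowerShell roughly like this (the file path and share name below are placeholders, not the client's actual ones):

    # Hypothetical source file and destination share
    $src  = "D:\Test\bigfile.mkv"
    $dest = "\\FS01\Data\bigfile.mkv"

    $size = (Get-Item $src).Length
    $time = Measure-Command { Copy-Item $src $dest -Force }

    # Average throughput over the whole copy, in MB/s
    "{0:N1} MB/s" -f ($size / 1MB / $time.TotalSeconds)

Note this only gives the average, so the initial 350-400 MB/s burst followed by the long stall shows up as a single low number.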
When I run ping -t tests from the workstation to the host, replies are consistently <1ms or 1ms with 0% loss. The same ping -t tests to the file server VM range from <1ms to as high as 11ms, also with 0% loss.
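Since jumbo frames are supposed to be enabled end to end, the same ping test can also be run with a jumbo-sized payload and the Don't Fragment bit set, to confirm 9000-byte frames actually make it through (8972 = 9000 minus 28 bytes of IP/ICMP headers; the addresses below are placeholders for the host and the VM):

    ping -f -l 8972 10.0.0.10
    ping -f -l 8972 10.0.0.11

If either of those comes back with "Packet needs to be fragmented but DF set", the jumbo path isn't actually clean.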
It seems to me that the issue is with the VM itself, but I can't find anything wrong with it. Any suggestions as to where else I can look?