Intel I350 base-T BONDED PAIR throughput SUCKS!
-
So posting here on behalf of an SW user (topic here) who's getting horrid throughput with two NICs bonded on his Hyper-V 2012 server.
Any ideas?
-
What network switch is he using?
-
I'm not certain, just posting here to try and get more eyes on his issue.
The throughput jumps as soon as the NICs are unbonded.
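He could start by pulling the team config on the host. Something like this in PowerShell should show it, assuming he built the bond with the in-box Server 2012 LBFO teaming (no parameters needed, it lists everything):

    Get-NetLbfoTeam          # teaming mode and load-balancing algorithm
    Get-NetLbfoTeamMember    # which physical NICs are in the team and their state
    Get-NetLbfoTeamNic       # the team interface(s) exposed to the host / vSwitch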
-
Disable VMQ on the NIC devices if that option is available.
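Roughly, in an elevated PowerShell on the host it's something like this; the adapter names below are just examples, check the real ones with Get-NetAdapter first:

    Get-NetAdapterVmq                     # shows which adapters currently have VMQ enabled
    Disable-NetAdapterVmq -Name "NIC1"    # disable it per adapter (example name)
    Disable-NetAdapterVmq -Name "NIC2"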
-
@dafyre said:
Disable VMQ on the NIC devices if that option is available.
This is the biggest thing I have found.
VMQ is enabled by default, and his hardware likely does not support it.
-
I've gotten to where I disable VMQ and the *Offload options on any network card in my systems these days, especially if I'm going to use Hyper-V. 99% of the time I've seen either (or both) enabled, it has caused some really awful performance!
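For what it's worth, this is roughly what I run on a host (the wildcard hits every adapter, so double-check nothing actually depends on the offloads first):

    Disable-NetAdapterVmq -Name *                  # turn VMQ off on every adapter
    Disable-NetAdapterLso -Name *                  # large send offload
    Disable-NetAdapterChecksumOffload -Name *      # checksum offloads
    Get-NetAdapterAdvancedProperty -Name * | Where-Object DisplayName -like "*Offload*"    # confirm what's still enabled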
-
Anybody know what sort of blip I'll see if I disable VMQ in a live environment? A second or two? Would end users notice?
-
I doubt that they would notice, but it's possible.
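If you want a rough measure, one crude way: run a continuous ping against one of the VMs from a workstation, then disable VMQ on one host adapter at a time and count the dropped replies (the IP and adapter name here are just placeholders):

    ping -t 10.0.0.25                     # from a workstation, watch for dropped replies
    Disable-NetAdapterVmq -Name "NIC1"    # on the host, one adapter at a time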
-
lol yeah, that's the consensus. meh, we don't know haha
-
Not sure I've ever heard of someone trying it. Maybe just do it during lunch? What kind of workload(s) is it?
-
The SQL tier for the EMR, and a DC.
-
Anything external talking to SQL Server?
-
DC will definitely be fine.
-
All of the workstations that access the EMR (60 or so) query SQL.
-
It has been over a year since I did this on a running system. I do not recall any big issue.
Performance was so crappy, though, that even a quick blip would have just been ignored.
-
I like Scott's idea: at least warn management that you are doing it, and do it over lunch if they approve; otherwise, schedule it for tonight.
-
They haven't been whining about performance today, so I'm just gonna wait till tonight.