What's the first thing you do when you get a new laptop or system?
-
@creayt said:
- By removing, say, one of the applications from this channel (adapter 1) and moving it to its own channel (adapter 2), you guarantee that at any given moment the new network adapter isn't spending its clock cycles handling other requests, receiving new ones from other applications, or having to mix requests from the removed application into the packet batches.
- This has an effect similar to splitting a specific task off onto a dedicated second core in CPU processing. Although the primary core has more than enough horsepower and capacity to handle all of the work of the computer, isolating a task onto a separate core removes the task-switching penalties and "contention" for focus, and the second core will respond, even if only marginally or trivially, more quickly to any incoming tasks dedicated to it, because its resources are guaranteed not to be doing anything else, even if we say that none of the other work at play will or should matter because the hardware can easily handle all of it.
This is where the analogy breaks down. A NIC is not like a CPU. A CPU has cache; NICs are not caching the workload for repeatable code. The idea of horsepower is quite different as well: the NIC doesn't saturate its processor, so worrying about processing capacity is pointless. The bottleneck is always the network, which is completely different from a CPU, where having "spare cycles" buys other processes more speed. The NIC doesn't have this effect, so you aren't getting anything here.
The only bottleneck you are removing is a tiny bit of line wait, and if that is affecting you, like I said before, this is a horrible bandaid adding real latency all the time to resolve imagined latency some of the time. If you have line wait, get another NIC and bond them. Or move to 10GigE.
You are going after performance problems that don't exist and introducing real network latency in doing so.
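To put rough numbers on that trade-off, here is a quick back-of-envelope sketch in Python. The gigabit figure is just the serialization time of one full-size frame; the 1.5 ms Wi-Fi penalty is an assumed ballpark for illustration, not a measurement.

```python
# Rough back-of-envelope sketch (not a benchmark): compares the worst-case
# "line wait" behind one full-size frame on gigabit Ethernet against a
# ballpark figure for Wi-Fi's added latency. The Wi-Fi penalty is an
# assumption for illustration; real numbers vary with the environment.

FRAME_BYTES = 1538          # max Ethernet frame incl. preamble + inter-frame gap
GIG_E_BPS = 1_000_000_000   # 1 Gb/s line rate

# Worst case: your packet queues behind one full-size frame from another app.
head_of_line_wait_s = (FRAME_BYTES * 8) / GIG_E_BPS
print(f"Worst-case wait behind one frame: {head_of_line_wait_s * 1e6:.1f} us")
# ~12.3 microseconds

# Assumed typical extra latency from pushing traffic over Wi-Fi instead
# (contention, retransmits, airtime scheduling) -- illustrative only.
wifi_penalty_s = 1.5e-3
print(f"Assumed Wi-Fi penalty: {wifi_penalty_s * 1e6:.0f} us")
print(f"Ratio: ~{wifi_penalty_s / head_of_line_wait_s:.0f}x worse, on every request")
```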
-
@Dashrender said:
I ask because I don't know - is the processing of things into packets done at the NIC layer, or is it done in software and the CPU before being sent to the NIC to put it on the line?
I know that some advanced NICs, like those used for SANs, do offload some of the processing from the system CPU, but I'm not sure if that's the case in a normal PC/laptop.
Packetization is normally done on the NIC, but it uses no real overhead.
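For anyone who wants to see what their own NIC actually offloads, here is a minimal sketch for a Linux box, assuming ethtool is installed; the interface name "eth0" is a placeholder.

```python
# Minimal sketch: on Linux, `ethtool -k` lists which parts of packet processing
# are offloaded to the NIC. "eth0" is a placeholder interface name.
import subprocess

def offload_features(interface: str = "eth0") -> dict:
    """Return the NIC's offload feature flags as a dict of name -> 'on'/'off'."""
    output = subprocess.run(
        ["ethtool", "-k", interface],
        capture_output=True, text=True, check=True,
    ).stdout
    features = {}
    for line in output.splitlines()[1:]:               # skip the header line
        if ":" in line:
            name, state = line.split(":", 1)
            features[name.strip()] = state.split()[0]  # drop "[fixed]" notes
    return features

if __name__ == "__main__":
    feats = offload_features()
    for key in ("tcp-segmentation-offload", "generic-segmentation-offload",
                "generic-receive-offload", "rx-checksumming", "tx-checksumming"):
        print(f"{key}: {feats.get(key, 'n/a')}")
```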
-
@Dashrender said:
I ask because I don't know - is the processing of things into packets done at the NIC layer, or is it done in software and the CPU before being sent to the NIC to put it on the line?
I know that some advanced NICs, like those used for SANs, do offload some of the processing from the system CPU, but I'm not sure if that's the case in a normal PC/laptop.
I think this is a key part of what's missing in how I see things too. It's my impression that NICs do a ton of processing internally, which is why one of the main things that differentiates their performance/speed/price is the speed of their internal processor. For example, the "Killer" brand NIC in the laptop I just returned touts its 400 MHz processor. As far as my current understanding and the steps I outlined go, that would mean that, compared to a NIC with a 200 MHz processor, it would chip away at latency during all the moments it's translating requests into packets and deserializing requests from packets, by doing so roughly twice as fast. Now that's clearly just one part of the latency contribution, but it's still a part where a faster processor, or offloading a task to a second processor in the case of using both a wired and a wireless adapter (two processors instead of one), could theoretically reduce the end latency value.
-
@scottalanmiller said:
Nearly all of that task is done by the OS, not the NIC. Even with TCP Offload enabled (it rarely is) there is a lot done by the OS.
Ah, ok, well that certainly changes things. What purpose does a faster internal NIC processor serve? Here's a quote from a Tom's Hardware review for reference:
@review said:
You may recall that the Killer NIC derived its strength from a few key enhancements over regular integrated network controllers. First and foremost, the adapter used an on-board 400 MHz processor to handle all network packet processing. This offloaded traffic from the host CPU and side-stepped the Windows networking stack. Killer actually had a Linux distribution on the card, turning it into a sort of PCI Express-based co-computer.
-
@Dashrender said:
Scott also mentioned that the WiFi connection will suffer the additional inefficiencies inherent to WiFi - latency and a contention-based network. Now, if the LAN port were saturated and you'd see actual gain from splitting traffic over two network connections, then you could overcome these inefficiencies, but that doesn't seem to be the case.
Right, which I mentioned earlier - the only case in which this would be beneficial is when you are bandaiding a saturated Ethernet connection in which case you need to do something a lot better than this. In all other cases, this is a negative.
The WiFi connection adds huge latency and network risk compared to the Ethernet connection. If adding a second NIC was a big deal, I could see doing this maybe in a really rare case. But we are talking about a trivial hardware update to go to 2 - 4 Ethernet ports.
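If you want to see the difference for yourself, here is a rough sketch that times TCP connects over each adapter by binding the socket's source address. All of the addresses (TARGET, WIRED_IP, WIFI_IP) are placeholders for your own LAN, and the numbers are only indicative, since routing, power saving, and AP load all shift Wi-Fi results around.

```python
# Rough sketch to compare round-trip latency over the wired vs. Wi-Fi adapter
# by binding the socket's source address to each interface. All addresses are
# placeholders; TARGET is assumed to have an open TCP port on the LAN.
import socket, time

TARGET = ("192.168.1.1", 80)   # placeholder: something on the LAN with an open port
WIRED_IP = "192.168.1.50"      # placeholder: address of the Ethernet adapter
WIFI_IP = "192.168.1.51"       # placeholder: address of the Wi-Fi adapter

def connect_time(source_ip: str, samples: int = 10) -> float:
    """Average TCP connect time (ms) to TARGET using the given source address."""
    total = 0.0
    for _ in range(samples):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((source_ip, 0))
        start = time.perf_counter()
        s.connect(TARGET)        # roughly one SYN/SYN-ACK round trip
        total += time.perf_counter() - start
        s.close()
    return total / samples * 1000

print(f"Wired: {connect_time(WIRED_IP):.2f} ms")
print(f"Wi-Fi: {connect_time(WIFI_IP):.2f} ms")
```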
-
@creayt said:
I think this is a key part of what's missing in how I see things too. It's my impression that NICs do a ton of processing internally, which is why one of the main things that differentiates their performance/speed/price is the speed of their internal processor. For example, the "Killer" brand NIC in the laptop I just returned touts its 400 MHz processor. As far as my current understanding and the steps I outlined go, that would mean that, compared to a NIC with a 200 MHz processor, it would chip away at latency during all the moments it's translating requests into packets and deserializing requests from packets, by doing so roughly twice as fast.
That's why I mentioned the fact that NICs are generally at wire speed. Once at wire speed, there is no "faster" no matter what you do. The biggest benefit of extra processing power is not in making the NIC faster (often this makes it slower, losing wire speed) but in taking load off of the CPU itself.
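A little arithmetic shows why wire speed is the ceiling. This is just standard gigabit Ethernet frame math, not a claim about any specific NIC.

```python
# Quick arithmetic sketch of what "wire speed" means for a 1 GbE NIC.
# Once the NIC can keep the line full, extra on-board MHz can't make frames
# leave any sooner -- the line rate sets the ceiling.

LINE_BPS = 1_000_000_000
FULL_FRAME_BYTES = 1538       # 1500 payload + headers, preamble, inter-frame gap
MIN_FRAME_BYTES = 84          # minimum frame incl. preamble + inter-frame gap

full_fps = LINE_BPS / (FULL_FRAME_BYTES * 8)
min_fps = LINE_BPS / (MIN_FRAME_BYTES * 8)
print(f"Full-size frames/s at line rate: {full_fps:,.0f}")    # ~81,274
print(f"Minimum-size frames/s at line rate: {min_fps:,.0f}")  # ~1,488,095

for clock_hz, label in ((400e6, "400 MHz"), (200e6, "200 MHz")):
    print(f"{label} NIC processor: ~{clock_hz / full_fps:,.0f} cycles "
          f"per full-size frame at line rate")
# Either way there are thousands of cycles per frame -- the wire, not the
# on-board processor, limits how fast frames go out.
```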
-
Remember that RAID cards are slower when you go to hardware RAID compared to software RAID. But we use them because it offloads processing from the main CPU and because of convenience. But we never do it for speed.
-
@scottalanmiller said:
Remember that RAID cards are slower when you go to hardware RAID compared to software RAID. But we use them because it offloads processing from the main CPU and because of convenience. But we never do it for speed.
Why are hardware RAIDs slower than software? One uses a (I hope) specially designed processor for this task; the other uses the CPU.
-
@Dashrender said:
Why are hardware RAIDs slower than software? One uses a (I hope) specially designed processor for this task; the other uses the CPU.
Because the central CPU is just SO much faster. Even a specially designed $50 processor can't keep up with that $800 Xeon that is powering the main system.
Software RAID became almost universally faster around 2001 when the Pentium III became the standard entry point server processor.
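As a rough illustration of the gap (assuming numpy is installed; this is not a real RAID benchmark), here is how quickly a general-purpose CPU churns through RAID-5-style XOR parity:

```python
# Rough illustration only: how fast a general-purpose CPU computes
# RAID-5-style XOR parity. Gives a sense of scale for why a small dedicated
# RAID processor struggles to keep up with the host CPU.
import time
import numpy as np

CHUNK_MB = 64
stripe_a = np.random.randint(0, 256, CHUNK_MB * 1024 * 1024, dtype=np.uint8)
stripe_b = np.random.randint(0, 256, stripe_a.size, dtype=np.uint8)

start = time.perf_counter()
parity = np.bitwise_xor(stripe_a, stripe_b)   # parity of a two-data-disk stripe
elapsed = time.perf_counter() - start

print(f"XORed {CHUNK_MB} MB in {elapsed * 1000:.1f} ms "
      f"(~{CHUNK_MB / elapsed / 1024:.1f} GB/s on this machine)")
```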
-
So why haven't we moved to that solution on Intel-based systems? Would we see so little gain? Or would this require a fundamental change for the system board makers to make hot-swappable plugs? Or are the big vendors holding us back because of the prices they get to charge us for RAID cards?
If software really is faster, why not go that way, unless there are other things holding us back that make it either more expensive or impossible to do?
-
@Dashrender said:
So why haven't we moved to that solution on Intel-based systems? Would we see so little gain?
Because, like nearly everything in SMB IT, performance is not a key issue. If we were concerned about performance as the primary factor, we would not be using AMD64 processors at all; we'd run nothing but UNIX, on software RAID, etc.
We run Windows, AMD64 chips and hardware RAID because they are easy, convenient and protect us. There is almost no major decision made in SMB IT (or even enterprise IT) where performance is the driving factor. A secondary or tertiary one maybe, but not a driving one.
Software RAID is the only option on big iron servers and always has been. Hardware RAID only exists because of deficiencies in how the SMB world handles software RAID (Windows SR is terrible, VMware doesn't have it, etc.)
-
@Dashrender said:
Would we see so little gain?
Extremely little. The only place you'd really see it is on RAID 6 and 7 systems, and RAID 7 is software RAID only already, so that point is moot.
-
@Dashrender said:
Or would this require a fundamental change for the system board makers to make hot-swappable plugs? Or are the big vendors holding us back because of the prices they get to charge us for RAID cards?
They are all hot-swappable already and have been for as long as I've been aware. You can go to MDADM, Windows SR or ZFS today, and hot swap has been there since the 1990s at least.
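For reference, the Linux software RAID hot-swap workflow looks something like this sketch; the array and device names are placeholders, mdadm needs root, and the commands actually fail and replace the named member.

```python
# Sketch of the software RAID hot-swap workflow on Linux with mdadm, wrapped in
# Python for illustration. /dev/md0, /dev/sdb1 and /dev/sdc1 are placeholders.
import subprocess

ARRAY = "/dev/md0"         # placeholder md array
FAILED_DISK = "/dev/sdb1"  # placeholder failing member
NEW_DISK = "/dev/sdc1"     # placeholder replacement (hot-plugged)

def mdadm(*args: str) -> None:
    subprocess.run(["mdadm", *args], check=True)

mdadm(ARRAY, "--fail", FAILED_DISK)     # mark the dying member as failed
mdadm(ARRAY, "--remove", FAILED_DISK)   # detach it so it can be pulled
# ...physically swap the drive, then add the new one; the array rebuilds online:
mdadm(ARRAY, "--add", NEW_DISK)
subprocess.run(["cat", "/proc/mdstat"])  # watch rebuild progress
```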
-
@Dashrender said:
If software really is faster, why not go that way, unless there are other things holding us back that make it either more expensive or impossible to do?
Because outside of the most extreme cases, speed just isn't that important. And when it is, the truly high speed systems like FusionIO can't use hardware RAID anyway.
-
Same reasons that we don't tune our filesystems for the absolute fastest performance. NTFS isn't the fastest FS out there, but it is fast enough. The differences just are not that important 99.999% of the time.
-
@scottalanmiller said:
Because, like nearly everything in SMB IT, performance is not a key issue.
That right there is the god damn truth. It shocked me for a second to see it in black and white but, damn it, it's true.
-
OK, so speed isn't a driver, but cost often is - wouldn't our systems be less expensive if we dumped the RAID controller? Or, because Windows is so bad at SR, is the cost of the controller worthwhile?
-
@Dashrender said:
Or, because Windows is so bad at SR, is the cost of the controller worthwhile?
Well if the point is to protect your data ......
-
@Dashrender said:
OK, so speed isn't a driver, but cost often is - wouldn't our systems be less expensive if we dumped the RAID controller?
Cost isn't a primary driver either; otherwise, again, we wouldn't be using Windows, right? Windows is like hardware RAID... pay more, get less... except it comes with some "ease of use" features that tend to pay off.
Hardware RAID is super simple when you need to deal with separation of duties or blind swap (datacenter swapping without system admin interaction.) Hardware RAID is "idiot proof" allowing IT pros who don't know how their systems work or don't even know what is running there to do drive swaps based on blinking lights alone. In fact, it makes it so easy, that drive replacement is no longer an IT task but a bench task. No computer knowledge needed. See a yellow light, replace with a matching part. Don't even need to know that it's a computer you are working on.
-
@scottalanmiller said:
@Dashrender said:
Or, because Windows is so bad at SR, is the cost of the controller worthwhile?
Well if the point is to protect your data ......
So that's it - Windows is so bad at SR that our data is safer in hardware RAID... I wonder why MS doesn't fix this? Wouldn't customers end up better off? I'm guessing the effort just wouldn't pay off for them?