LAN speed
-
@IT-ADMIN said:
the front side of the NAS itself has LEDs above each HD, I thought you were talking about them
Those LEDs are status indicators for the drives themselves, not for the performance of the NIC connected to the back of the NAS.
-
@DustinB3403 said:
Can you go into the network page on the NAS and screenshot that for us?
I have a feeling the link speed within the NAS is set to Auto, which, if it is, is likely what is making the NIC slow. So either there is something wrong with the configuration on your switch, or on the NAS.
Yes, you are right, the link speed is set to auto. Should I change it to 1000?
-
Yes, but be warned.
If for some reason the speed isn't supported on your switch, you'll likely be unable to manage the NAS without performing a factory reset on it.
It should be supported, but it sounds as if the NAS is connected at 100Mb/s.
Confirm that you can connect to the NAS directly over Ethernet before making the switch to 1000Mb/s, in case you must change it back.
-
Thank you for your advice, but the Cisco switch (Catalyst 3560) is 1 Gb/s, that is for sure.
-
It's been a while since I've seen auto not work as advertised,
So I ask: is it still best practice to manually set the switch and server to the desired speed and duplex?
-
So what do you think, guys? Should I change it to 1000?
-
@Dashrender It's never a best practice, but to pinpoint causes of trouble sometimes you must.
@IT-ADMIN can you confirm that you can access the NAS with the second port or a cross-over cable? We don't want you to have to perform a factory reset unnecessarily.
-
Does the factory reset delete the data, or just reset the settings?
-
That generally will delete everything, including the RAID configuration.
Hence, "Factory Reset".
-
@coliver said:
Oh man... it looks like solid blue indicates the port is running at 10Mb/s (if I'm reading the right documentation), and if the duplex LED is also blue then you are running at half duplex.
Can't be, he's getting faster than that now.
5.5MB/s is 44Mb/s.
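For anyone following along, the conversion is just bytes to bits; a quick sketch:

```python
# Convert a measured transfer rate from megabytes/s to megabits/s.
mb_per_s = 5.5                 # measured transfer speed in MB/s
mbit_per_s = mb_per_s * 8      # 8 bits per byte
print(f"{mb_per_s} MB/s = {mbit_per_s:.0f} Mb/s")  # 5.5 MB/s = 44 Mb/s
```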
-
@IT-ADMIN said:
@DustinB3403 said:
Can you go into the network page on the NAS and screenshot that for us?
I have a feeling the link speed within the NAS is set to Auto, which, if it is, is likely what is making the NIC slow. So either there is something wrong with the configuration on your switch, or on the NAS.
Yes, you are right, the link speed is set to auto. Should I change it to 1000?
No, it is working perfectly now. GigE requires Auto. Setting it to 1000 is an unofficial mode only supported by a few vendors who don't follow the specs.
-
@IT-ADMIN said:
So what do you think, guys? Should I change it to 1000?
No, the network is clearly not an issue here. What started us even looking at that? The speed that you are getting is clearly not being capped by Ethernet speed negotiation. This cannot be where your issue is.
-
@DustinB3403 said:
@Dashrender It's never a best practice, but to pinpoint causes of trouble sometimes you must.
We've eliminated that as a possible concern already.
-
Okay, moving on from the NIC, which is a red herring, let's talk about where the issues CAN be...
They could be...
- That the NAS cannot go faster than this. NICs do not determine the speed that can be achieved, the device does. What is the setup of the device, the protocols, and the actions that are being measured at 5.5MB/s (aka 44Mb/s)?
- The network is saturated causing it to slow down to this speed. The NIC we know for certain is much faster than the speed that you are getting in the transfer. So if the network is the issue, it is from a bottleneck along the path. This is extremely unlikely as LAN bottlenecks in the switched networking world for SMBs pretty much don't exist until you start adding VLANs and getting silly.
- The device receiving the files cannot receive them any faster than this. The bottleneck can be either end.
-
OK, I think I will keep it as it is, "auto".
-
So we need to figure out if the speed issue exists at the sending point, at the receiving point, or along the path.
-
You would not expect to be getting 1Gb/s in any way from even a perfect NAS able to stream from memory. With Ethernet, TCP/IP, and iSCSI overhead alone you'd normally max out around 800Mb/s, and with NFS or SMB a little lower than that. And that's if everything is perfect.
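As a rough sketch of where that overhead comes from (assuming a standard 1500-byte MTU, IPv4, and TCP with timestamps; real numbers lose further to storage-protocol overhead, latency, and the device itself, which is what pulls you down toward 800Mb/s):

```python
# Back-of-the-envelope TCP goodput over gigabit Ethernet.
LINE_RATE_MBPS = 1000

eth_overhead = 8 + 14 + 4 + 12    # preamble + header + FCS + inter-frame gap
ip_tcp_overhead = 20 + 20 + 12    # IPv4 + TCP + timestamp option
mtu = 1500

payload = mtu - ip_tcp_overhead   # usable bytes per frame
wire_bytes = mtu + eth_overhead   # bytes actually on the wire
goodput = LINE_RATE_MBPS * payload / wire_bytes
print(f"theoretical TCP goodput: ~{goodput:.0f} Mb/s")  # ~941 Mb/s
```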
Typically storage is measured in IOPS, not in throughput (bandwidth), because that is what matters nearly all of the time. Throughput does matter, but isn't what generally impacts us. You almost never see storage requirements written in anything but IOPS. This is because of how storage happens... the delays and performance issues almost always come from an inability to "do different things" rather than to stream data on or off. If you plan to use this as a streaming media server, that could be different, but for normal use, throughput isn't a real concern.
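To make the distinction concrete, here is a minimal sketch (hypothetical file path; note the OS page cache will inflate both numbers, which is why real benchmarks like fio use direct I/O):

```python
# Contrast the two storage metrics on the same file:
# sequential throughput (MB/s) vs. random 4K IOPS.
import os
import random
import time

PATH = "testfile.bin"   # hypothetical test file on the storage under test
SIZE = os.path.getsize(PATH)
BLOCK = 4096

# Sequential read: one long streaming pass -> measures throughput.
start = time.monotonic()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
mb_per_s = SIZE / (time.monotonic() - start) / 1e6
print(f"sequential: {mb_per_s:.1f} MB/s")

# Random 4K reads: many small seeks -> measures IOPS.
ops = 5000
start = time.monotonic()
with open(PATH, "rb") as f:
    for _ in range(ops):
        f.seek(random.randrange(0, SIZE - BLOCK))
        f.read(BLOCK)
print(f"random 4K: {ops / (time.monotonic() - start):.0f} IOPS")
```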
-
In order to measure throughput in any real way, you need a single, extremely large file that will transfer for twenty minutes or so, a receiving unit that has "unlimited" ability to accept the file (preferably into memory, not disk), and a network that you know has no saturation (this could be a quiet switch, a switch with nothing but this connection, or a crossover cable). Eliminate the "noise" and the things that cause interruptions so that you can, as clearly as possible, look only at the throughput.
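iperf is the usual tool for exactly this; as an illustration of the idea (receive into memory, count bytes, divide by time; the port and addresses here are hypothetical), a minimal sketch:

```python
# Crude iperf-style test: the receiver discards data in memory while
# counting bytes; the sender streams a fixed amount as fast as it can.
import socket
import sys
import time

PORT = 5001
CHUNK = 1 << 20                        # 1 MiB buffer
TOTAL = 10 * (1 << 30)                 # stream 10 GiB

if sys.argv[1] == "server":            # run this on the receiving machine
    srv = socket.create_server(("", PORT))
    conn, _ = srv.accept()
    received = 0
    start = time.monotonic()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)          # count bytes; never touch disk
    secs = time.monotonic() - start
    print(f"{received * 8 / secs / 1e6:.0f} Mb/s over {secs:.0f}s")
else:                                  # client: pass the server's address
    sock = socket.create_connection((sys.argv[1], PORT))
    buf = b"\x00" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(buf)
        sent += CHUNK
    sock.close()
```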
-
Scott - what tool would you use to create a 120 GB file to keep a 1 Gb link saturated for 20 mins (assuming 800 Mb/s transfer)?
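For context, 800 Mb/s is 100 MB/s, so twenty minutes works out to roughly 120 GB. One way to generate such a file, as a minimal sketch (hypothetical filename; random bytes so that compression or sparse allocation can't skew the test):

```python
# Generate a large, incompressible test file by streaming random bytes.
import os

PATH = "bigfile.bin"           # hypothetical output path
TARGET = 120 * 10**9           # 120 GB
CHUNK = 64 * 1024 * 1024       # write in 64 MiB chunks

with open(PATH, "wb") as f:
    remaining = TARGET
    while remaining > 0:
        n = min(CHUNK, remaining)
        f.write(os.urandom(n))
        remaining -= n
```

On Linux, dd reading from /dev/urandom does the same job.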