Software RAID10 Slow Read
-
I have a NAS running an Intel Core i3 with 8 cores, 16 GB RAM, and a 16 TB RAID 10 array (mdadm) built from 4 × 8 TB Seagate Barracuda disks. The OS is Fedora Server 34 minimal, and the box is only used as an NFS store.
I noticed some VERY slow speeds when reading from the server over the 10 Gb network. As a test, I copied an mkv file to the server. Doing an rsync from the RAID array to a separate F34 server with only an SSD disk, the transfer ran at 3 MB/s; estimated transfer time was roughly 5 hours.
I moved the same file from the RAID array to an internal SSD, and the rsync of the 27 GB file completed in 57 seconds, a rate of 465 MB/s. I realize the SSD is faster, but what caused the RAID 10 array to suddenly have very poor performance?
I ran SMART tests on the disks and they all passed, and /proc/mdstat shows the array in good health.
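For reference, the checks were along these lines (sda..sdd and md0 are placeholders for my actual device names):

```shell
# SMART health summary for each array member.
for d in sda sdb sdc sdd; do
    sudo smartctl -H /dev/$d
done

# Array health: look for [UUUU] (all members up) and any resync activity.
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```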
Any ideas on what could have changed? I am running out of options; all the testing looks good except for transfer speed.
-
@brandon220 said in Software RAID10 Slow Read:
I noticed some VERY slow speeds when reading from the server over 10Gb network.
Test locally, not over a network. If you don't see the issue locally then it can't be RAID related.
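For example, a rough local sequential-read test, assuming the array is mounted at /mnt/raid (placeholder path), would look like this:

```shell
# Flush the page cache first so reads come from the disks, not RAM (needs root).
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# Sequential read of the same 27 GB file straight off the array.
# /mnt/raid/test.mkv is a placeholder -- point it at the real file.
dd if=/mnt/raid/test.mkv of=/dev/null bs=1M status=progress
```

If that reads at hundreds of MB/s locally, the array is fine and the problem is elsewhere.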
-
@brandon220 said in Software RAID10 Slow Read:
I realize the SSD is faster, but what caused the R10 disk to suddenly have very poor performance?
Nothing; that cannot be the bottleneck. Your own test showed that the array is fast.
-
@brandon220 said in Software RAID10 Slow Read:
Any ideas on what could have changed?
Unless I'm missing everything... the difference is the networking.
-
My first thought is the network here as well.
You're seeing 3 MB/s, which seems awfully low.
I would try transferring that same 27 GB file from several different workstations over the network to that same 4 × 8 TB array (not all at once, of course).
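It could also be worth testing the raw link with iperf3 to rule the network in or out; something like this (the hostname is a placeholder for the NAS address):

```shell
# On the NAS, run a listener:
iperf3 -s

# On a workstation, run a 30-second throughput test against it:
iperf3 -c nas.example.lan -t 30
```

A healthy 10 Gb SFP+ link should report somewhere near line rate; if iperf3 is also slow, the problem is the network, not the array.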
-
@scottalanmiller I did the same test over the network, with the same 27 GB mkv file. From the RAID array it would start out at 400 MB/s and then drop to 120 kB/s.
From the same server, over the same network, the same file sustained 465 MB/s when read from an internal SSD instead of the RAID array.
The network is SFP+ from box to box with no other traffic. Copying the file internally on the server from RAID to SSD ran at approximately 250 MB/s. What would cause the same file to slow to a crawl only over the network? It gets more confusing the longer I look at it.
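Could that initial 400 MB/s burst just be the page cache emptying out before the disks take over? A direct-I/O read would bypass the cache and show the sustained array speed (the file path is a placeholder):

```shell
# iflag=direct bypasses the page cache, so this measures the disks themselves.
# /mnt/raid/test.mkv is a placeholder path -- substitute the real file.
dd if=/mnt/raid/test.mkv of=/dev/null bs=1M iflag=direct status=progress
```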
-
@dafyre During my rsync it dropped to 101 kB/s. Very strange; there was no other network traffic.