SAS Drives RPMs
-
@Pete-S said in SAS Drives RPMs:
Maybe they meant slowdown from fragmentation?
As they are used and the data becomes more and more fragmented, they also become slower as the head has to move all over the place.
He did mention that, but as a separate issue.
-
But if we are talking about servers, they are likely running some kind of hypervisor. Does the hypervisor run defrag?
Yeah, it's our hypervisor cluster. I can't find any mention of defragging in the documentation, but their documentation is a little scant on the real nitty-gritty side.
-
@Markferron said in SAS Drives RPMs:
But if we are talking about servers, they are likely running some kind of hypervisor. Does the hypervisor run defrag?
Yeah, it's our hypervisor cluster. I can't find any mention of defragging in the documentation, but their documentation is a little scant on the real nitty-gritty side.
The Scale Cluster?
-
@dafyre Yup
-
@Pete-S said in SAS Drives RPMs:
@dafyre said in SAS Drives RPMs:
@Pete-S said in SAS Drives RPMs:
Maybe they meant slowdown from fragmentation?
As they are used and the data becomes more and more fragmented, they also become slower as the head has to move all over the place.
That's why Windows OSes tend to run defrag in the background now.
But if we are talking about servers, they are likely running some kind of hypervisor. Does the hypervisor run defrag?
I've wondered the same thing. And if the HV is running defrag and Windows Server is also running defrag, is there duplicate work being done here?
-
Not so sure about wear and tear since the components are built to last short of a nuclear EMP.
Fragmentation was always the biggest problem we had, for as long as I can remember. Even large arrays would experience degradation over time due to seek times.
We would set up our single-host partition for the virtual machines, then configure every virtual machine with a fixed VHDX for its operating system "partition" plus a fixed VHDX for its data, so long as the size was around 250 GB or less. For the big ones we'd use a dynamically expanding VHDX file. This kept everything nice and contiguous.
For clusters, in smaller settings we'd set up dedicated LUNs for each of the above components per virtual machine. In larger settings we'd set up one LUN for the operating systems, one for the smaller data VHDX files, and a few for the big ones. The smaller VHDX files would still be fixed while the large ones would be dynamic, but having their own LUN to grow in limited the fragmentation problem.
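A rough sketch of that sizing rule in Python, purely for illustration (the 250 GB threshold and the fixed/dynamic split come from the description above; the names, roles, and sizes are made up):

```python
# Illustrative sketch of the VHDX provisioning rule described above.
# The 250 GB threshold and the fixed/dynamic split come from the post;
# everything else (names, roles, sizes) is invented for the example.

FIXED_THRESHOLD_GB = 250  # at or below this, a fixed (pre-allocated) VHDX stays contiguous

def vhdx_type(role: str, size_gb: int) -> str:
    """Return 'fixed' or 'dynamic' for a given virtual disk."""
    if role == "os":
        return "fixed"        # OS disks are always fixed
    if size_gb <= FIXED_THRESHOLD_GB:
        return "fixed"        # small data disks: fixed, stays contiguous
    return "dynamic"          # big data disks: dynamic, ideally on their own LUN

# Example layout for one VM
for role, size_gb in [("os", 80), ("data", 200), ("archive", 2000)]:
    print(f"{role:8s} {size_gb:5d} GB -> {vhdx_type(role, size_gb)} VHDX")
```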
All-Flash pretty much renders the whole conversation moot. We're getting to the point where the only place we deploy rust is in 60-bay and 102-bay shared SAS JBODs for archival or backup storage on clustered ReFS repositories.
EDIT: FYI: Fixed VHDX creation on a ReFS Cluster Shared Volume is virtually instantaneous no matter the file size.
-
@Markferron said in SAS Drives RPMs:
But if we are talking about servers, they are likely running some kind of hypervisor. Does the hypervisor run defrag?
Yeah, it's our hypervisor cluster. I can't find any mention of defragging in the documentation, but their documentation is a little scant on the real nitty-gritty side.
You don't really worry about fragmentation on RAIN. In modern systems, fragmentation doesn't behave the same way and it's not something you really contemplate. When defragmentation is needed, your storage layer handles it; when it's not needed, it doesn't bother. Many systems have no reason to worry about fragmentation at all because of their multi-access patterns.
-
SAS drives definitely do not slow down. Their rotational speed isn't flexible; it is highly precise. The servo keeps the platters spinning at exactly the same speed ... until the drive doesn't work any more.
-
@scottalanmiller Thanks, figured as much.
-
@Markferron said in SAS Drives RPMs:
@scottalanmiller Thanks, figured as much.
Things like fragmentation are real, and will slow the "storage subsystem" in most cases. But that's not the same as the drive slowing. The drive itself works at a predictable speed that only varies when a block cannot be read and the drive has to retry, and even that retry penalty is predictable. So the mechanical speed of the drive never varies over time, but the throughput of data pulled from the drive can vary based on the rate of magnetic failure. Once that has any real effect, though, the drive is toast anyway.
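To put rough numbers on that distinction, here's a back-of-envelope sketch in Python. Every figure in it (10K RPM, 4 ms average seek, 150 MB/s sequential rate, 64 KB reads) is an assumed "typical" value for illustration, not a measurement from any particular drive:

```python
# Back-of-envelope illustration of why fragmentation slows the storage
# subsystem even though the drive's mechanics never change.
# All figures are assumed "typical" 10K RPM SAS numbers, for illustration only.

avg_seek_ms = 4.0                              # assumed average seek time
rotational_latency_ms = 60_000 / 10_000 / 2    # half a revolution at 10,000 RPM = 3 ms
sequential_mb_s = 150.0                        # assumed sustained sequential rate
io_size_mb = 64 / 1024                         # 64 KB reads

transfer_ms = io_size_mb / sequential_mb_s * 1000   # time to move the data itself

# Perfectly contiguous data: the head barely moves between reads.
contiguous_ms = transfer_ms

# Badly fragmented data: every read pays a seek plus rotational latency first.
fragmented_ms = avg_seek_ms + rotational_latency_ms + transfer_ms

print(f"contiguous 64 KB read: {contiguous_ms:.2f} ms (~{io_size_mb / contiguous_ms * 1000:.0f} MB/s)")
print(f"fragmented 64 KB read: {fragmented_ms:.2f} ms (~{io_size_mb / fragmented_ms * 1000:.0f} MB/s)")
```

The platters spin at exactly the same RPM in both cases; only the head movement per read changes, which is why the "storage subsystem" slows down while the drive itself doesn't.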