Configuration for Open Source Operating Systems with the SAM-SD Approach
-
@GotiTServicesInc said:
....although the rebuild time should be faster as long as you don't have a failure in both JBOD arrays on a single server?
There is no JBOD. You wouldn't have that in any situation.
-
What is it that folks like HP, NetApp, EMC, et al. do? Do they do it at the block level or via some other type of method like DRBD?
-
@GotiTServicesInc said:
So really the only correct solution is a RAID 10...? I still feel like you'd be stuck with a whole lot of rebuild time if a drive failed in one of the arrays.
A drive failure on RAID 10 is always the same no matter how big the array is. A drive resilver on RAID 10 is always, without exception, just a RAID 1 pair doing a block-by-block copy from one drive to its partner. That's it. So if you were using 4TB drives, as in our example, the rebuild time is the time that it takes to copy one drive to another directly. That's all. It's the smallest rebuild time possible for any RAID system. You really can't make it faster than that.
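As a rough back-of-the-envelope illustration (the 150 MB/s sustained copy rate is an assumed figure, not from the thread, and real speeds vary by drive and controller):

```python
# Toy model of a RAID 10 resilver: it is only a RAID 1 pair copying one
# drive to its replacement, so the width of the stripe never enters the math.
DRIVE_SIZE_TB = 4        # 4 TB drives, as in the example above
COPY_SPEED_MBPS = 150    # assumed sustained copy rate in MB/s

def raid10_rebuild_hours(drive_size_tb, total_drives):
    # total_drives is deliberately ignored: only the failed pair is involved
    return drive_size_tb * 1_000_000 / COPY_SPEED_MBPS / 3600

for drives in (4, 8, 20):
    print(f"{drives}-drive array: ~{raid10_rebuild_hours(DRIVE_SIZE_TB, drives):.1f} hours")
# prints ~7.4 hours every time, no matter how wide the array is
```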
-
@dafyre said:
What is it that folks like HP, NetApp, EMC, et al. do? Do they do it at the block level or via some other type of method like DRBD?
Always something like DRBD. Normally it is proprietary, but not always. NetApp does not have stable, working replication, last I heard. Some vendors actually just use DRBD (Synology, for example). Some make their own replication products. But the general theory is the same: write to both at the same time, read from the local copy.
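Purely as a conceptual sketch of that "write to both, read from the local" idea (this is illustrative Python, not DRBD's actual code or wire protocol; the class and method names are made up):

```python
class PeerLink:
    """Stand-in for the replication link to the partner node."""
    def __init__(self, remote_disk):
        self.remote = remote_disk

    def send(self, block_no, data):
        self.remote[block_no] = data   # synchronous mirror write


class ReplicatedBlockDevice:
    """Toy model of 'write to both at the same time, read from the local'."""
    def __init__(self, local_disk, peer_link):
        self.local = local_disk
        self.peer = peer_link

    def write(self, block_no, data):
        self.local[block_no] = data     # local write
        self.peer.send(block_no, data)  # mirrored write; ack only after both

    def read(self, block_no):
        return self.local[block_no]     # reads are always served locally


node_a, node_b = {}, {}
dev = ReplicatedBlockDevice(node_a, PeerLink(node_b))
dev.write(0, b"hello")
assert node_a[0] == node_b[0] == b"hello"
```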
-
@Dashrender said:
But in a 20-drive array, you have 10 RAID 1 pairs with RAID 0 striped over them. So a single drive failure only requires the mirroring of a single drive.
Correct. Which doesn't mean that recovery takes seconds or anything like that. But we measure the recovery time in hours with minimal impact rather than days or weeks with massive impact.
-
If you move a RAID 10 array from 4TB drives to 2TB drives you essentially cut drive recovery time in half, too. So you can balance things depending on your needs.
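A quick sketch of that scaling, using the same assumed 150 MB/s sustained copy rate as above:

```python
# Mirror resilver time scales linearly with drive size: the rebuild is
# just a straight copy of the failed drive's partner.
COPY_SPEED_MBPS = 150    # assumed, as before

for size_tb in (2, 4):
    hours = size_tb * 1_000_000 / COPY_SPEED_MBPS / 3600
    print(f"{size_tb} TB drive: ~{hours:.1f} hour rebuild")
# 2 TB: ~3.7 hours, 4 TB: ~7.4 hours -- half the drive size, half the time
```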
-
Handy thing to think about:
Parity RAID: Drive resilver time is determined by a combination of drive size and array size.
Mirrored RAID: Drive resilver time is determined only by drive size.
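As a very rough, illustrative-only comparison of the two rules (the throughput numbers below are assumptions, not measurements, and real parity rebuilds are also throttled by controller load and ongoing production I/O):

```python
DRIVE_TB = 4
MIRROR_COPY_MBPS = 150       # assumed straight disk-to-disk copy rate
PARITY_REBUILD_MBPS = 400    # assumed aggregate rate for reading the
                             # surviving members and recomputing the data

def mirror_resilver_hours(drive_tb):
    # only one drive has to be copied, regardless of array width
    return drive_tb * 1_000_000 / MIRROR_COPY_MBPS / 3600

def parity_resilver_hours(drive_tb, total_drives):
    # every surviving member must be read in full to reconstruct the data
    return drive_tb * (total_drives - 1) * 1_000_000 / PARITY_REBUILD_MBPS / 3600

print(f"Mirrored RAID, any width: ~{mirror_resilver_hours(DRIVE_TB):.0f} hours")
for n in (6, 12, 24):
    print(f"Parity RAID, {n} drives:  ~{parity_resilver_hours(DRIVE_TB, n):.0f} hours")
# the parity figure keeps growing with the array; the mirrored one does not
```
-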
Thank you for all the information so far, and I hope I'm not sucking you dry for information (I've read all the links you've posted already).
So can't DRBD use FC for the replication? Or does it have to use the LAN? And if we're forced to use the LAN, we should be able to trunk some ports together to get higher throughput, no?
-
@GotiTServicesInc said:
So can't DRBD use FC for the replication? Or does it have to use the LAN?
Well, you can build a LAN on FC if you want.
But DRBD can't talk SCSI, so you can't use FC the way you are thinking. DRBD isn't something that leverages other protocols; it is natively its own protocol that talks to DRBD on the other side. You don't add more protocols to it; it just talks over the network to itself.
DRBD is NOT a SAN, it's just replication.
-
@GotiTServicesInc said:
And if we're forced to use the LAN, we should be able to trunk some ports together to get higher throughput, no?
Yes, you would likely use 10GigE connections too, since no switch is required.
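For a sense of the link speeds involved, a rough calculation at nominal line rate (real throughput will be lower after protocol overhead, and note that a trunk of 1GigE ports typically balances per flow, so a single replication stream may not see the full aggregate):

```python
# Time to push one 4 TB drive's worth of data across the replication
# link at nominal line rate, ignoring protocol overhead.
DATA_TB = 4
links_gbps = {"1 GigE": 1, "4x 1 GigE trunk": 4, "10 GigE": 10}

for name, gbps in links_gbps.items():
    seconds = DATA_TB * 8_000_000 / (gbps * 1000)   # terabytes -> megabits, divided by Mb/s
    print(f"{name}: ~{seconds / 3600:.1f} hours for {DATA_TB} TB")
# 1 GigE: ~8.9 h, trunked 4x 1 GigE: ~2.2 h, 10 GigE: ~0.9 h
```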