Dell PowerEdge C2100 with 24 drive bays
-
Yes, 24x 2.5" and no hardware RAID. Remember that Dell C systems are not for SMBs but for huge enterprise clusters, just like the HP DL1xx series. These are designed to be throw-away nodes, not standalone enterprise servers. What is your use case for looking at something other than the PowerEdge R series?
-
Backup storage for the virtualization project you're aware of.
Just calculating what I might need to put something in a colo if the conversation comes up.
All storage space included (C: drives and shares): looking at 5841 GB.
To do full backups using NAUBackup (weekly, or 4 times a month) we'd need 23364 GB of storage.
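Quick sanity check on that math, for anyone following along (a minimal sketch assuming four retained weekly fulls, with no compression or dedup factored in):

```python
# Rough backup sizing (assumes 4 retained weekly fulls, no compression/dedup)
used_gb = 5841        # total "used" space across C: drives and shares
fulls_retained = 4    # weekly fulls kept at any one time
print(used_gb * fulls_retained)  # 23364 GB of backup storage needed
```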
-
The 5841 GB is "used" space.
Space that's on each drive regardless of whether it's 100% used (none are). Just trying to do the math on this.
-
@DustinB3403 said:
Just calculating what I might need to put something in a colo if the conversation comes up.
Exactly where "throw away" servers are a horrible fit. You don't want equipment designed to be replaced, rather than repaired, in a colo where the cost to get gear in and out is high.
-
You already know that the 2 TB drives are going to cost you $400 each at that size. Are you sure this is the way you want to go?
If you move to 3.5" drives you can move up to 6 TB drives. Assuming you can do consumer drives, you're looking at approx. $200 a drive for 3 times the storage.
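Rough cost-per-TB comparison using those ballpark prices (estimates, not quotes):

```python
# $/TB at the ballpark prices above (estimates, not quotes)
sff_enterprise = 400 / 2   # $400 per 2 TB 2.5" enterprise drive -> $200/TB
lff_consumer = 200 / 6     # ~$200 per 6 TB 3.5" consumer drive  -> ~$33/TB
print(f"2.5\" enterprise: ${sff_enterprise:.0f}/TB, 3.5\" consumer: ${lff_consumer:.0f}/TB")
```

Roughly a 6x difference per TB, which is why LFF SATA wins for backup targets.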
-
@DustinB3403 said:
Backup storage for the virtualization project you're aware of.
Why use a "disposable" server with high cost enterprise drives instead of an enterprise server with consumer SATA drives? LFF SATA is so much cheaper per GB, perfect for backup systems.
-
Just spitballing ideas, and it was the first device I came across. 3.5" SATA would work as well.
Should I be more concerned about UREs (etc.) on consumer SATA drives in this sort of setup?
-
This would only be for off-host backup, and only written to weekly if my plan is decided on.
-
Not including the incrementals, which are written every hour, stored for 72 hours, and then dumped.
-
@DustinB3403 said:
Should I be more concerned about UREs (etc.) on consumer SATA drives in this sort of setup?
Depends on the RAID level that you decide to use.
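Rough numbers, assuming the typical consumer SATA rating of one URE per 10^14 bits read (a back-of-napkin sketch, not a guarantee of how any given drive behaves):

```python
# Expected UREs during a rebuild, assuming the common consumer SATA
# spec of 1 URE per 1e14 bits read
URE_RATE_BITS = 1e14   # bits read per expected URE
BITS_PER_TB = 8e12     # 8 * 10^12 bits per decimal TB

def expected_ures(tb_read):
    """Expected URE count while reading tb_read terabytes."""
    return tb_read * BITS_PER_TB / URE_RATE_BITS

print(expected_ures(6))       # RAID 10 rebuild reads one 6 TB mirror: ~0.48
print(expected_ures(11 * 6))  # RAID 5 rebuild reads 11 surviving 6 TB drives: ~5.28
```

RAID 10 only reads the mirror partner on a rebuild, and RAID 6's second parity stripe can absorb a URE, which is why mirrored or dual-parity arrays are the usual answer on big consumer SATA.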
-
Spinning rust, RAID 10 of course.
-
@DustinB3403 said:
Just spitballing ideas, and it was the first device I came across. 3.5" SATA would work as well.
But it is not a viable device, so any information about it is misleading. Only use viable devices, even when spitballing.
-
@scottalanmiller said:
@DustinB3403 said:
Just spitballing ideas, and it was the first device I came across. 3.5" SATA would work as well.
But it is not a viable device, so any information about it is misleading. Only use viable devices, even when spitballing.
What makes it non-viable? I'm assuming you can add a RAID controller?
-
@Dashrender said:
What makes it non-viable? I'm assuming you can add a RAID controller?
Everything about a C series is designed to be disposable. Everything. Non-redundant parts, cheaper parts. This is literally a disposable node design, like a Backblaze Pod. This is designed exclusively for situations where you have many redundant nodes and you don't care if one or two just die on you.
-
Cheap for a reason. The C stands for Cluster.
-
@scottalanmiller said:
Cheap for a reason. The C stands for Cluster.
As in Cluster F***, I'm guessing then.
-
Ha ha, no, not really, but that is a great way to think about it.
-
So really my only choice would be something like an R720XD.
Loaded with 12x 6 TB SATA drives in RAID 10.
-
@DustinB3403 said:
So really my only choice would be something like an R720XD.
Loaded with 12x 6 TB SATA drives in RAID 10.
Would you need RAID 10 for this? Maybe RAID 6 would work for this use case?
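For comparison, the usable capacity works out like this (raw decimal TB, before filesystem overhead):

```python
# Usable capacity of 12x 6 TB drives under each RAID level (raw, decimal TB)
drives, size_tb = 12, 6
raid10_tb = drives * size_tb / 2    # half the spindles are mirrors -> 36 TB
raid6_tb = (drives - 2) * size_tb   # two drives' worth of parity   -> 60 TB
print(raid10_tb, raid6_tb)          # 36.0 60
```

RAID 6 would leave a lot more headroom over the ~23 TB backup target than RAID 10 does.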