Storage Question
-
@BRRABill said:
I thought of one more question:
A lot of the talk of the enterprise SSD and battery on PERC cards revolves around power loss.
But why is that an issue if the server (probably) has a UPS and shutdown?
What happens if the power supply in the server dies? You still want something to backup the cache on the controller card to cover this instance. The other option is a controller with flash memory instead of cache.
-
Yes, everything in the server is totally updated firmware-wise.
Pretty sure it's just an issue of it being a non-DELL drive. I read a lot of drives from other manufacturers were exhibiting the same symptoms.
I guess some SSDs work, and some don't. Kingston has been good with working with me, but this might not be fixable on their end.
I guess that would be ANOTHER question ... anyone using 3rd party SSDs that work with DELL servers?
-
@Dashrender said:
What happens if the power supply in the server dies? You still want something to backup the cache on the controller card to cover this instance.
Ah, good point. Possibly a reason they go with the H710 then! And enterprise-class SSDs with power protection.
-
@BRRABill said:
I thought RAID 5 was frowned upon these days?
On spinning rust (aka Winchester drives). On SSD it is the norm.
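For context, here's a rough sketch of why RAID 5 rebuilds worry people on large spinning disks but are considered routine on SSD: the odds of hitting at least one unrecoverable read error (URE) while reading the whole array back during a rebuild. The URE rates below are typical datasheet-class figures, not measurements of any specific drive.

```python
import math

def rebuild_failure_prob(array_bytes, ure_rate_per_bit):
    """Probability of at least one URE while reading the full array.

    Uses the Poisson approximation 1 - exp(-bits * p), which is
    numerically safe for very small per-bit error rates.
    """
    bits = array_bytes * 8
    return 1 - math.exp(-bits * ure_rate_per_bit)

TB = 10**12
# Consumer spinning disk: ~1 URE per 10^14 bits read.
# Typical SSD: on the order of 1 per 10^17 bits.
print(rebuild_failure_prob(8 * TB, 1e-14))  # ~0.47 -- nearly a coin flip
print(rebuild_failure_prob(8 * TB, 1e-17))  # ~0.0006 -- negligible
```

With an 8TB spinning array the rebuild is close to a coin flip; on SSD the same math comes out to a fraction of a percent, which is why single-parity is still acceptable there.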
-
@BRRABill said:
@Dashrender said:
What happens if the power supply in the server dies? You still want something to backup the cache on the controller card to cover this instance.
Ah, good point. Possibly a reason they go with the H710 then! And enterprise-class SSDs with power protection.
We've had a really deep dive recently on why enterprise-class drives are a waste of money (search through the IT Discussions). Perhaps you could use a non-Dell RAID controller to solve this problem?
-
@BRRABill said:
@Dashrender said:
What happens if the power supply in the server dies? You still want something to backup the cache on the controller card to cover this instance.
Ah, good point. Possibly a reason they go with the H710 then! And enterprise-class SSDs with power protection.
Yes, in general with servers a RAID controller is one of the first places that I recommend making a bigger investment. The larger cache and faster CPUs of better cards, plus other features, really make a difference. Obviously battery or flash backing of your data is a big deal.
-
@Dashrender said:
We've had a really deep dive recently on why enterprise-class drives are a waste of money (search through the IT Discussions). Perhaps you could use a non-Dell RAID controller to solve this problem?
LSI or Adaptec are the good choices. Once you go down that road, be sure to reconsider being on Dell hardware. At a minimum hit up @BradfromxByte to talk about refurbed gear.
-
This is the topic regarding Consumer SSDs vs Enterprise SSDs
-
@BRRABill said:
- Considering what I already have and our requirements, would it just make sense to buy a few more 7.2K drives and make a RAID 10 array out of them? Is there a huge performance difference between those two arrays? (7.2K vs. 10K both in a RAID 10.)
Do you know your IOPS requirement? You might be able to get away with four 7.2K drives in a RAID 10. Splitting into two RAID 1s as you're currently planning is actually the worst thing you can do. It ends up wasting the majority of the performance offered by the drives you install the OS on.
For this server you should install Hyper-V or Xen onto an SD card or USB stick, then run your VMs from the storage. If you have enough storage space with a RAID 1 SSD, that's probably good enough. If not, moving to a four-drive RAID 5 on SSD would probably be the next place to look.
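To put rough numbers on why splitting into two RAID 1s wastes performance: assuming roughly 75 IOPS per 7.2K SATA drive (a common rule-of-thumb figure, not a measurement of any particular model), a single RAID 10 pools all the spindles for every workload, while separate RAID 1 pairs cap each workload at its own two drives.

```python
# Assumed rule-of-thumb figure for a 7.2K SATA drive, for illustration only.
PER_DRIVE_IOPS = 75

def raid10_iops(drives, per_drive=PER_DRIVE_IOPS):
    """RAID 10: reads scale with all drives; writes pay a 2x mirror penalty."""
    return {"read": drives * per_drive, "write": drives * per_drive // 2}

def raid1_iops(per_drive=PER_DRIVE_IOPS):
    """One isolated RAID 1 pair: reads from both drives, writes mirrored."""
    return {"read": 2 * per_drive, "write": per_drive}

print(raid10_iops(4))  # one 4-drive RAID 10 shared by everything
print(raid1_iops())    # each workload stuck on its own pair
```

The four-drive RAID 10 gives every workload access to 300 read / 150 write IOPS on these assumptions, while each split RAID 1 pair tops out at 150 / 75 no matter how idle the other pair is.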
-
My very first thought here is.... are SSDs or even SAS drives worth it? Before we talk first party or third party drives, let's get some performance ideas under our belts. Twenty users is not very many. AD needs no IOPS at all. File servers tend to not use a lot, on average. Email even less.
SAS is fine here, but probably NL-SAS not 10K and very unlikely 15K. SATA will probably do the trick too. I wouldn't go cheap on 5400 RPM SATA or anything. But standard 7200 RPM SATA is probably just fine.
If we move to SATA drives we get cheaper and we can incorporate the two 500GB drives that already exist into a single OBR10 array for better value and performance. Eight 500GB SATA drives isn't too shabby nor too expensive.
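To put numbers behind "twenty users is not very many," here's a back-of-envelope estimate. All of the per-service figures are assumptions for illustration, not measurements of this environment.

```python
# Rough, assumed steady-state IOPS per workload for a ~20 user shop.
workloads = {
    "AD (20 users)": 5,      # AD needs almost nothing
    "file server": 30,        # file serving is light on average
    "email (20 users)": 20,   # email even less per user
}

total = sum(workloads.values())
print(total)  # 55 total IOPS
```

Even padding that total heavily for bursts, it sits comfortably inside what a small 7.2K SATA RAID 10 delivers, which is the argument for not paying for 10K/15K SAS or SSD here on performance grounds alone.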
-
@scottalanmiller said:
@Dashrender said:
We've had a really deep dive recently on why enterprise-class drives are a waste of money (search through the IT Discussions). Perhaps you could use a non-Dell RAID controller to solve this problem?
LSI or Adaptec are the good choices. Once you go down that road, be sure to reconsider being on Dell hardware. At a minimum hit up @BradfromxByte to talk about refurbed gear.
I only mentioned using a non-Dell card to get rid of the errors, versus moving to Dell's expensive supported drives.
-
@scottalanmiller said:
If we move to SATA drives we get cheaper and we can incorporate the two 500GB drives that already exist into a single OBR10 array for better value and performance. Eight 500GB SATA drives isn't too shabby nor too expensive.
This is what I did for my first VM host, and it's pretty darned decent.
-
@scottalanmiller said:
My very first thought here is.... are SSDs or even SAS drives worth it? Before we talk first party or third party drives, let's get some performance ideas under our belts. Twenty users is not very many. AD needs no IOPS at all. File servers tend to not use a lot, on average. Email even less.
SAS is fine here, but probably NL-SAS not 10K and very unlikely 15K. SATA will probably do the trick too. I wouldn't go cheap on 5400 RPM SATA or anything. But standard 7200 RPM SATA is probably just fine.
If we move to SATA drives we get cheaper and we can incorporate the two 500GB drives that already exist into a single OBR10 array for better value and performance. Eight 500GB SATA drives isn't too shabby nor too expensive.
I agree in general, but it really depends on his needs - if space isn't required in bulk then nuts to the rust, go SSD
-
Space is definitely not a problem. The SSDs I have are fine for space.
So does it seem we are leaning towards keeping the SSDs and throwing warnings to the wind?
As an afterthought, I thought about skipping the PERC controller and going with a straight LSI/Adaptec. But wouldn't that cause the same problem, where the DELL server itself doesn't like the drive and flashes the amber light? Or are the hotplug cage lights controlled by the adapter itself?
-
Our mailserver is currently sitting at around 110GB. The data server at 130. That's including data and OS (Server 2003). (I'll be buying a second server in a few weeks once I get this all worked out. I didn't want to buy them both in case any issues came up. And did they!! )
So
server 1 = dc/data
server 2 = dc/mail
The three Kingston SSDs are 480GB capacity. So there would be more than enough space.
-
@BRRABill said:
Our mailserver is currently sitting at around 110GB. The data server at 130. That's including data and OS (Server 2003). (I'll be buying a second server in a few weeks once I get this all worked out. I didn't want to buy them both in case any issues came up. And did they!! )
So
server 1 = dc/data
server 2 = dc/mail
The three Kingston SSDs are 480GB capacity. So there would be more than enough space.
No!
Two servers, both hypervisor hosts.
Virtualize every server you have, and run everything between the two hosts.
Virtualize Everything!
-
Sorry to keep replying, but everyone is posting so quick I don't want anyone to miss anything.
The 480GB Kingston SSDs are under $250 each including cage and 3.5" adapter. So cost isn't really a concern there either.
The KC300 is a pro drive (similar to the Samsung 850) that is tuned slightly for server use. It's not enterprise-grade, but my Kingston rep (who, along with the rest of Kingston, has been GREAT) thinks it will be fine for my usage.
-
@DustinB3403 said:
Virtualize every server you have, and run everything between the two hosts.
Virtualize Everything!
That is what we are going to do.
Each server will run 2 VMs.
machine 1 VM1 = DC
machine 1 VM 2 = data
machine 2 vm1 = DC
machine 2 vm2 = mail
I wanted to have 2 servers for DC redundancy.
-
The way you explained it above seemed as if you were running them on bare metal, without a hypervisor.
Sorry.
-
@BRRABill said:
@DustinB3403 said:
Virtualize every server you have, and run everything between the two hosts.
Virtualize Everything!
That is what we are going to do.
Each server will run 2 VMs.
machine 1 VM1 = DC
machine 1 VM 2 = data
machine 2 vm1 = DC
machine 2 vm2 = mail
I wanted to have 2 servers for DC redundancy.
Why are you splitting these loads over 2 servers? That can all easily run on a single server.