Storage Question
-
Space is definitely not a problem. The SSDs I have are fine for space.
So does it seem we are leaning towards keeping the SSDs and throwing warnings to the wind?
As an afterthought, I thought about skipping the PERC controller and going with a straight LSI/Adaptec. But wouldn't that cause the same problem, where the Dell server itself doesn't like the drive and hence flashes amber? Or are the hot-plug cage lights controlled by the adapter itself?
-
Our mailserver is currently sitting at around 110GB. The data server at 130. That's including data and OS (Server 2003). (I'll be buying a second server in a few weeks once I get this all worked out. I didn't want to buy them both in case any issues came up. And did they!! )
So
server 1 = dc/data
server 2 = dc/mail
The three Kingston SSDs are 480GB capacity. So there would be more than enough space.
-
@BRRABill said:
Our mailserver is currently sitting at around 110GB. The data server at 130. That's including data and OS (Server 2003). (I'll be buying a second server in a few weeks once I get this all worked out. I didn't want to buy them both in case any issues came up. And did they!! )
So
server 1 = dc/data
server 2 = dc/mail
The three Kingston SSDs are 480GB capacity. So there would be more than enough space.
No!
Make both servers hypervisor hosts.
Virtualize every server you have, and run everything between the two hosts.
Virtualize Everything!
-
Sorry to keep replying, but everyone is posting so quickly that I don't want anyone to miss anything.
The 480GB Kingston SSDs are under $250 each including cage and 3.5" adapter. So cost isn't really a concern there either.
The KC300 is a pro drive (similar to the Samsung 850) that is tuned slightly for server use. It's not enterprise-grade, but my Kingston rep (who, along with the rest of Kingston, has been GREAT) thinks it will be fine for my usage.
-
@DustinB3403 said:
Virtualize every server you have, and run everything between the two hosts.
Virtualize Everything!
That is what we are going to do.
Each server will run 2 VMs.
machine 1 VM1 = DC
machine 1 VM2 = data
machine 2 VM1 = DC
machine 2 VM2 = mail
I wanted to have 2 servers for DC redundancy.
-
The way you explained it above seemed as if you were running them on bare metal, without a hypervisor.
Sorry.
-
@BRRABill said:
@DustinB3403 said:
Virtualize every server you have, and run everything between the two hosts.
Virtualize Everything!
That is what we are going to do.
Each server will run 2 VMs.
machine 1 VM1 = DC
machine 1 VM2 = data
machine 2 VM1 = DC
machine 2 VM2 = mail
I wanted to have 2 servers for DC redundancy.
Why are you splitting these loads over 2 servers? That can all easily run on a single server.
-
@MattSpeller said:
@scottalanmiller said:
My very first thought here is.... are SSDs or even SAS drives worth it? Before we talk first party or third party drives, let's get some performance ideas under our belts. Twenty users is not very many. AD needs no IOPS at all. File servers tend to not use a lot, on average. Email even less.
SAS is fine here, but probably NL-SAS, not 10K, and very unlikely 15K. SATA will probably do the trick too. I wouldn't go so cheap as 5400 RPM SATA or anything, but standard 7200 RPM SATA is probably just fine.
If we move to SATA drives we get cheaper and we can incorporate the two 500GB drives that already exist into a single OBR10 array for better value and performance. Eight 500GB SATA drives isn't too shabby nor too expensive.
I agree in general, but it really depends on his needs - if space isn't required in bulk then nuts to the rust, go SSD
Of course, but if the main use is as a file server, likely there is some amount of capacity needs. Just guessing.
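For anyone following along, the capacity math in play here is easy to sanity-check. A minimal sketch (standard RAID arithmetic only, nothing vendor-specific), using the drive counts and sizes mentioned in this thread:

```python
# Usable-capacity math for the arrays discussed in this thread.
# Standard RAID arithmetic; real arrays lose a little to formatting overhead.

def usable_gb(level: str, drives: int, size_gb: int) -> int:
    """Usable capacity for a few common RAID levels."""
    if level == "RAID10":
        return (drives // 2) * size_gb   # half the drives are mirror copies
    if level == "RAID5":
        return (drives - 1) * size_gb    # one drive's worth goes to parity
    if level == "RAID1":
        return size_gb                   # simple mirror
    raise ValueError(f"unsupported level: {level}")

# Eight 500GB SATA drives in OBR10 (one big RAID 10):
print(usable_gb("RAID10", 8, 500))   # -> 2000 GB usable

# Three 480GB Kingston SSDs in RAID 5:
print(usable_gb("RAID5", 3, 480))    # -> 960 GB usable
```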
-
@BRRABill said:
As an afterthought, I thought about not getting the PERC controller and a straight LSI/Adaptec. But then wouldn't that cause the same problems, where the DELL server itself doesn't like the drive, hence flashing the amber? Or is the hotplug cage lights controlled by the adapter itself?
No, this would not carry the issue through. The issue you have now is that the drives are hidden behind a proprietary controller that cannot properly talk to third-party drives. Because of this, the OS or hypervisor cannot query the RAID controller for the status of the drives.
Going to an LSI controller, for example, one that can read SMART errors off the drives, means that information is now presented to the OS. The server itself is not proprietary and does not block the OS from talking to peripherals.
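To make that concrete, here is a minimal sketch of what "the OS can see the drives" buys you, assuming a controller that exposes the drives (e.g., an LSI HBA) and smartmontools installed; the device paths are hypothetical examples:

```python
# Minimal sketch: once the OS can see the drives (rather than having them
# hidden behind a proprietary PERC), standard tools like smartmontools work.
# Assumes smartctl is installed; /dev/sd? paths are hypothetical examples.
import subprocess

def smart_health(device: str) -> bool:
    """Return True if smartctl reports the drive's overall health as PASSED."""
    result = subprocess.run(
        ["smartctl", "-H", device],   # -H: report overall health assessment
        capture_output=True, text=True,
    )
    return "PASSED" in result.stdout

for dev in ["/dev/sda", "/dev/sdb", "/dev/sdc"]:
    status = "OK" if smart_health(dev) else "CHECK DRIVE"
    print(f"{dev}: {status}")
```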
-
@BRRABill said:
Our mailserver is currently sitting at around 110GB. The data server at 130. That's including data and OS (Server 2003). (I'll be buying a second server in a few weeks once I get this all worked out. I didn't want to buy them both in case any issues came up. And did they!! )
So
server 1 = dc/data
server 2 = dc/mail
The three Kingston SSDs are 480GB capacity. So there would be more than enough space.
So what I am hearing is....
Dump the RAID 1 array completely. Move to an LSI controller. Install everything to the RAID 5 SSD array. Fast, easy, done.
Why a second server? Is that really warranted?
-
@BRRABill said:
I wanted to have 2 servers for DC redundancy.
Now the obligatory "you never want redundancy, only reliability" link.
-
@scottalanmiller said:
@BRRABill said:
Our mailserver is currently sitting at around 110GB. The data server at 130. That's including data and OS (Server 2003). (I'll be buying a second server in a few weeks once I get this all worked out. I didn't want to buy them both in case any issues came up. And did they!! )
So
server 1 = dc/data
server 2 = dc/mail
The three Kingston SSDs are 480GB capacity. So there would be more than enough space.
So what I am hearing is....
Dump the RAID 1 array completely. Move to an LSI controller. Install everything to the RAID 5 SSD array. Fast, easy, done.
Why a second server? Is that really warranted?
I agree completely. You already have three SSDs, so in RAID 5 you have 960 GB of storage, and you're using about 240 GB; plenty of room for growth.
I'd stick with one box.
-
You need to do a cost analysis to see whether the cost of downtime from the server going down is significant enough to justify the cost of another server. That's a lot of money in hardware AND, potentially, in licenses.
In Windows licensing alone, going down to two VMs on a single host is nearly $700 in savings.
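A back-of-envelope sketch of that licensing math, assuming the Windows Server Standard rule of this era (one Standard license covers up to 2 VMs on a single host) and an assumed license cost of roughly $700; actual prices vary by channel, so plug in your real quote:

```python
# Rough licensing comparison behind the "~$700 in savings" point.
# Assumption: one Windows Server Standard license covers up to 2 VMs per host.
STANDARD_LICENSE = 700  # assumed per-license cost; substitute your real quote

def licenses_needed(vms_on_host: int) -> int:
    """Standard licenses needed for N VMs on one host (2 VMs per license)."""
    return -(-vms_on_host // 2)  # ceiling division

two_hosts = 2 * licenses_needed(2) * STANDARD_LICENSE  # 2 VMs on each of 2 hosts
one_host = licenses_needed(2) * STANDARD_LICENSE       # 2 VMs on a single host
print(f"two hosts: ${two_hosts}")                      # -> $1400
print(f"one host:  ${one_host}")                       # -> $700
print(f"savings:   ${two_hosts - one_host}")           # -> $700
```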
-
@DustinB3403 said:
Virtualize every server you have, and run everything between the two hosts.
Virtualize Everything!
Exchange is best not virtualized.
-
@scottalanmiller said:
Why a second server? Is that really warranted?
For DC redundancy. I really don't want to roll with one DC. I guess I could just install another copy of Server on a desktop for that purpose, though. I already purchased two licenses for it.
I was also thinking that since they will all be virtualized, it would be good to have a second server-grade box to be able to install to if the other server goes down.
-
@Jason said:
Exchange is best not virtualized.
Why? What artifact of Exchange would make it that way? This goes against both industry knowledge and how Microsoft runs their own Exchange servers.
-
@BRRABill said:
For DC redundancy. I really don't want to roll with one DC.
What makes you so dependent on Active Directory? I've had AD go down for two weeks and not one user even mentioned it. That's atypical, but my point is that on its own AD is designed to be able to go offline for long periods of time with little or no impact. What's the specific risk that you are facing?
-
@BRRABill said:
I was also thinking that since they will all be virtualized, it would be good to have a second server-grade box to be able to install to if the other server goes down.
Being virtualized makes them more reliable, not less. Having the ability to fail over is good when it makes financial sense, and virtualization makes that easier, but it also slightly reduces the need for it.
-
@scottalanmiller said:
@Jason said:
Exchange is best not virtualized.
Why? What artifact of Exchange would make it that way? This goes against both industry knowledge and how Microsoft runs their own Exchange servers.
I should say: not virtualized in the sense of running on shared storage with automated vMotion. Exchange-level failovers are much better.
-
And I thought my head was spinning 2 hours ago!