When is an SSD a MUST HAVE for a server? Thoughts? Discussion :D
-
Technically the answer is NEVER. It's never a must. If it were....
-
@LAH3385 said:
@scottalanmiller said:
Here is a quick guide, however:
- File Servers: Currently almost always Winchesters because capacity is what matters.
- App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
- Database Servers: Almost always SSDs because IOPS matter and little else.
- Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.
I forgot to mention. The server is actually a Hyper-V hypervisor with a VM acting as a file server. Not sure if that makes any difference. I'm guessing it falls under VDI.
How would that fall under VDI? You said it was a file server, it would be a file server.
-
@scottalanmiller said:
@LAH3385 said:
@scottalanmiller said:
Here is a quick guide, however:
- File Servers: Currently almost always Winchesters because capacity is what matters.
- App Servers: Winchesters normally because everything gets loaded into memory and disk speed doesn't matter.
- Database Servers: Almost always SSDs because IOPS matter and little else.
- Terminal Servers and VDI: Almost always SSD because speed matters and capacity does not and dedupe is very effective.
I forgot to mention. The server is actually a Hyper-V hypervisor with a VM acting as a file server. Not sure if that makes any difference. I'm guessing it falls under VDI.
How would that fall under VDI? You said it was a file server, it would be a file server.
Yeah. My bad. Just read more about VDI and it doesn't apply to us.
-
- Cost of SSD
- Current IOPS held back by spinning rust
- Future IOPS requirements
- Supporting hardware (RAID controller upgrade? 3.5" to 2.5" adapters?)
Add all that up, so to speak. Then subtract the cost of a whizzing rust array. If cost <= benefit, purchase.
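A minimal sketch of that tally in Python; every figure below is a made-up placeholder, not a real quote, so swap in your own numbers:
```python
# Every figure below is a placeholder -- plug in real quotes and your own
# valuation of the extra IOPS.
ssd_drives      = 4 * 900    # e.g. four enterprise SSDs
raid_controller = 400        # upgrade, if the current card handles SSDs poorly
adapters_misc   = 100        # 3.5" to 2.5" carriers, cables, etc.
ssd_total       = ssd_drives + raid_controller + adapters_misc

spinning_array  = 4 * 350    # what the equivalent spinning rust array would cost
extra_cost      = ssd_total - spinning_array

benefit         = 2500       # what the current and future IOPS gap is worth to you
print("buy SSD" if extra_cost <= benefit else "stick with spinning rust")
```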
-
Typically a single SSD will provide more IOPS than an entire 8-drive array of spinning rust will. At that point it's about bus bandwidth and price.
-
@Dashrender said:
Typically a single SSD will provide more IOPS than an entire 8-drive array of spinning rust will. At that point it's about bus bandwidth and price.
And by typical, he means "any we've ever heard of."
-
The fastest 8 drive RAID 0 array on SAS 15K is only around 2,000 IOPS. Slowest SSD is normally around 25,000 IOPS.
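A rough back-of-the-envelope check on those figures. The per-spindle number is an assumption (15K SAS is commonly quoted at roughly 175-250 random IOPS per drive), not a measurement:
```python
# Back-of-the-envelope check on the 8-drive figure above. The per-spindle
# number is an assumption, not a measurement.
per_drive_iops = 250          # optimistic estimate for one 15K SAS spindle
drives = 8

# RAID 0 has no parity penalty, so random IOPS scale roughly linearly.
raid0_iops = drives * per_drive_iops
print(f"~{raid0_iops} IOPS for an 8-drive 15K RAID 0")        # ~2000

ssd_iops = 25_000             # "slowest SSD" figure from the post above
print(f"a single SSD is ~{ssd_iops // raid0_iops}x that")     # ~12x
```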
-
@scottalanmiller said:
The fastest 8 drive RAID 0 array on SAS 15K is only around 2,000 IOPS. Slowest SSD is normally around 25,000 IOPS.
My IOPS on the EDGE SSDs from the other day were
Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]
-
@BRRABill said:
@scottalanmiller said:
The fastest 8 drive RAID 0 array on SAS 15K is only around 2,000 IOPS. Slowest SSD is normally around 25,000 IOPS.
My IOPS on the EDGE SSDs from the other day were
Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]
So, stupidly faster than what you were used to?
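For anyone who wants to reproduce the conversion: CrystalDiskMark reports throughput in decimal MB/s, so at a 4 KiB transfer size the IOPS figure is just throughput divided by 4096 bytes. A quick sketch:
```python
# CrystalDiskMark reports MB/s using decimal megabytes, so at a 4 KiB
# transfer size: IOPS = (MB/s * 1_000_000) / 4096.
def mb_s_to_iops(mb_per_s: float, block_bytes: int = 4096) -> float:
    return mb_per_s * 1_000_000 / block_bytes

print(round(mb_s_to_iops(387.262), 1))   # 94546.4 -> matches the read line
print(round(mb_s_to_iops(95.829), 1))    # 23395.8 -> matches the write line
```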
-
@DustinB3403 said:
So, stupidly faster than what you were used to?
Oh yeah.
My numbers from the regular drives in there were all over the place, but probably pretty normal.
I posted them in this thread if anyone is interested:
http://www.mangolassi.it/topic/7458/swapping-drive-to-another-raid-controller/2
I posted different drives and also different PERC cards.
The results don't make 100% sense to me.
I've never tested the 10 year old servers I am currently using. That would be interesting.
-
@BRRABill said:
@scottalanmiller said:
The fastest 8 drive RAID 0 array on SAS 15K is only around 2,000 IOPS. Slowest SSD is normally around 25,000 IOPS.
My IOPS on the EDGE SSDs from the other day were
Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]
Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?
I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.
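One way to put numbers on that interaction is to sweep I/O block sizes against the array and watch how IOPS and throughput move relative to the stripe size. A sketch assuming fio is installed on Linux (libaio engine); the target path, test size and runtime are placeholders:
```python
"""Sketch: sweep I/O block sizes with fio against a file on the array to see
how transfer size interacts with the stripe size. Assumes fio on Linux;
the target path, test size and runtime are placeholders."""
import json
import subprocess

TARGET = "/data/fio-testfile"   # hypothetical path on the array under test

for bs in ("4k", "16k", "64k", "256k", "1m"):
    out = subprocess.run(
        ["fio", "--name=bs-sweep", f"--filename={TARGET}", "--size=4G",
         "--rw=randread", f"--bs={bs}", "--iodepth=32", "--direct=1",
         "--ioengine=libaio", "--runtime=30", "--time_based",
         "--output-format=json"],
        capture_output=True, text=True, check=True)
    read = json.loads(out.stdout)["jobs"][0]["read"]
    # fio reports bw in KiB/s; iops is already per second
    print(f"bs={bs:>4}  {read['iops']:8.0f} IOPS  {read['bw'] / 1024:8.1f} MiB/s")
```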
-
@MattSpeller said:
Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?
I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.
No.
I posted those numbers with the hope that someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.
Later today I will repost under a separate topic, I think.
-
@BRRABill said:
@MattSpeller said:
Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?
I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.
No.
I posted those numbers with the hope that someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.
Later today I will repost under a separate topic, I think.
Please do, I'll share some results with a rust array for comparison if that's helpful
-
There is also the option of using SSD for caching rather than for the storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
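A sketch of that ~10% sizing rule plus a back-of-the-envelope cost comparison; the prices are purely illustrative placeholders, not real quotes:
```python
# Sketch of the ~10% sizing rule above plus a rough cost comparison.
# Prices are illustrative placeholders only.
capacity_tb = 3
cache_gb = capacity_tb * 1000 * 0.10          # ~10% of the data set on flash
print(f"{cache_gb:.0f} GB of SSD cache for {capacity_tb} TB")   # 300 GB

hdd_per_tb, ssd_per_tb = 80, 400              # hypothetical street prices
hybrid = capacity_tb * hdd_per_tb + (cache_gb / 1000) * ssd_per_tb
all_flash = capacity_tb * ssd_per_tb
print(f"hybrid ~${hybrid:.0f} vs all-flash ~${all_flash:.0f}")  # 360 vs 1200
```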
-
@ardeyn said:
There is also the option of using SSD for caching rather than for the storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
-
@MattSpeller said:
@ardeyn said:
There is also the option of using SSD for caching rather than for the storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
-
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the option of using SSD for caching rather than for the storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
I thought of that a millisecond after I hit submit heheh
At what point would you say it's worth it to dump RAID controllers and move to software? Might be a topic for another thread or a dedicated rant.
-
Definitely a topic for another thread, but mostly it comes down to the use case. Way better to have it on the controller for a lot of reasons, but more flexible in software. But if you don't have software that supports it, you are screwed.
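On the software side, the usual suspects are things like lvmcache, bcache and dm-cache on Linux, or tiered Storage Spaces on Windows. As a purely conceptual illustration of what such a caching layer does in front of a slow array (a toy, not any real product's implementation):
```python
# Toy illustration only -- a minimal LRU read cache standing in for what a
# software caching layer (lvmcache, bcache, dm-cache, tiered Storage Spaces,
# ...) does in front of a slow array. Not any real product's implementation.
from collections import OrderedDict


class ReadCache:
    def __init__(self, backend_read, capacity_blocks: int):
        self._read = backend_read      # callable: block number -> bytes
        self._cap = capacity_blocks    # how many blocks fit on the "SSD"
        self._lru = OrderedDict()      # block number -> cached data

    def read(self, block: int) -> bytes:
        if block in self._lru:                 # hit: serve from flash
            self._lru.move_to_end(block)
            return self._lru[block]
        data = self._read(block)               # miss: go to the slow array
        self._lru[block] = data
        if len(self._lru) > self._cap:         # evict least recently used
            self._lru.popitem(last=False)
        return data


# e.g. ~300 GB of 4 KiB blocks in front of a 3 TB volume, as sized above:
# cache = ReadCache(slow_array_read, capacity_blocks=73_000_000)
```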
-
@MattSpeller said:
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the option of using SSD for caching rather than for the storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
I thought of that a millisecond after I hit submit heheh
At what point would you say it's worth it to dump RAID controllers and move to software? Might be a topic for another thread or a dedicated rant.
I think the point at which you should consider dumping hardware RAID controllers is the point at which you can run your business from backup power, without interruption.
I'd say if you have a power system so robust that your norm is "software RAID", then you shouldn't even be wasting money on a hardware RAID controller.
-
If you are opening a new thread, can you link me to it? I would love to get involved.