When is an SSD a MUST HAVE for a server? Thoughts? Discussion :D
-
@DustinB3403 said:
So, stupidly faster than what you were used to?
Oh yeah.
My numbers from the regular drives in there were all over the place, but probably pretty normal.
I posted them in this thread if anyone is interested:
http://www.mangolassi.it/topic/7458/swapping-drive-to-another-raid-controller/2
I posted different drives and also different PERC cards.
The results don't make 100% sense to me.
I've never tested the 10-year-old servers I am currently using. That would be interesting.
-
@BRRABill said:
@scottalanmiller said:
The fastest 8-drive RAID 0 array on 15K SAS is only around 2,000 IOPS. The slowest SSD is normally around 25,000 IOPS.
My IOPS on the EDGE SSDs from the other day were
Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]
Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?
I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.
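For anyone who wants to reproduce numbers like these on Linux, a fio run along these lines approximates the 4KiB random-read test at Q=32, T=1; the filename, size, and runtime below are placeholders to adjust for the array under test:
```
# Approximate a CrystalDiskMark-style 4KiB random read: queue depth 32, 1 thread.
# /mnt/array/fio.dat is a placeholder path on the volume being tested.
fio --name=randread --filename=/mnt/array/fio.dat --size=4G \
    --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based
```
Swap --rw=randread for --rw=randwrite to get the corresponding write figure.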
-
@MattSpeller said:
Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?
I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.
No.
I posted those numbers with the hope someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.
Later today I will repost under a separate topic, I think.
-
@BRRABill said:
@MattSpeller said:
Did you tweak the block size in the RAID array to optimize for a certain size of file? Would it make a lot of difference on an SSD?
I was tweaking it on the logging server I'm setting up and it made a TREMENDOUS difference on spinning rust.
No.
I posted those numbers with the hope someone would chime in with that kind of info, but no one ever did, really. I think it got lost because of the topic header.
Later today I will repost under a separate topic, I think.
Please do; I'll share some results from a rust array for comparison if that's helpful.
-
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
-
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
-
@MattSpeller said:
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
-
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
I thought of that a millisecond after I hit submit heheh
At what point would you say it's worth it to dump raid controllers and move to software? Might be a topic for another thread or a dedicated rant.
-
Definitely a topic for another thread, but mostly it comes down to the use case. Way better to have it on the controller for a lot of reasons, but more flexible in software. But if you don't have software that supports it, you are screwed.
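As a concrete example of the software route on Linux, lvmcache can put an SSD in front of a hardware RAID virtual disk. A minimal sketch, assuming the array shows up as /dev/sdb (3TB) and the SSD as /dev/sdc (both names hypothetical); see lvmcache(7) for your LVM version:
```
# Pool the hardware RAID virtual disk and the SSD into one volume group.
pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb /dev/sdc
# Carve the bulk data volume out of the spinning array only.
lvcreate -n data -L 2.7T vg0 /dev/sdb
# Attach an SSD-backed cache in front of it (one-step lvmcache form);
# writethrough keeps the array authoritative if the SSD dies.
lvcreate --type cache --cachemode writethrough -L 280G vg0/data /dev/sdc
```
The ~10:1 data-to-cache ratio from the post above is a rule of thumb; size the cache to your working set, not a fixed percentage.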
-
@MattSpeller said:
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
I thought of that a millisecond after I hit submit heheh
At what point would you say it's worth it to dump raid controllers and move to software? Might be a topic for another thread or a dedicated rant.
I think the point at which you should consider dumping hardware RAID controllers is the point at which you can run your business from backup power, without interruption.
I'd say if you have a power system so robust that your norm is "software RAID" then you shouldn't even be wasting money on a hardware RAID controller.
-
If you are opening a new thread, can you link me to it? I would love to get involved.
-
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?
-
@Dashrender said:
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?
To software, the hardware RAID is just a drive, so it has no means of knowing that it is anything special.
That's the miracle of the block device interface.
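To illustrate that point on a Linux host (device names and models below are hypothetical), the controller's virtual disk shows up as one ordinary drive, which is all a software cache ever sees:
```
lsblk -d -o NAME,SIZE,TYPE,MODEL
# NAME   SIZE  TYPE  MODEL
# sda    1.8T  disk  PERC H710    <- 4-disk RAID 10 array, seen as a single drive
# sdb    223G  disk  EDGE SSD     <- plain SSD a software cache could sit on
```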
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?
To software, the hardware RAID is just a drive, so it has no means of knowing that it is anything special.
That's the miracle of the block device interface.
Please tell me that you're saying that - if you're using a RAID card, then the card must support the use of SSD cache - otherwise I have no clue what you're trying to say.
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?
To software, the hardware RAID is just a drive, so it has no means of knowing that it is anything special.
That's the miracle of the block device interface.
Please tell me that you're saying that - if you're using a RAID card, then the card must support the use of SSD cache - otherwise I have no clue what you're trying to say.
Nope, sorry
I'm saying that if you use a software caching system, let's use ZFS as an example, you can attach the hardware RAID and ZFS will just see it as a single SATA or SAS drive; it has no idea that you have RAID. ZFS will then let you cache in memory and/or on an SSD to accelerate that RAID array, because to ZFS it is just a normal hard drive.
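A minimal sketch of that ZFS setup, with hypothetical device names (/dev/sdb is the hardware RAID virtual disk, /dev/sdc the SSD):
```
# ZFS sees the controller's virtual disk as one ordinary drive.
zpool create tank /dev/sdb
# Add the SSD as an L2ARC read cache in front of the array.
zpool add tank cache /dev/sdc
# RAM caching (the ARC) is automatic; nothing to configure for that part.
```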
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
@MattSpeller said:
@ardeyn said:
There is also the difference between using SSD for caching and for storage itself. If you are running 3TB of storage, you would need around 300GB of SSD cache. A cost-effective alternative to going all flash.
Excellent point, but very dependent on whether you've got a controller that supports it
Or software. Lots of people doing it in software too.
Can a software cache work with a hardware RAID? Or do they have to be paired (hardware with hardware, software with software)?
To software, the hardware RAID is just a drive, so it has no means of knowing that it is anything special.
That's the miracle of the block device interface.
Please tell me that you're saying that - if you're using a RAID card, then the card must support the use of SSD cache - otherwise I have no clue what you're trying to say.
Nope, sorry
I'm saying that if you use a software caching system, let's use ZFS as an example, you can attach the hardware RAID and ZFS will just see it as a single SATA or SAS drive; it has no idea that you have RAID. ZFS will then let you cache in memory and/or on an SSD to accelerate that RAID array, because to ZFS it is just a normal hard drive.
OK that makes sense.
Do Hyper-V and ESXi support this? I'm guessing that XS and KVM do, since they can use ZFS as the file system for their VM storage (I'm assuming).
-
They all do to some degree, but all very differently.
-
Any opinions on VSANs that have SSD caching? I mean, they give you a lot of other stuff, but what would you get in terms of performance?
-
@ardeyn said:
Any opinions on VSANs that have SSD caching? I mean, they give you a lot of other stuff, but what would you get in terms of performance?
Good question for @original_anvil, as he does this. But it gives you a ton, the same as you would get, more or less, with any caching system. The closer you get high-performance cache to where it is used, the bigger the performance leap. A VSAN has the same bottlenecks from the disks that any other storage technology does. If your VSAN is pure SSD, then an SSD cache would do very little (nothing), but if your VSAN is spinning disks, then an SSD cache gives the normal acceleration advantages.
If you were willing to have your SSD cache commit writes without the data being flushed to the VSAN and replicated to other nodes, you could get insane performance improvements, of course, but that would come with extreme risk that would pretty much defeat the VSAN's purpose. From a read perspective, though, the speed-ups are identical to any other caching setup.
-
@scottalanmiller Thanks for bringing me in!
@ardeyn So, yeah, as Scott said, StarWind Virtual SAN (aka StarWind VSAN) allows using SSDs as one of the tiers of the cache, Level 2 to be exact. The combination of RAM as the L1 cache and flash as the L2 cache gives a really good performance boost. The exact numbers depend on the workload, so I don't want to mislead you here. BTW, the data within the cache synchronizes across all the nodes, so we are free to claim that we do fault tolerance at the cache level. Anyway, here is a bit more information about server-side caching:
https://www.starwindsoftware.com/caching-page
Let me know if there is anything else I can help you with.