PCIe SSD vs SAS SSD
-
@scottalanmiller said:
In high-end servers, the Oracle M class as an example, you get dual controllers for storage as part of their "mini" class features. But they use completely discrete controllers and software RAID, things that you can't really do in the SMB and commodity hardware classes.
I've actually never worked on a Unix server that didn't have something like this running. Granted, it's always software RAID; I've never had hardware RAID in Unix.
-
@JasonNM said:
@scottalanmiller said:
In high-end servers, the Oracle M class as an example, you get dual controllers for storage as part of their "mini" class features. But they use completely discrete controllers and software RAID, things that you can't really do in the SMB and commodity hardware classes.
I've actually never worked on a Unix server that didn't have something like this running. Granted, it's always software RAID; I've never had hardware RAID in Unix.
Anything from Oracle or IBM, I would expect it. The one that might be an exception is HP with their HP-UX Itanium systems; I'm not sure if they do the dual software controllers. I've heard rumours that, at least in the smaller units (1U and 2U), they rely on the same SmartArrays as the ProLiants do.
-
@scottalanmiller said:
A PCIe SSD is a controller plus drives all in one. SAS is after the controller stage. So we are comparing apples and oranges.
I don't consider it apples and oranges because I was comparing the whole package: controller and SAS drives vs PCIe SSD.
Have the prices for PCIe SSD drives come down enough to make them the standard? I only saw the Intel prices in the SW thread, $2,300. That seems pretty good for 1.2 TB, and it only uses one PCIe slot. I guess I'll need to break out the prices of creating 1.2 TB of RAID 10 storage for a price comparison.
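Something like the rough sketch below is what I have in mind. The $2,300 / 1.2 TB Intel card is the one from the SW thread; the SAS SSD and controller prices are placeholders I'm assuming purely for illustration:

```python
# Back-of-envelope price comparison: one PCIe SSD vs. 1.2 TB usable of RAID 10
# on SAS SSDs. Only the $2,300 Intel card is from the SW thread; every other
# figure is an assumed placeholder.

pcie_card_cost = 2300            # 1.2 TB Intel PCIe SSD (SW thread price)
pcie_capacity_gb = 1200

sas_ssd_cost = 700               # assumed price per 400 GB SAS SSD
sas_ssd_capacity_gb = 400
raid_controller_cost = 800       # assumed hardware RAID controller price

# RAID 10 mirrors every drive, so usable capacity is half of raw.
mirrored_pairs = pcie_capacity_gb // sas_ssd_capacity_gb   # 3 pairs
drives_needed = mirrored_pairs * 2                         # 6 drives
raid10_usable_gb = mirrored_pairs * sas_ssd_capacity_gb
raid10_total_cost = drives_needed * sas_ssd_cost + raid_controller_cost

print(f"PCIe SSD: {pcie_capacity_gb} GB for ${pcie_card_cost} "
      f"(${pcie_card_cost / pcie_capacity_gb:.2f}/GB)")
print(f"RAID 10:  {raid10_usable_gb} GB usable for ${raid10_total_cost} "
      f"(${raid10_total_cost / raid10_usable_gb:.2f}/GB)")
```

With those placeholder prices the single card actually comes out ahead, but real quotes would need to go in before drawing any conclusion.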
-
PCIe drives are still rather expensive. Most people already have RAID controllers, so that is hard to compare. If you can avoid dropping $800 on a controller plus a minimum of two SSDs, the prices are reasonable. Fusion-io is the leader in PCIe drives and always has been, but they are rather high end.
Disadvantages are that you lose hot swap and easy expansion. Upsides are that moving to a 1U chassis is easier.
-
Right now most people looking at PCIe SSD are still doing so because they want to optimize databases. It's still a premium option. But that will start to change quickly, as the prices come down, capacities go up and the SATA / SAS bottleneck becomes more and more significant.
-
Are those bottlenecks coming for SMB? Or is that not really likely? I guess the bottleneck could come as we virtualize more and more VMs onto a single host. I know we already have examples of businesses running 30+ VMs on a single host, but what about VMs that don't require much storage but require more disk IO, like DB servers? I suppose if your company has those types of IO constraints, they can probably afford the Fusion-io stuff already. So that again takes them out of this conversation.
-
@Dashrender said:
I know we already have examples of businesses running 30+ VMs on a single host,
Those were the "decade ago" numbers. Thirty was small even by the mid-to-late 2000s. If you have need of the VMs, hundreds are not an issue on modern hardware and virtualization platforms. The number virtualized is almost always a limitation of the needs of the company rather than the scalability of the hardware. Now that we can get systems with terabytes of RAM and eight sixteen-core CPUs, the number of VMs per host is pretty much a non-factor except for the biggest workloads.
-
We estimated a few years ago that if we needed to scale to over 1,000 PBXs per chassis, we could do it with 2U Oracle boxes. That's 1,000 fully independent Linux servers, each running a full PBX, all on a two-CPU system!
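Just to show the kind of arithmetic behind an estimate like that (the host and per-PBX figures here are illustrative assumptions, not our actual sizing data):

```python
# Illustrative density math for a "1,000 PBXs on one 2U box" style estimate.
# Every host and per-PBX figure below is an assumption for the sketch only.

host_ram_gb = 1024           # a two-socket 2U box with 1 TB of RAM
host_cores = 36              # two 18-core CPUs
cpu_oversubscription = 8     # tolerable vCPU:core ratio for mostly idle PBXs

pbx_ram_gb = 0.75            # RAM footprint of one small Linux PBX instance
pbx_avg_vcpu = 0.25          # average CPU demand; phone systems sit idle a lot

limit_by_ram = host_ram_gb / pbx_ram_gb
limit_by_cpu = (host_cores * cpu_oversubscription) / pbx_avg_vcpu

print(f"RAM supports ~{int(limit_by_ram)} PBX instances")
print(f"CPU supports ~{int(limit_by_cpu)} PBX instances")
print(f"Host fits    ~{int(min(limit_by_ram, limit_by_cpu))} instances")
```

Under assumptions in that ballpark the math lands comfortably over 1,000 instances per box, which is the point: load, not VM count, is what matters.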
-
@Dashrender said:
Are those bottlenecks coming for SMB? Or is that not really likely?
Well, the bottlenecks exist; the question is whether they matter enough to pay to make them go away. And overall, not too often. Where the money tends to make sense is when you have a large hypervisor host using traditional SAS drives (HDD or SSD or mixed) on a RAID controller for the bulk of workloads and PCIe with a million or more IOPS for one or two really critical, high-performance workloads.
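A crude way to frame that decision, with invented workload numbers just to show the shape of it:

```python
# Crude sizing sketch: keep the bulk of the VMs on the SAS SSD RAID array and
# ask whether any single workload is hot enough to justify a PCIe card.
# All IOPS figures below are invented for illustration.

sas_array_iops = 200_000       # assumed ceiling for a small SAS SSD RAID 10 array
pcie_card_iops = 1_000_000     # order of magnitude for a high-end PCIe SSD

workload_iops = {
    "AD, file, print, web VMs": 8_000,
    "general app servers":      25_000,
    "busy OLTP database":       300_000,   # the one genuinely hot workload
}

hot_threshold = 100_000
bulk_total = sum(v for v in workload_iops.values() if v < hot_threshold)

print(f"Bulk workloads: ~{bulk_total} IOPS -> "
      f"{'fits on' if bulk_total <= sas_array_iops else 'exceeds'} the SAS array")
for name, iops in workload_iops.items():
    if iops >= hot_threshold:
        print(f"{name}: ~{iops} IOPS -> candidate for its own PCIe SSD")
```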
-
@Dashrender said:
I suppose if your company has those types of IO constraints, they can probably afford the Fusion-io stuff already. So that again takes them out of this conversation.
Generally, yes. If you need a million IOPS on a single box, chances are you are making money somewhere.
-
I didn't mention this in the OP, but the point of this discussion was geared toward the SMB. Unless the SMB's goal is to provide PBX services, they wouldn't have loads like the ones you're mentioning. Nor would they have the resources to purchase an 8-proc, 16-core, etc. box.
-
@Dashrender said:
I didn't mention this in the OP, but the point of this discussion was geared toward the SMB. Unless the SMB's goal is to provide PBX services, they wouldn't have loads like the ones you're mentioning. Nor would they have the resources to purchase an 8-proc, 16-core, etc. box.
No, of course not, but the 30 VM limit per box was my point: that's not a limitation and hasn't been for a very long time. In the SMB, though, people often feel that there are constraints that rarely exist, like even worrying about VM count per server. It's about load, not VM count. Typically SMB loads are very small per VM. Things like web servers, application servers, Active Directory, even most SMB databases are completely tiny. Putting hundreds onto a single server isn't a big deal.
-
@Dashrender said:
Nor would they have the resources to purchase an 8-proc, 16-core, etc. box.
But if they had the workloads, they would, as that is often the cheapest path to hosting them.
-
@scottalanmiller said:
@Dashrender said:
I didn't mention this in the OP, but the point of this discussion was geared toward the SMB. Unless the SMB's goal is to provide PBX services, they wouldn't have loads like the ones you're mentioning. Nor would they have the resources to purchase an 8-proc, 16-core, etc. box.
No, of course not, but the 30 VM limit per box was my point: that's not a limitation and hasn't been for a very long time. In the SMB, though, people often feel that there are constraints that rarely exist, like even worrying about VM count per server. It's about load, not VM count. Typically SMB loads are very small per VM. Things like web servers, application servers, Active Directory, even most SMB databases are completely tiny. Putting hundreds onto a single server isn't a big deal.
I picked that as a conservative number, not something that I saw as a constraint. I realize that it's about load: do I have enough IOPS, CPU, and RAM to get the job done? Of course, as the need grows, larger and larger boxes exist to provide the resources. Though I'm guessing that because of the organic nature of SMBs, those types of boxes would rarely, if ever, be seen there.
-
Rarely, but they are seen. At least once a fortnight, I would guess, I speak to someone looking at quad-processor boxes or larger (roughly double to quadruple the standard SMB server size). Still rare, but not super rare. Once you get past the capacity needs of a single standard box, it often, nearly always in fact, makes sense to scale vertically until you just can't stay in a single box anymore. This is why enterprises often use massive UNIX and mainframe systems. The power in a single box is just so valuable.
-
@Dashrender said:
I wasn't aware that PCIe SSDs were as reliable as RAID-based SAS SSDs until I ran across a thread on SW this morning.
These devices are (or at least were) normally more than 100% more expensive than similarly sized SAS SSDs plus the accompanying RAID controller.
Now obviously we IT folks in general have found that single RAID controllers are reliable enough not to warrant having a backup within the same chassis; it's just as likely for the whole system to die as for the RAID controller to die, so instead of backing up the RAID controller we back up the whole system to cover those situations.
I've done a little reading now; it seems that the resiliency of PCIe SSDs is approximately equivalent to a RAIDed setup, and the controller is at least equivalent to normal RAID controllers.
As the costs come down, these seem like the clear winner.
What is your experience?
We recommend using PCIe and the newer NVMe drives for cache. You can go all-flash, of course, but that's typically more expensive than desired. Back to SAS SSDs: they are overpriced IMHO compared to their SATA siblings.
-
@scottalanmiller said:
Rarely, but they are seen. At least once a fortnight, I would guess, I speak to someone looking at quad-processor boxes or larger (roughly double to quadruple the standard SMB server size). Still rare, but not super rare. Once you get past the capacity needs of a single standard box, it often, nearly always in fact, makes sense to scale vertically until you just can't stay in a single box anymore. This is why enterprises often use massive UNIX and mainframe systems. The power in a single box is just so valuable.
Sure, if you're starting from scratch or it's refresh time anyhow, but in the SMB that's rarely the case. That said, the recent example on Spiceworks that this question/conversation spawned from shows that re-engineering your current setup with more RAM and a better disk subsystem is often very viable, and probably overlooked by many SMBs.
-
@KOOLER said:
@Dashrender said:
I wasn't aware that PCIe SSDs were as reliable as RAID-based SAS SSDs until I ran across a thread on SW this morning.
These devices are (or at least were) normally more than 100% more expensive than similarly sized SAS SSDs plus the accompanying RAID controller.
Now obviously we IT folks in general have found that single RAID controllers are reliable enough not to warrant having a backup within the same chassis; it's just as likely for the whole system to die as for the RAID controller to die, so instead of backing up the RAID controller we back up the whole system to cover those situations.
I've done a little reading now; it seems that the resiliency of PCIe SSDs is approximately equivalent to a RAIDed setup, and the controller is at least equivalent to normal RAID controllers.
As the costs come down, these seem like the clear winner.
What is your experience?
We recommend using PCIe and the newer NVMe drives for cache. You can go all-flash, of course, but that's typically more expensive than desired. Back to SAS SSDs: they are overpriced IMHO compared to their SATA siblings.
Do you recommend SATA SSDs over SAS SSDs because the price difference is rarely warranted by the performance/failure-rate difference?
-
SATA SSDs often make sense because they are cheaper and the protocol benefits of SAS really aren't there in the same way for SSDs.
-
@Dashrender said:
@KOOLER said:
@Dashrender said:
I wasn't aware that PCIe SSDs were as reliable as RAID-based SAS SSDs until I ran across a thread on SW this morning.
These devices are (or at least were) normally more than 100% more expensive than similarly sized SAS SSDs plus the accompanying RAID controller.
Now obviously we IT folks in general have found that single RAID controllers are reliable enough not to warrant having a backup within the same chassis; it's just as likely for the whole system to die as for the RAID controller to die, so instead of backing up the RAID controller we back up the whole system to cover those situations.
I've done a little reading now; it seems that the resiliency of PCIe SSDs is approximately equivalent to a RAIDed setup, and the controller is at least equivalent to normal RAID controllers.
As the costs come down, these seem like the clear winner.
What is your experience?
We recommend using PCIe and the newer NVMe drives for cache. You can go all-flash, of course, but that's typically more expensive than desired. Back to SAS SSDs: they are overpriced IMHO compared to their SATA siblings.
Do you recommend SATA SSDs over SAS SSDs because the price difference is rarely warranted by the performance/failure-rate difference?
Scott had replied below. The key point with commodity hardware is that economies of scale kick in and prices drop dramatically, so the typically more expensive gear ends up with a worse $/TB or $/IOPS rate than the COTS alternative.
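Rough illustration of that $/TB and $/IOPS comparison; none of these prices are quotes, just placeholder street numbers to show how the math tends to land:

```python
# Placeholder street prices to illustrate the $/GB and $/IOPS comparison.
# None of these figures are real quotes; plug in current prices to reuse this.

drives = {
    # name            (price_usd, capacity_gb, random_read_iops)
    "SATA SSD":       (450,   960,  90_000),
    "SAS SSD":        (1100,  800, 130_000),
    "PCIe/NVMe SSD":  (2300, 1200, 450_000),
}

for name, (price, capacity_gb, iops) in drives.items():
    dollars_per_gb = price / capacity_gb
    dollars_per_kiops = price / iops * 1000
    print(f"{name:14s} ${dollars_per_gb:5.2f}/GB   ${dollars_per_kiops:6.2f} per 1k IOPS")
```

With numbers in that ballpark the SAS drive tends to lose on both metrics, which is the economy-of-scale point above.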