PCIe SSD vs SAS SSD
-
@Dashrender said:
Are those bottlenecks coming for SMBs? Or is that not really likely?
Well, the bottlenecks exist; the question is whether they matter enough to pay to make them go away. And overall, not too often. Where the money tends to make sense is when you have a large hypervisor host using traditional SAS drives (HDD, SSD, or mixed) on a RAID controller for the bulk of the workloads, plus PCIe flash with a million or more IOPS for one or two really critical, high-performance workloads.
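As a rough sketch of that decision (all device ratings, workload names, and figures below are assumptions for illustration, not measurements), the arithmetic usually comes down to whether more than a couple of workloads actually exceed what the SAS/RAID tier can deliver:

```python
# Hypothetical back-of-the-envelope check: does anything actually outrun the
# SAS/RAID tier, or do only one or two workloads need PCIe-class IOPS?
# Every figure here is an illustrative assumption, not a measurement.

SAS_ARRAY_IOPS = 80_000        # assumed ceiling of a RAIDed SAS SSD array
PCIE_CARD_IOPS = 1_000_000     # assumed ceiling of a single PCIe flash card

workloads = {                  # assumed peak IOPS per workload
    "file-server": 1_200,
    "active-directory": 300,
    "erp-database": 9_000,
    "analytics-db": 250_000,   # the one genuinely hot workload
}

bulk = {k: v for k, v in workloads.items() if v <= SAS_ARRAY_IOPS * 0.5}
hot = {k: v for k, v in workloads.items() if k not in bulk}

print(f"Bulk tier needs ~{sum(bulk.values()):,} IOPS "
      f"(SAS array assumed to handle ~{SAS_ARRAY_IOPS:,})")
for name, iops in hot.items():
    fits = "fits" if iops <= PCIE_CARD_IOPS else "does not fit"
    print(f"{name} wants ~{iops:,} IOPS -> candidate for the PCIe card ({fits})")
```

If the "hot" list comes out empty, the extra spend rarely pays for itself; if it has one or two entries, that is the mixed SAS-plus-PCIe pattern described above.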
-
@Dashrender said:
I suppose if your company has those types of IO constraints, it can probably afford the Fusion-io stuff already. So again, that takes them out of this conversation.
Generally, yes. If you need a million IOPS on a single box, chances are you are making money somewhere.
-
I didn't mention this in the OP, but the point of this discussion was geared toward the SMB. Unless the SMB's goal is to provide PBX services, they wouldn't have loads like the one you're mentioning. Nor would they have the resources to purchase an 8-proc, 16-core, etc. box.
-
@Dashrender said:
I didn't mention this in the OP, but the point of this discussion was geared toward the SMB. Unless the SMB's goal is to provide PBX services, they wouldn't have loads like the one you're mentioning. Nor would they have the resources to purchase an 8-proc, 16-core, etc. box.
No, of course not, but the 30 VM limit per box was my point: that's not a limitation and has not been for a very long time. In the SMB, though, people often feel that there are constraints that rarely exist, like even worrying about VM count per server. It's about load, not VM count. Typically SMB loads are very small per VM. Things like web servers, application servers, Active Directory, and even most SMB databases are completely tiny. Putting hundreds onto a single server isn't a big deal.
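To put some hypothetical numbers behind "load, not VM count" (every per-VM and per-host figure below is an assumption, not a benchmark):

```python
# Minimal sketch of "it's about load, not VM count": sum per-VM demand and
# compare against one host's capacity. All numbers are illustrative assumptions.

host = {"vcpu": 64, "ram_gb": 768, "iops": 80_000}

# A typical tiny SMB VM (web server, AD, small app or database server),
# expressed as an averaged demand rather than a peak.
small_vm = {"vcpu": 0.2, "ram_gb": 3, "iops": 60}

def max_vms(host, vm):
    """How many identical VMs fit before any single resource runs out."""
    return int(min(host[k] / vm[k] for k in vm))

print(f"Roughly {max_vms(host, small_vm)} such VMs per host")
# RAM is the binding constraint here (768 / 3 = 256), far beyond any
# "30 VMs per box" rule of thumb.
```

With made-up numbers like these, a single ordinary host absorbs a few hundred tiny SMB workloads before anything runs out, which is the point about counting load rather than VMs.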
-
@Dashrender said:
Nor would they have the resources to purchase an 8-proc, 16-core, etc. box.
But if they had the workloads, they would, as that is often the cheapest path to hosting them.
-
@scottalanmiller said:
@Dashrender said:
I didn't mention this in the OP, but the point of this discussion was geared toward the SMB. Unless the SMB's goal is to provide PBX services, they wouldn't have loads like the one you're mentioning. Nor would they have the resources to purchase an 8-proc, 16-core, etc. box.
No, of course not, but the 30 VM limit per box was my point: that's not a limitation and has not been for a very long time. In the SMB, though, people often feel that there are constraints that rarely exist, like even worrying about VM count per server. It's about load, not VM count. Typically SMB loads are very small per VM. Things like web servers, application servers, Active Directory, and even most SMB databases are completely tiny. Putting hundreds onto a single server isn't a big deal.
I picked that as a conservative number, not something I felt was constraining me. I realize that it's about load: do I have enough IOPS, CPU, and RAM to get the job done? Of course, as the need grows, larger and larger boxes exist to provide the resources. Though I'm guessing that because of the organic nature of SMBs, those types of boxes would rarely if ever be seen there.
-
Rarely, but they are seen. At least once a fortnight, I would guess, I speak to someone looking at the quad processor boxes or larger (roughly double to quadruple the standard SMB server size). Still rare, but not super rare. Once you get past the capacity needs of a single standard box, it often, nearly always in fact, makes sense to scale vertically until you just can't stay in a single box anymore. This is why enterprises often use massive UNIX and mainframe systems. The power in a single box is just so valuable.
-
@Dashrender said:
I wasn't aware that PCIe SSDs were as reliable as RAID-based SAS SSDs until I ran across a thread on SW this morning.
These devices are (or at least were) normally more than double the cost of a similarly sized SAS SSD and the accompanying RAID controller.
Now obviously we IT folks in general have found that single RAID controllers are reliable enough not to warrant having a backup within the same chassis; we find that it's just as likely to have a whole system die as to have the RAID controller die, so instead of backing up the RAID controller we back up the whole system to cover those situations.
I've done a little reading now, and it seems that the resiliency of PCIe SSDs is approximately equivalent to a RAIDed setup, and the onboard controller is at least equivalent to a normal RAID controller.
As the costs come down, these seem like the clear winner.
What is your experience?
We recommend using PCIe and the newer NVMe drives for cache. You can go all-flash, of course, but that's typically more expensive than desired. Back to SAS SSDs: they are overpriced IMHO compared to their SATA siblings.
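A hedged sketch of why the cache approach tends to win on cost (latencies, prices, capacities, and the hit rate below are all assumptions, not vendor figures):

```python
# Illustrative model of an NVMe cache in front of a cheaper capacity tier,
# compared with going all-flash. Every number here is an assumption.

nvme_latency_us = 100          # assumed average NVMe latency
capacity_latency_us = 1_000    # assumed average latency of the capacity tier
hit_rate = 0.9                 # assumed fraction of I/O served from cache

effective_us = hit_rate * nvme_latency_us + (1 - hit_rate) * capacity_latency_us
print(f"Effective latency with cache: ~{effective_us:.0f} us "
      f"vs ~{capacity_latency_us} us uncached")

tb_needed = 20
nvme_per_tb, sata_ssd_per_tb = 200, 80          # assumed $/TB
cache_tb = 2                                    # assumed cache size
tiered_cost = cache_tb * nvme_per_tb + tb_needed * sata_ssd_per_tb
all_flash_cost = tb_needed * nvme_per_tb
print(f"NVMe cache + SATA capacity: ~${tiered_cost:,}  "
      f"vs all-flash NVMe: ~${all_flash_cost:,}")
```

With a hot working set that fits in the cache, this made-up example gets most of the latency benefit for roughly half the spend, which is the usual argument for a cache tier over going all-flash.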
-
@scottalanmiller said:
Rarely, but they are seen. At least once a fortnight, I would guess, I speak to someone looking at the quad processor boxes or larger (roughly double to quadruple the standard SMB server size). Still rare, but not super rare. Once you get past the capacity needs of a single standard box, it often, nearly always in fact, makes sense to scale vertically until you just can't stay in a single box anymore. This is why enterprises often use massive UNIX and mainframe systems. The power in a single box is just so valuable.
Sure, if you're starting from scratch or it's refresh time anyhow, but in the SMB that's rarely the case. That said, the recent example on Spiceworks that this question/conversation spawned from shows that re-engineering your current setup with more RAM and a better disk subsystem is often very viable, and probably overlooked by many SMBs.
-
@KOOLER said:
@Dashrender said:
I wasn't aware that PCIe SSDs were as reliable as RAID-based SAS SSDs until I ran across a thread on SW this morning.
These devices are (or at least were) normally more than double the cost of a similarly sized SAS SSD and the accompanying RAID controller.
Now obviously we IT folks in general have found that single RAID controllers are reliable enough not to warrant having a backup within the same chassis; we find that it's just as likely to have a whole system die as to have the RAID controller die, so instead of backing up the RAID controller we back up the whole system to cover those situations.
I've done a little reading now, and it seems that the resiliency of PCIe SSDs is approximately equivalent to a RAIDed setup, and the onboard controller is at least equivalent to a normal RAID controller.
As the costs come down, these seem like the clear winner.
What is your experience?
We recommend using PCIe and the newer NVMe drives for cache. You can go all-flash, of course, but that's typically more expensive than desired. Back to SAS SSDs: they are overpriced IMHO compared to their SATA siblings.
Do you recommend SATA SSDs over SAS SSDs because the SAS price premium is rarely warranted by the difference in performance or failure rate?
-
SATA SSDs often make sense because they are cheaper, and the protocol benefits of SAS really are not there in the same way for SSDs.
-
@Dashrender said:
@KOOLER said:
@Dashrender said:
I wasn't aware that PCIe SSDs were as reliable as RAID-based SAS SSDs until I ran across a thread on SW this morning.
These devices are (or at least were) normally more than double the cost of a similarly sized SAS SSD and the accompanying RAID controller.
Now obviously we IT folks in general have found that single RAID controllers are reliable enough not to warrant having a backup within the same chassis; we find that it's just as likely to have a whole system die as to have the RAID controller die, so instead of backing up the RAID controller we back up the whole system to cover those situations.
I've done a little reading now, and it seems that the resiliency of PCIe SSDs is approximately equivalent to a RAIDed setup, and the onboard controller is at least equivalent to a normal RAID controller.
As the costs come down, these seem like the clear winner.
What is your experience?
We recommend using PCIe and the newer NVMe drives for cache. You can go all-flash, of course, but that's typically more expensive than desired. Back to SAS SSDs: they are overpriced IMHO compared to their SATA siblings.
Do you recommend SATA SSDs over SAS SSDs because the SAS price premium is rarely warranted by the difference in performance or failure rate?
Scott replied below. The key point with commodity hardware is that economies of scale kick in and the price drops dramatically, so the more expensive stuff typically doesn't have a great $/TB or $/IOPS rate compared to the COTS option.
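A toy illustration of that $/TB and $/IOPS point (prices and ratings below are placeholders, not quotes from any vendor):

```python
# Illustrative $/TB and $/IOPS comparison behind the "commodity wins" point.
# Prices and performance ratings are made-up placeholders.

drives = {
    # name: (price_usd, capacity_tb, rated_iops) -- all assumed
    "SAS SSD":  (900, 1.6, 130_000),
    "SATA SSD": (350, 1.9, 95_000),
}

for name, (price, tb, iops) in drives.items():
    print(f"{name:9s} ${price / tb:7.2f}/TB   ${price / iops * 1000:5.2f} per 1k IOPS")
```

Even with a modest spec gap in SAS's favor, the commodity drive usually comes out ahead on both ratios, which is the economy-of-scale effect described above.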
-
@scottalanmiller said:
SATA SSDs often make sense because they are cheaper and the protocol benefits of SAS really are not there in the same ways for SSDs.
So this brings up @DustinB3403's question of best practices, or more aptly, as you said @scottalanmiller, implementation patterns: which workloads, described as generically as you can, would go which way? I suppose it might be easier to say 'in this set of circumstances (list 1-4) you do this; generally, in the rest, you do that.'