Why are local drives better
-
@Grey said in Why are local drives better:
@Dashrender said in Why are local drives better:
@Grey said in Why are local drives better:
ay max at 3, 6 or 3.2 16 Gbit/s (1969 MB/s). Ergo, a 10GB FCoE SAN that's running OBR10 or Raid6 with a large SSD/RAM c
You can literally use local disk for any thing you can use remote disk. So I'm not really sure what you're digging for.
Transfer rates. A local bus will max at 6, while a SAN on a 10 GB link (or dual 10s, whatever) can go higher as the node can cache in RAM and then write back to the drives that are local to the SAN at the slower, local rate.
So, just like the cache on a local controller?
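The raw numbers being traded here can be sanity-checked with a little arithmetic. This is a rough sketch (the function name is ours); the encoding efficiencies are the standard ones for each link type, and real-world throughput is lower still once protocol overhead and contention are counted:

```python
def payload_mb_per_s(line_rate_gbit, encoding_efficiency):
    """Convert a raw line rate to usable payload bandwidth in MB/s."""
    return line_rate_gbit * 1e9 * encoding_efficiency / 8 / 1e6

# SATA III: 6 Gbit/s line rate with 8b/10b encoding (80% efficient)
print(payload_mb_per_s(6, 0.8))    # ~600 MB/s
# 10 GbE: the quoted 10 Gbit/s is already the payload rate (64b/66b
# line coding sits below it), so efficiency here is ~1.0
print(payload_mb_per_s(10, 1.0))   # ~1250 MB/s
```

So a single 10 Gb link does out-run one SATA III device, which is the point being argued, though both are well below what a local PCIe/NVMe bus can move.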
-
@travisdh1 said in Why are local drives better:
So, just like the cache on a local controller?
Does your local controller have a 256gb cache?
-
@Grey said in Why are local drives better:
Transfer rates. A local bus will max at 6, while a SAN on a 10 GB link (or dual 10s, whatever) can go higher as the node can cache in RAM and then write back to the drives that are local to the SAN at the slower, local rate.
Local bus can always go faster. Any remote disk always has to traverse the local bus, but with higher latency. Remember that any remote disk is local to itself. So any local limitation applies universally, and all remote limitations are on top of that.
-
@Grey said in Why are local drives better:
Does your local controller have a 256gb cache?
If I've got that much ram assigned to it, sure, why not?
-
@Grey said in Why are local drives better:
Does your local controller have a 256gb cache?
It can. If your SAN can have a cache that big, your local storage can. Because that cache is local on the SAN.
-
@travisdh1 said in Why are local drives better:
If I've got that much ram assigned to it, sure, why not?
I've worked on local storage systems that have that much cache just recently, in fact.
-
@scottalanmiller said in Why are local drives better:
I've worked on local storage systems that have that much cache just recently, in fact.
Nice. Was that a hardware controller or software based?
-
@scottalanmiller said in Why are local drives better:
I've worked on local storage systems that have that much cache just recently, in fact.
Cool. Many of the systems I've seen deployed and worked with are so old that a 1gb cache is considered exotic.
-
@travisdh1 said in Why are local drives better:
Nice. Was that a hardware controller or software based?
Software, which is pretty much the only thing for enterprise systems.
-
@Grey said in Why are local drives better:
Cool. Many of the systems I've seen deployed and worked with are so old that a 1gb cache is considered exotic.
That's only on commodity hardware with hardware RAID. That's generally what SAN and NAS skip to get around those limitations.
-
@scottalanmiller said in Why are local drives better:
Software. Which is pretty much the only thing for enterprise systems.
I should've known. It's so easy to make and use a huge cache with md; more system memory means more cache.
-
Most local storage approaches these days skip the mammoth-cache approach and instead go for insanely fast SSDs, because of the lack of latency between the system and the disks. You can get millions of IOPS from local disks even before the cache, so often the cache is kept small, only to absorb some writes rather than to serve reads.
But having the cache is very possible, just not always practical.
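The "cache writes in RAM, destage to the slower disks later" behavior described throughout this thread can be sketched as a toy model. This is our illustration only, not any controller vendor's or SAN vendor's implementation; all names are made up:

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy write-back cache: writes land in fast memory and are flushed
    to the slow backing store later. This is, in miniature, what both a
    RAID controller's DRAM cache and a SAN head's RAM cache do to
    absorb write bursts."""

    def __init__(self, backing, capacity):
        self.backing = backing      # dict standing in for the slow disks
        self.capacity = capacity    # max dirty blocks held in RAM
        self.dirty = OrderedDict()  # block -> data, oldest first

    def write(self, block, data):
        self.dirty[block] = data
        self.dirty.move_to_end(block)
        if len(self.dirty) > self.capacity:
            # Cache full: destage the oldest dirty block to backing store
            old_block, old_data = self.dirty.popitem(last=False)
            self.backing[old_block] = old_data

    def read(self, block):
        # Dirty data must be served from cache, never the stale backing copy
        if block in self.dirty:
            return self.dirty[block]
        return self.backing.get(block)

    def flush(self):
        # e.g. on shutdown with a battery/flash-backed cache: push all down
        while self.dirty:
            block, data = self.dirty.popitem(last=False)
            self.backing[block] = data
```

The point the thread keeps circling: this structure is identical whether the RAM sits behind a local controller or inside a SAN node; the SAN just adds a network hop in front of it.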
-
@Grey said in Why are local drives better:
The only reason is for speed, in my world. You could do an SSD DAS (array or not) and it will be faster than any NAS, until you get up to the level of a fiber network SAN that has more transfer speed than the SAS or SATA drives in question, which may max at 3, 6, or 16 Gbit/s (1969 MB/s). Ergo, a 10 Gb FCoE SAN that's running OBR10 or RAID 6 with a large SSD/RAM cache could transfer larger files faster than DAS using SATA 3, which maxes out at 6 Gbit/s. The problem, obviously, is contention, so you might never see those max speeds on the SAN in a real-world production environment.
I guess it all depends on how you're going to use your DAS, and if it's just a machine with a single drive, that screams workstation (that you don't care about the data or uptime is implied by the lack of an array). If you want to make a faster workstation, get a pair of SSDs and run them in RAID0.
^^ speaking of (and hopefully not a tangent)
Anyone heard of a SATA upgrade/replacement coming down the line?
-
@MattSpeller said in Why are local drives better:
^^ speaking of (and hopefully not a tangent)
Anyone heard of a SATA upgrade/replacement coming down the line?
According to the Wikipedia article, SAS-4 will be hitting 22.5 Gbit/s this year, but SATA will still be stuck at 16 Gbit/s.
-
@MattSpeller said in Why are local drives better:
^^ speaking of (and hopefully not a tangent)
Anyone heard of a sata upgrade/replacement coming down the line?
I thought M.2 already replaced it.
-
@scottalanmiller said in Why are local drives better:
I thought M.2 already replaced it.
That's the problem with the M.2 interface: is it using SATA or PCIe? It can be either; you have to actually read the documentation to figure it out.
-
@travisdh1 said in Why are local drives better:
I should've known. It's so easy to make and use a huge cache with md, more system memory = more cache.
Using "md"? What do you mean? Linux automatically caches I/O with available RAM, AFAIK.
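That caveat is right: md itself doesn't provide the cache; the Linux kernel's page cache does, using whatever RAM is otherwise free. A quick way to see how much RAM is currently acting as cache is to read `/proc/meminfo` (Linux-only; the function name here is ours):

```python
def page_cache_mib():
    """Return the current Linux page-cache size in MiB, or None if
    /proc/meminfo is unavailable (non-Linux systems)."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("Cached:"):
                    # Line format: "Cached:  1234567 kB"
                    return int(line.split()[1]) // 1024
    except OSError:
        return None
    return None

print(page_cache_mib())
```

On a box with 256 GB of RAM and an idle workload, most of that memory will show up here, which is exactly the "huge cache for free" being described.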
-
@Grey said in Why are local drives better:
Transfer rates. A local bus will max at 6, while a SAN on a 10 GB link (or dual 10s, whatever) can go higher as the node can cache in RAM and then write back to the drives that are local to the SAN at the slower, local rate.
I'm coming back to this thread after a while - so I see that Scott has already mentioned that you can get 10 Gb locally.
The thing that those of us who aren't Scott have to remember is that NAS/SAN is always second to local. Anything those can do, local can do better. Scott's already given the reasons.
-
Lower cost makes local drives better.
Less network traffic makes local drives better.
Fewer switches, jacks, and cables make local drives better.
Less IP addressing, setup, maintenance, updating, and firmware management makes local drives better.