Solved Window server standard edition on Hyper V- means two Wins VMs ?
-
SAS and SATA are like English and French. Either you speak English or you speak French. Whether you wear a coat or a tie isn't a factor in the question of "what language do you speak."
-
@Dashrender said in Window server standard edition on Hyper V- means two Wins VMs ?:
Well, my server is old, 8+ years, but it has NL-SATA in a 2.5 in form factor.
ST9500530NS Seagate Constellation ST9500530NS 500GB 7200 RPM 32MB Cache SATA 3.0Gb/s 2.5" Internal Enterprise-class Hard Drive Bare Drive
The term "NL-SATA" never appears in that drive information: https://www.seagate.com/staticfiles/support/docs/manual/enterprise/Constellation 2_5 in/100538694d.pdf
It's a SATA drive, that's all. I can only assume that the "NL" is something that you added somewhere by accident?
-
@scottalanmiller said in Window server standard edition on Hyper V- means two Wins VMs ?:
The term "NL-SATA" never appears in that drive information: https://www.seagate.com/staticfiles/support/docs/manual/enterprise/Constellation 2_5 in/100538694d.pdf
It's a SATA drive, that's all. I can only assume that the "NL" is something that you added somewhere by accident?
You're right, it doesn't appear there. It did appear in the IBM documentation when I bought it, but that has all been sold to Lenovo now... but meh.
-
@Dashrender said in Window server standard edition on Hyper V- means two Wins VMs ?:
You're right, it doesn't appear there. It did appear in the IBM documentation when I bought it, but that has all been sold to Lenovo now... but meh.
If you search online, it's really clear that it's just a typo a couple of vendors made on casual pages here and there. Easy to do when you have NL-SAS and SATA crossover models, but it's just a typo. It's not a thing and wouldn't mean anything.
-
@scottalanmiller said in Window server standard edition on Hyper V- means two Wins VMs ?:
If you search online, it's really clear that it's just a typo a couple of vendors made on casual pages here and there. Easy to do when you have NL-SAS and SATA crossover models, but it's just a typo. It's not a thing and wouldn't mean anything.
OK, great... but we're right back to the same point: my server supports both SAS and SATA... and if what Pete is saying is correct, there's not really a difference anymore.
-
@Dashrender said in Window server standard edition on Hyper V- means two Wins VMs ?:
my server supports both SAS and SATA... and if what Pete is saying is correct, there's not really a difference anymore.
In performance only - that's all he's saying. And only when in a server. He's still agreeing that there is a big difference when they are stand-alone. SAS still has features beyond performance that SATA doesn't.
-
Constellation was an enterprise drive line, regardless of the interface. 500GB in a 2.5" drive was high capacity at the time. Constellation ES was the 3.5" line, and those went up to 2TB, I think. Both were available in SATA and SAS.
"Nearline" (near online) is an old term from the tape era but reused for marketing hard drives so enterprise IT could understand the difference between a fast drive with low capacity (for SQL server etc) and a slow drive with high capacity (for backups etc).
-
@scottalanmiller said in Window server standard edition on Hyper V- means two Wins VMs ?:
In performance only - that's all he's saying. And only when in a server. He's still agreeing that there is a big difference when they are stand-alone. SAS still has features beyond performance that SATA doesn't.
Yes.
-
@Dashrender said in Window server standard edition on Hyper V- means two Wins VMs ?:
and if what Pete is saying is correct, there's not really a difference anymore.
Yellow Bricks, generally considered one of the most important storage knowledge sources, still says that SAS queue depth is very important in RAID performance...
http://www.yellow-bricks.com/2014/06/09/queue-depth-matters/
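As an aside, if you want to see what queue depth your own drives are actually running at, Linux exposes it through sysfs for anything on the SCSI stack (SAS and SATA both). A minimal sketch in Python - the device name "sda" is just an assumption, adjust for your box; NVMe exposes its queues differently:
```python
# Minimal sketch: read the command queue depth Linux is using for a
# SATA/SAS drive attached via the SCSI stack. "sda" is an assumption;
# adjust for your system.
from pathlib import Path

dev = "sda"
attr = Path(f"/sys/block/{dev}/device/queue_depth")
if attr.exists():
    print(f"{dev} queue depth: {attr.read_text().strip()}")
else:
    print(f"{dev} has no queue_depth attribute (not a SCSI-stack device?)")
```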
-
@scottalanmiller said in Window server standard edition on Hyper V- means two Wins VMs ?:
Yellow Bricks, generally considered one of the most important storage knowledge sources, still says that SAS queue depth is very important in RAID performance...
http://www.yellow-bricks.com/2014/06/09/queue-depth-matters/
It did not really say that. The blog post author said six years ago that the controller command queue depth was important (another thing) and that SAS drives are the way to go because we'd have to imagine that the SATA command queue depth can become a "choking point".
Well, I'm sure it can, but I don't have to imagine. The same 16TB drive model in SATA and SAS versions has the same maximum IOPS for random read/write operations. Hence the SATA queue depth is obviously enough on that HDD, or the IOPS would be higher on the SAS drive.
But I agree with the blog that if the price is roughly the same, you might as well buy the SAS drive - if your system can handle it.
-
@Pete-S said in Window server standard edition on Hyper V- means two Wins VMs ?:
Hence the SATA queue depth is obviously enough on that HDD, or the IOPS would be higher on the SAS drive.
Measured IOPS is contrived and bypasses the need for a queue, though, so it doesn't tell us actual performance under load. It doesn't imply anything about the queues. A max IOPS figure is always produced under conditions where the queue is pre-managed.
-
@Pete-S said in Window server standard edition on Hyper V- means two Wins VMs ?:
The blog post author said six years ago that the controller command queue depth was important (another thing) and that SAS drives are the way to go because we'd have to imagine that the SATA command queue depth can become a "choking point".
Well, that it was six years ago doesn't affect anything; there's been no change to the technology. If it was true then, it would be true now.
The queue at the controller is a little different from the queue at the drives. So we'd expect that the drive queue still matters.
It would be an interesting thing to measure, but you'd have to have several drives of both SATA and SAS that are otherwise identical and a good test system. Not a cheap thing to test, sadly.
-
So this is suggesting...
https://www.ibm.com/support/knowledgecenter/P8DEA/p8ebk/p7ebkdrivequeuedepth.htm
IBM has a utility and guide for tuning the queue depth on individual drives under hardware RAID to adjust throughput vs. latency. This implies pretty heavily that the queue does impact performance and that tweaking it as needed does matter. If it didn't matter, they'd be expected to ignore it, or to just set it to the minimum, since it would have no purpose. But the fact that they tell you how to tune it for different kinds of performance means that either they are faking it, or it really matters.
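For what it's worth, Linux has an analogous per-drive knob for SCSI-stack devices, so you can experiment with this yourself. A hedged sketch - the device name and depth are illustrative only, it needs root, and it belongs on a test box, not production:
```python
# Hedged sketch of the Linux analogue of the tuning IBM describes:
# the per-device queue depth is writable via sysfs (root required).
# Lower it to favor latency, raise it (up to the drive/HBA limit)
# to favor throughput.
from pathlib import Path

def set_queue_depth(dev: str, depth: int) -> None:
    attr = Path(f"/sys/block/{dev}/device/queue_depth")
    attr.write_text(str(depth))  # same effect as: echo N > .../queue_depth
    print(f"{dev} queue depth now {attr.read_text().strip()}")

# set_queue_depth("sda", 16)  # try on a test box, not in production
```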
-
Here is an interesting theory as to why SSDs need longer queue depths in the drives, rather than in the controllers...
"Flash is so fast that the latency between responding to a request and getting a new request is a significant part of the process. With a hard disk that reponse/request latency is insignificant compared to rotational delay and head seek time. But with an SSD that waiting is a much larger percentage. Having a queue of requests gives the SSD more work to do after it responds to earlier work."
Just a random poster's opinion, but an interesting thought.
-
Queue depth is the same as the maximum number of outstanding I/O operations - a todo list for the drive.
From that todo list the drive can do some tasks together because it's already in the neighborhood, so to speak. And it can reorder the tasks so it doesn't have to travel as far. So it can complete more tasks in the same amount of time.
A SATA drive has 32 items on the todo list. With a maximum IOPS of 200, every task takes an average of 1/200 of a second to complete.
If a new task comes in, for instance "read at sector xyz", the worst case is that it takes 32/200 = 160 ms before the drive even starts on the new task, because the todo list is already full. On average the task ends up somewhere in the middle, so the average latency becomes 80 ms. That is a really long time, and that's why you don't want a deeper command queue than necessary. But if it's not deep enough, the drive can't do as much work as it could with a longer queue, because it can't reorder the work as much. That's why queue depth has to be optimized for the workload - if you want to squeeze out maximum performance.
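To make that arithmetic concrete, here's the same calculation as a quick Python sanity check - the IOPS and queue-depth figures are just the illustrative ones above:
```python
# Back-of-the-envelope queueing math from the text above.
# Figures are illustrative: a ~200 IOPS spinner with a full 32-deep queue.
def queue_wait_ms(iops: float, queue_depth: int) -> tuple[float, float]:
    """Worst-case and rough average wait (ms) for a request arriving
    behind a full queue, ignoring any gains from reordering."""
    per_op_ms = 1000.0 / iops        # average service time per operation
    worst = queue_depth * per_op_ms  # request lands at the back of the queue
    return worst, worst / 2          # average: lands mid-queue

worst, avg = queue_wait_ms(iops=200, queue_depth=32)
print(f"worst case: {worst:.0f} ms, average: {avg:.0f} ms")
# -> worst case: 160 ms, average: 80 ms
```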
But as you increase the depth of the command queue (QD), besides increasing latency you also get diminishing returns. A slow drive is still slow, and the work will still take a long time - no matter what. For most workloads on mechanical drives, the point of diminishing returns is around QD16 to QD32 - which just happens to be what the SATA interface can handle.
If you have a faster drive, on the other hand, it will complete each item on the todo list in a much shorter time. Take for instance an SSD doing 50,000 IOPS. On average each task then takes 20 microseconds, and you would clear the entire 32-item todo list in 0.64 ms. Now it makes sense to have a deeper command queue, and SAS is an improvement with its roughly 256-deep command queue.
But the real improvement is with NVMe. There you have up to 64K queues (not just one), and each queue can be 64K deep. Because the queues are so deep and the drive is so fast, the I/O scheduler in the OS is turned off for maximum performance; otherwise it would also be in the path, scheduling I/O operations. Now the latency of the kernel becomes an issue too, and basically everything else. With a read IOPS of 600,000, for instance, the drive will complete a read operation on average in under 2 microseconds.
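And the same back-of-the-envelope math for the faster drives, again using only the figures quoted above (real drives and real queues will vary):
```python
# Same arithmetic for the faster drives discussed above; figures are
# the illustrative ones from this post, not measurements.
ssd_iops = 50_000
per_op_us = 1_000_000 / ssd_iops     # 20 microseconds per operation
print(f"SSD clears a 32-item list in {32 * per_op_us / 1000:.2f} ms")   # 0.64 ms
print(f"SAS 256-deep queue drains in {256 * per_op_us / 1000:.2f} ms")  # 5.12 ms

nvme_iops = 600_000
print(f"NVMe average read: {1_000_000 / nvme_iops:.2f} us")             # ~1.67 us
```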
Besides all this, you have to take the workload into account. How much I/O do you actually have? For the limited 32-deep SATA command queue to be improved upon by the deeper queue of a SAS disk, you have to have a workload that generates enough outstanding I/O operations in the first place. A file server is unlikely to do this unless you have some special application writing to it constantly. A heavily loaded SQL database could be another matter - but you wouldn't be running that on spinning disks today.
-
@scottalanmiller said in Window server standard edition on Hyper V- means two Wins VMs ?:
Well, that it was six years ago doesn't affect anything; there's been no change to the technology. If it was true then, it would be true now.
"Imagine" and "can" that was expressed as an opinion 6 years ago doesn't mean anything because there were no proof to back anything up. There were no written assumptions as to what scenario this could happen. Were the author talking about SSDs for instance? Or maybe he was assuming based on test with consumer SATA versus enterprise SSDs.
Technology-wise, what has happened is that drives are faster today, and the electronics inside the drives as well; densities and capacities are higher, and the computers and controller cards are faster too. And flash has killed 2.5" spinning rust completely.
-
@Pete-S said in Window server standard edition on Hyper V- means two Wins VMs ?:
Technology-wise, what has happened is that drives are faster today, and the electronics inside the drives as well; densities and capacities are higher, and the computers and controller cards are faster too.
This isn't really true. Drives aren't really faster today; spinners on average are actually slower. No advancement has been made in top-end speed, and averages have fallen.
Densities are higher, but not by much. Computers and controllers really haven't gotten much faster - mostly more parallelism, and that doesn't really benefit the storage layer.
Really, from six years ago until today, the landscape is all but identical.
-
@Pete-S said in Window server standard edition on Hyper V- means two Wins VMs ?:
And flash has killed 2.5" spinning rust completely.
Not really. Neither in servers nor in end-user devices. For us IT people, we might feel that way. But in the real world it's still a struggle to get clients to buy SSDs, and the average machines we see are still about what they were.
-
@scottalanmiller said in Window server standard edition on Hyper V- means two Wins VMs ?:
Not really. Neither in servers nor in end-user devices. For us IT people, we might feel that way. But in the real world it's still a struggle to get clients to buy SSDs, and the average machines we see are still about what they were.
Agreed - the cost of SSD is still massive compared to that of HDD, MB for MB, and if you aren't loading a single host up with 80+ VMs, you likely don't need the performance of SSD for your SMB. Of course, if you do, then it's pretty likely you also have the budget for full-on SSD arrays, etc.
-
@openit said in Window server standard edition on Hyper V- means two Wins VMs ?:
I don't want to go with Hyper-V as the base. I was using it earlier for something else and had issues connecting to the server with Hyper-V Manager on Windows 10 - it would work, and then after a few days there would be different issues.
For this I use 5nine Manager; it works every time for me with Hyper-V hosts.