Sizing a Server and Disks - SQL VM
-
@dashrender said in Sizing a Server and Disks - SQL VM:
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
The OS still needs to assign a letter to use the drive. . .
Sure - but so?
Oh and that's not true. Windows has supported mount points for a while now. I know I did it as a test more than 5 years ago.. hell, maybe more than 10.
It's been around since Server 2012 IIRC. They didn't work well in 2012 but have been working really well in R2 and 2016.
-
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
@dashrender hrm. . . I might need to do some digging on that.
-
I've never heard that, wow I feel bad now.
Good to know for the future.
-
We have a Hyper-V host with two tiers of storage: an all SSD RAID, and an all HDD RAID.
When I set up the MS SQL server (mainly for MS Dynamics purposes, but it also serves some other critical business functions), I had to do it according to what the Dynamics consultant suggested:
- MS SQL VM: (virtual disk (OS drive letter))
  - serv-SQL.vhdx (C:)
  - serv-SQL-DATA.vhdx (D:)
  - serv-SQL-LOG.vhdx (E:)
  - serv-SQL-BACKUP.vhdx (F:)
The D and E virtual disks are located on the SSD RAID on the physical host; the other two are on the HDD RAID.
The stuff needing to be fast, like the TempDB, log, and main DB, is all on SSD, while the backups and the OS are on the HDD RAID.
I do have SSD caching for the HDD RAID, so the other stuff is actually sped up, though not 100% of the time. Most writes are, anyway.
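Purely as an illustration, that placement can be written down and sanity-checked as a tiny sketch. The VHDX names and tiers are the ones above; the latency flags and the check are assumptions added for the example, nothing more.
```python
# Sketch of the tier placement described above (Python).
# VHDX names and tiers come from the post; the latency_sensitive flags
# and this check are illustrative assumptions, not part of the real setup.

LAYOUT = {
    "serv-SQL.vhdx":        {"drive": "C:", "tier": "HDD RAID", "latency_sensitive": False},  # OS
    "serv-SQL-DATA.vhdx":   {"drive": "D:", "tier": "SSD RAID", "latency_sensitive": True},   # DB + TempDB
    "serv-SQL-LOG.vhdx":    {"drive": "E:", "tier": "SSD RAID", "latency_sensitive": True},   # transaction log
    "serv-SQL-BACKUP.vhdx": {"drive": "F:", "tier": "HDD RAID", "latency_sensitive": False},  # backups
}

# Anything latency-sensitive that landed on spinning disk is worth a second look.
misplaced = [name for name, d in LAYOUT.items()
             if d["latency_sensitive"] and d["tier"] != "SSD RAID"]
print("Latency-sensitive disks not on SSD:", misplaced or "none")
```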
-
Sorry peeps, things got a bit crazy at work and I've been busy with house stuff.
Will try and go through the thread in the morning and answer what I can. I will say the guide I linked doesn't seem to be geared to virtualization.
@Tim_G's setup is what I was thinking of, with the split arrays and separate VMDKs.
-
@hobbit666 said in Sizing a Server and Disks - SQL VM:
I will say the guide I linked doesn't seem to be geared to virtualization.
Wouldn't need to be. There hasn't been a case where you should have a physical database in a VERY long time; databases were among the first workloads to go 100% virtual. And even so, the storage considerations for a database are not impacted by physical vs. virtual; they're always the same.
-
@tim_g said in Sizing a Server and Disks - SQL VM:
We have a Hyper-V host with two tiers of storage: an all SSD RAID, and an all HDD RAID.
When I set up the MS SQL server (mainly for MS Dynamics purposes, but it also serves some other critical business functions), I had to do it according to what the Dynamics consultant suggested:
- MS SQL VM: (virtual disk (OS drive letter))
  - serv-SQL.vhdx (C:)
  - serv-SQL-DATA.vhdx (D:)
  - serv-SQL-LOG.vhdx (E:)
  - serv-SQL-BACKUP.vhdx (F:)
The D and E virtual disks are located on the SSD RAID on the physical host; the other two are on the HDD RAID.
The stuff needing to be fast, like the TempDB, log, and main DB, is all on SSD, while the backups and the OS are on the HDD RAID.
I do have SSD caching for the HDD RAID, so the other stuff is actually sped up, though not 100% of the time. Most writes are, anyway.
A lot of that is primarily offset by RAM, anyway.
-
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
FYI nothing in your OP states the type of drives, so we have to make an assumption based on the drawings.
But if you are using SSDs, unless you need some really insane IOPS, use OBR5; you get more storage and it is more than reliable enough.
If using HDDs, use RAID10.
Obviously all of the usual conditions apply with both (RAID 5 SSD included): don't use consumer gear, enable monitoring, replace equipment when it fails, etc.
Well that's the thing. With the requirement of SQL, is it better to go full SSD? If so, we will price it up. If that's too many ££££, then we will look at splitting the array into two like the setup @Tim_G has.
-
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
Separate VMDKs are never separate RAIDs. They are recommending different arrays for each.
They are wrong and this is ridiculously horrible guidance, but that is what they mean. What you are seeing is a 1990s guide regurgitated by someone non-technical who parroted back a "rule of thumb" based on the assumption of using spinning disks, with RAID 5, without cache - basically, a run-of-the-mill, physical, 1998 install.
Whatever guide this is, it hasn't matched any product in the real world for nearly two decades.
You say that, but the document is dated 2017?
The problem I have (and this is not a dig at you, it's more what I observe from our MSP and others in the Dept) is that without forums like this and people in the real world, how would we know this is bad??? It's a Microsoft document giving advice on their product.
So I now have to convince my manager and the board that what M$ are saying in their guide is wrong.
-
@scottalanmiller said in Sizing a Server and Disks - SQL VM:
Definitely not. You should "never" partition today. If you want partitions, that means that you actually wanted volumes. Partitions are effectively a dead technology - an "after the fact" kludge that exists for cases where voluming wasn't an option - which should never be the case today as this is solved universally. Partitions are fragile and difficult to manage and have many fewer options and less flexibility. They have no benefits, which is why they are a dead technology.
Partitions exist today only for physical Windows installs, where there is no hypervisor and no enterprise volume manager to do the work - in essence, they are for "never".
But with the recommended setup for SQL having separate drives (in Windows) for logs, TempDB, backup, etc., should the rule be separate VMDK disks?
Like
vmdk1 = OS
vmdk2 = Logs
vmdk3 = TempDB
-
@dashrender said in Sizing a Server and Disks - SQL VM:
So what the OP needs to do is get IOPs requirements of his environment, and build toward that.
This is where I hold my hands up. I have no idea how to measure this, as in what we use now, or how to calculate what we need for the future.
-
@hobbit666 said in Sizing a Server and Disks - SQL VM:
But with the recommended setup for SQL having separate drives (in Windows) for logs, TempDB, backup, etc., should the rule be separate VMDK disks?
Like
vmdk1 = OS
vmdk2 = Logs
vmdk3 = TempDB
Reading some of the later posts, this might not be the case?
-
@tim_g said in Sizing a Server and Disks - SQL VM:
We have a Hyper-V host with two tiers of storage: an all SSD RAID, and an all HDD RAID.
When I set up the MS SQL server (mainly for MS Dynamics purposes, but it also serves some other critical business functions), I had to do it according to what the Dynamics consultant suggested:
- MS SQL VM: (virtual disk (OS drive letter))
  - serv-SQL.vhdx (C:)
  - serv-SQL-DATA.vhdx (D:)
  - serv-SQL-LOG.vhdx (E:)
  - serv-SQL-BACKUP.vhdx (F:)
The D and E virtual disks are located on the SSD RAID on the physical host; the other two are on the HDD RAID.
The stuff needing to be fast, like the TempDB, log, and main DB, is all on SSD, while the backups and the OS are on the HDD RAID.
I do have SSD caching for the HDD RAID, so the other stuff is actually sped up, though not 100% of the time. Most writes are, anyway.
How have you got your disks split? Is it 50/50 SSD/spinning?
-
@hobbit666 said in Sizing a Server and Disks - SQL VM:
@dashrender said in Sizing a Server and Disks - SQL VM:
So what the OP needs to do is get IOPs requirements of his environment, and build toward that.
This is where I hold my hands up. I have no idea how to measure this, as in what we use now, or how to calculate what we need for the future.
DPACK is now easy to get, and easy to get the report from.
That will measure your current IOPS load. You can make some guesswork from there for your growth.
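To make that "guesswork from there for your growth" step concrete, here is one way to project a measured figure forward. All the numbers below are placeholders, not real measurements; plug in your own DPACK peak and your own growth assumptions.
```python
# Rough sizing sketch (Python): project a measured peak IOPS figure forward.
# All values below are placeholder assumptions - substitute your own
# DPACK results and growth estimates.

measured_peak_iops = 1200   # e.g. the peak figure a DPACK report gives you
annual_growth      = 0.20   # assumed 20% workload growth per year
years              = 3      # planning horizon
headroom           = 1.3    # assumed 30% safety margin

required_iops = measured_peak_iops * (1 + annual_growth) ** years * headroom
print(f"Plan the new array for roughly {required_iops:,.0f} IOPS")
```
-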
@hobbit666 said in Sizing a Server and Disks - SQL VM:
@tim_g said in Sizing a Server and Disks - SQL VM:
We have a Hyper-V host with two tiers of storage: an all SSD RAID, and an all HDD RAID.
When I set up the MS SQL server (mainly for MS Dynamics purposes, but it also serves some other critical business functions), I had to do it according to what the Dynamics consultant suggested:
- MS SQL VM: (virtual disk (OS drive letter))
  - serv-SQL.vhdx (C:)
  - serv-SQL-DATA.vhdx (D:)
  - serv-SQL-LOG.vhdx (E:)
  - serv-SQL-BACKUP.vhdx (F:)
The D and E virtual disks are located on the SSD RAID on the physical host; the other two are on the HDD RAID.
The stuff needing to be fast, like the TempDB, log, and main DB, is all on SSD, while the backups and the OS are on the HDD RAID.
I do have SSD caching for the HDD RAID, so the other stuff is actually sped up, though not 100% of the time. Most writes are, anyway.
How have you got your disks split? Is it 50/50 SSD/spinning?
The split should be based upon storage and IOPS needs.
-
@hobbit666 said in Sizing a Server and Disks - SQL VM:
So I now have to convince my manager and the board that what M$ are saying in their guide is wrong.
This is simple: show your board and manager literally any other Microsoft document. It is bound to contradict itself or other documents at least once.
The people who write for Microsoft are authors, not technical people in any way or shape. They often have no clue at all.
-
@hobbit666 said in Sizing a Server and Disks - SQL VM:
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
FYI nothing in your OP states the type of drives, so we have to make an assumption based on the drawings.
But if you are using SSDs, unless you need some really insane IOPS, use OBR5; you get more storage and it is more than reliable enough.
If using HDDs, use RAID10.
Obviously all of the usual conditions apply with both (RAID 5 SSD included): don't use consumer gear, enable monitoring, replace equipment when it fails, etc.
Well that's the thing. With the requirement of SQL, is it better to go full SSD? If so, we will price it up. If that's too many ££££, then we will look at splitting the array into two like the setup @Tim_G has.
Measure what you need for IOPS; if it is more than you can get out of an all-HDD OBR10 array, then yeah, you'll have to split them.
Generally, though, you aren't going to need such a huge boost in performance; otherwise you'd already know about it.
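To put a rough number on "what you can get out of an all-HDD OBR10 array", the usual back-of-the-envelope formula is below. The per-drive IOPS figures and the 70/30 read/write mix are assumptions; write penalties of 2 for RAID 10 and 4 for RAID 5 are the standard rule-of-thumb values, not measurements of your hardware.
```python
# Back-of-the-envelope array IOPS (Python). Per-drive IOPS and the 70/30
# read/write mix are assumptions; write penalties (RAID 10 = 2, RAID 5 = 4)
# are the usual rule-of-thumb values.

def effective_iops(drives, iops_per_drive, read_ratio, write_penalty):
    raw = drives * iops_per_drive                      # total back-end IOPS
    write_ratio = 1 - read_ratio
    # Each front-end write costs 'write_penalty' back-end IOs; reads cost one.
    return raw / (read_ratio + write_ratio * write_penalty)

print(f"8 x 7.2k HDD, RAID 10: ~{effective_iops(8, 75, 0.70, 2):.0f} IOPS")
print(f"8 x SSD, RAID 5:       ~{effective_iops(8, 5000, 0.70, 4):.0f} IOPS")
```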
-
@hobbit666 said in Sizing a Server and Disks - SQL VM:
@dashrender said in Sizing a Server and Disks - SQL VM:
So what the OP needs to do is get IOPs requirements of his environment, and build toward that.
This is where I hold my hands up. I have no idea how to measure this, as in what we use now, or how to calculate what we need for the future.
Just run a Dell DPACK scan for 3 or 4 days against your servers. You don't want to measure something at just one specific time as you wouldn't get a real view of the results.
-
@hobbit666 said in Sizing a Server and Disks - SQL VM:
@hobbit666 said in Sizing a Server and Disks - SQL VM:
But with the recommended setup for SQL having separate drives (in Windows) for logs, TempDB, backup, etc., should the rule be separate VMDK disks?
Like
vmdk1 = OS
vmdk2 = Logs
vmdk3 = TempDB
Reading some of the later posts, this might not be the case?
You would still create separate virtual disks and attach them to the VM. But you can use mount points instead of connecting the disks to the server as E:, F:, etc.
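For anyone who hasn't used them: a mount point is just an empty NTFS folder a volume gets attached to, so SQL ends up with paths like C:\SQLMounts\Data instead of D:\. The folder names and the volume GUID below are purely hypothetical placeholders; this is a sketch of the idea, not a tested script.
```python
# Hypothetical sketch (Python): attach a data volume to a folder instead of a
# drive letter. Folder names and the GUID are placeholders - get the real
# volume name by running 'mountvol' with no arguments and reading the list.
import os
import subprocess

MOUNT_ROOT = r"C:\SQLMounts"                      # hypothetical mount-point root
for name in ("Data", "Log", "TempDB", "Backup"):  # one empty folder per virtual disk
    os.makedirs(os.path.join(MOUNT_ROOT, name), exist_ok=True)

# mountvol <folder> <volume name> creates the mount point (Windows built-in).
volume_name = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"  # placeholder GUID
subprocess.run(["mountvol", os.path.join(MOUNT_ROOT, "Data"), volume_name], check=True)

# SQL's data/log/tempdb paths then point at C:\SQLMounts\... with no extra drive letters used.
```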
-
@dustinb3403 said in Sizing a Server and Disks - SQL VM:
@hobbit666 said in Sizing a Server and Disks - SQL VM:
@dashrender said in Sizing a Server and Disks - SQL VM:
So what the OP needs to do is get IOPs requirements of his environment, and build toward that.
This is where I hold my hands up. I have no idea how to measure this, as in what we use now, or how to calculate what we need for the future.
Just run a Dell DPACK scan for 3 or 4 days against your servers. You don't want to measure something at just one specific time as you wouldn't get a real view of the results.
The longer the better. For example, some companies have a process that only runs monthly, so if you're not running DPACK at that time, you could miss a high load time.