HyperV Partitioning
-
Ah, you're looking for a walkthrough, not advice; sorry I missed that part.
It should be pretty straightforward, as you install right from the Hyper-V installer just like regular Windows. Is there a specific thing you're having trouble with?
-
@joel said in HyperV Partitioning:
This is how their tech team have requested it be configured.
Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.
As for the partitioning, this is something you'd have to do at the RAID controller, assuming it supports this.
-
So I can create the VMs no problem.
But I'm not so clear on how I give the VMs a C:\ drive from part of the R1 volume. I think I'm overcomplicating the best way to get it done.
-
@joel said in HyperV Partitioning:
So I can create the VMs no problem.
But I'm not so clear on how I give the VMs a C:\ drive from part of the R1 volume. I think I'm overcomplicating the best way to get it done.
You would simply create a new disk and designate it as coming from that array (on that VM). On XenServer or ESXi you can specify which storage to use when creating disks; I'm almost 100% positive this is doable on Hyper-V as well.
-
https://www.altaro.com/hyper-v/hyper-v-small-business-sample-host-builds/
I’ve only set up Hyper-V one way: one big RAID 10, then create two partitions from the hypervisor.
-
When you create the VM in Hyper-V Manager, it asks you where you want to create the .vhdx disk for the VM... choose the C: drive of the host. Then after the VM is created, you can create another disk for the VM on the RAID 10 in the VM settings.
As for what you are doing, your tech team is almost 100% wrong and they don't understand virtualization. You need to send them here.
-
@tim_g said in HyperV Partitioning:
When you create the VM in Hyper-V Manager, it asks you where you want to create the .vhdx disk for the VM... choose the C: drive of the host. Then after the VM is created, you can create another disk for the VM on the RAID 10 in the VM settings.
As for what you are doing, your tech team is almost 100% wrong and they don't understand virtualization. You need to send them here.
I'm specifically talking with @Joel in PM, unfortunately he is outsourced IT in this case, and the client wants this system yesterday.
He understands what is wrong, and I've guided him to get things to be inline with what we'd do.
-
Your arrays should show up in Hyper-V with drive letters, just like in normal Windows. Like @Tim_G said, just pick which folder you want to store the VHDs in.
-
@dustinb3403 said in HyperV Partitioning:
@joel said in HyperV Partitioning:
This is how their tech team have requested it be configured.
Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.
As for the partitioning, this is something you'd have to do at the RAID controller, assuming it supports this.
2x1 in R1 = 1TB usable
4x2 in R10 = 4TB usable
Total = 5TB usable.
In OBR10, all disks would drop to the smallest size available, so those 2TB drives can only contribute 1TB each, right?
So in OBR10, that's really only 6 x 1TB, which is 3TB usable.
Perhaps they understand OBR10, but can only use the disks they have, and need more than 3TB.
Splitting is not a good thing, but if that's all they have, well... it's all they have.
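The capacity math here can be checked with a quick sketch (drive counts and sizes are the ones quoted in this thread; the function names are just for illustration):

```python
def raid1_usable(drive_sizes_tb):
    """RAID 1 usable capacity: the size of the smallest mirrored member."""
    return min(drive_sizes_tb)

def raid10_usable(drive_sizes_tb):
    """RAID 10 usable capacity: every member is truncated to the smallest
    drive in the array, and half the total is lost to mirroring."""
    return min(drive_sizes_tb) * len(drive_sizes_tb) / 2

# Split arrays, as the tech team requested: 2x1TB in R1 plus 4x2TB in R10.
split = raid1_usable([1, 1]) + raid10_usable([2, 2, 2, 2])

# OBR10 across all six mixed drives: the 2TB disks shrink to 1TB each.
obr10_mixed = raid10_usable([1, 1, 2, 2, 2, 2])

# OBR10 with six matched 2TB drives instead of keeping the 1TB pair.
obr10_matched = raid10_usable([2, 2, 2, 2, 2, 2])

print(split, obr10_mixed, obr10_matched)  # 5.0 3.0 6.0
```

So the split layout does yield 5TB against OBR10's 3TB with these disks, while six matched 2TB drives in OBR10 would beat both at 6TB.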
-
@jimmy9008 said in HyperV Partitioning:
@dustinb3403 said in HyperV Partitioning:
@joel said in HyperV Partitioning:
This is how their tech team have requested it be configured.
Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.
As for the partitioning, this is something you'd have to do at the RAID controller, assuming it supports this.
2x1 in R1 = 1TB usable
4x2 in R10 = 4TB usable
Total = 5TB usable.
In OBR10, all disks would drop to the smallest size available, so those 2TB drives can only contribute 1TB each, right?
So in OBR10, that's really only 6 x 1TB, which is 3TB usable.
Perhaps they understand OBR10, but can only use the disks they have, and need more than 3TB.
Splitting is not a good thing, but if that's all they have, well... it's all they have.
You would instead never have purchased the 1TB drives, and gotten two more 2TB drives to make a 6-drive RAID 10 for a total of 6TB usable. 2TB spinning rust is pretty close to the same cost as 1TB.
-
@tim_g said in HyperV Partitioning:
@jimmy9008 said in HyperV Partitioning:
@dustinb3403 said in HyperV Partitioning:
@joel said in HyperV Partitioning:
This is how their tech team have requested it be configured.
Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.
As for the partitioning, this is something you'd have to do at the RAID controller, assuming it supports this.
2x1 in R1 = 1TB usable
4x2 in R10 = 4TB usable
Total = 5TB usable.
In OBR10, all disks would drop to the smallest size available, so those 2TB drives can only contribute 1TB each, right?
So in OBR10, that's really only 6 x 1TB, which is 3TB usable.
Perhaps they understand OBR10, but can only use the disks they have, and need more than 3TB.
Splitting is not a good thing, but if that's all they have, well... it's all they have.
You would instead never have purchased the 1TB drives, and gotten two more 2TB drives to make a 6-drive RAID 10 for a total of 6TB usable. 2TB spinning rust is pretty close to the same cost as 1TB.
To further expand on this: when installing Hyper-V Server, you would create your partitions then. A 60GB C: partition, and a D: partition for the remaining space.
-
@tim_g said in HyperV Partitioning:
@tim_g said in HyperV Partitioning:
@jimmy9008 said in HyperV Partitioning:
@dustinb3403 said in HyperV Partitioning:
@joel said in HyperV Partitioning:
This is how their tech team have requested it be configured.
Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.
As for the partitioning, this is something you'd have to do at the RAID controller, assuming it supports this.
2x1 in R1 = 1TB usable
4x2 in R10 = 4TB usable
Total = 5TB usable.
In OBR10, all disks would drop to the smallest size available, so those 2TB drives can only contribute 1TB each, right?
So in OBR10, that's really only 6 x 1TB, which is 3TB usable.
Perhaps they understand OBR10, but can only use the disks they have, and need more than 3TB.
Splitting is not a good thing, but if that's all they have, well... it's all they have.
You would instead never have purchased the 1TB drives, and gotten two more 2TB drives to make a 6-drive RAID 10 for a total of 6TB usable. 2TB spinning rust is pretty close to the same cost as 1TB.
To further expand on this: when installing Hyper-V Server, you would create your partitions then. A 60GB C: partition, and a D: partition for the remaining space.
You missed my point.
-
@dustinb3403 said in HyperV Partitioning:
Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.
Running VMs on the same partition as your hypervisor, and having noisy-neighbor issues impact the hypervisor's ability to perform, can cause interesting race conditions. Now if your hypervisor is embedded (can run from RAM once loaded) this isn't a big deal, but in the case of Hyper-V (which has a god-awful huge footprint) I wouldn't call this a bad idea.
-
@storageninja said in HyperV Partitioning:
@dustinb3403 said in HyperV Partitioning:
Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.
Running VMs on the same partition as your hypervisor, and having noisy-neighbor issues impact the hypervisor's ability to perform, can cause interesting race conditions. Now if your hypervisor is embedded (can run from RAM once loaded) this isn't a big deal, but in the case of Hyper-V (which has a god-awful huge footprint) I wouldn't call this a bad idea.
So there's no misunderstanding: I'm using the terms "above" and "below" as in, hardware is at the bottom and VMs are at the top.
In Hyper-V, the hypervisor (Ring -1 (minus one)) runs below the Windows kernel (Ring 0). Hyper-V needs higher privilege than Ring 0, and needs dedicated access to the hardware. So it goes Ring 3 (VMs) --> Ring 0 (Kernel Mode (VM BUS, VSP, Drivers)) --> Ring -1 (hypervisor (hyper-v)) --> Physical hardware.
Ring -1 (the hyper-v hypervisor) sits below the Windows Kernel, controlling all access to physical components.
Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.
The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?
To say you can run a VM on the same partition as the hypervisor is wrong. You can't do it.
Nobody is suggesting to stash a VM on the same partition as the hypervisor. What we are saying is to have one big RAID 10, with multiple partitions on it. And if one VM is so busy it's slowing down the rest... then that needs to be addressed separately. Nothing like that was mentioned.
This disk race condition is hypervisor agnostic, and happens between two or more VMs if one is too noisy.
If you have a super-busy, high disk I/O VM running on the same physical disk as another VM, it's going to slow down the other VM for sure unless you enable QoS.
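To put a number on that, here is a toy proportional-share model of two VMs contending for one array's IOPS. This is only a sketch of the noisy-neighbor effect, not how Hyper-V's Storage QoS is actually implemented; the function name and all the figures are made up for illustration.

```python
def share_iops(demands, disk_capacity_iops, caps=None):
    """Toy model: each VM's demand is first limited by its per-VM cap
    (if any); if the capped demands still exceed what the disk can
    deliver, the available IOPS are split proportionally to them."""
    caps = caps or [None] * len(demands)
    capped = [min(d, c) if c is not None else d for d, c in zip(demands, caps)]
    total = sum(capped)
    if total <= disk_capacity_iops:
        return capped  # the disk can satisfy everyone as-is
    return [disk_capacity_iops * d / total for d in capped]

# A 10,000 IOPS array: one noisy VM demanding 9,000 IOPS beside a quiet
# VM demanding 2,000. Without QoS, the quiet VM gets squeezed:
print(share_iops([9000, 2000], 10000))  # ≈ [8181.8, 1818.2]

# Cap the noisy VM at 5,000 IOPS and both workloads fit:
print(share_iops([9000, 2000], 10000, caps=[5000, None]))  # [5000, 2000]
```

Under this model the quiet VM drops from 2,000 to roughly 1,818 IOPS when its neighbor runs uncapped; real contention depends on the controller and scheduler, but the shape of the problem is the same.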
-
@tim_g said in HyperV Partitioning:
Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.
The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?
If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs, the same way I can restart management agents on ESXi or KVM, right? If Hyper-V is handling the network and storage traffic 100%, then surely it must have its own driver stack and not be dependent on the management VM for these functions, right?
Unless this has changed, you lost every VM on a host from a simple reboot of the management VM previously.
-
@tim_g said in HyperV Partitioning:
This disk race condition is hypervisor agnostic, and happens between two or more VMs if one is too noisy.
The race condition happens because of IO components running on top of the lower level, and if they lose communication with the scheduler you get a race condition (this is arguably 10x worse on VSA systems, though). This is far more of an issue in systems that have IO pass-through VMs than in ones where the IO/networking driver stack is 100% in the hypervisor.
-
@storageninja said in HyperV Partitioning:
@tim_g said in HyperV Partitioning:
Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.
The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?
If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs, the same way I can restart management agents on ESXi or KVM, right? If Hyper-V is handling the network and storage traffic 100%, then surely it must have its own driver stack and not be dependent on the management VM for these functions, right?
Unless this has changed, you lost every VM on a host from a simple reboot of the management VM previously.
You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.
-
@travisdh1 said in HyperV Partitioning:
You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.
My point is that things in the IO path go through that VM. They didn't want to write a full IO driver stack for Hyper-V so they have the VM for that. Compute/Memory doesn't go through it (that I know of), but network and disk IO do. (Otherwise Perfmon wouldn't work as a monitoring solution on the host).
AFAIK only ESXi uses a microkernel with a fully isolated management agent plane (it's actually just a BusyBox shell).
-
@travisdh1 said in HyperV Partitioning:
@storageninja said in HyperV Partitioning:
@tim_g said in HyperV Partitioning:
Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.
The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?
If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs, the same way I can restart management agents on ESXi or KVM, right? If Hyper-V is handling the network and storage traffic 100%, then surely it must have its own driver stack and not be dependent on the management VM for these functions, right?
Unless this has changed, you lost every VM on a host from a simple reboot of the management VM previously.
You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.
XenServer does the same thing too.
-
@black3dynamite said in HyperV Partitioning:
@travisdh1 said in HyperV Partitioning:
@storageninja said in HyperV Partitioning:
@tim_g said in HyperV Partitioning:
Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.
The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?
If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs, the same way I can restart management agents on ESXi or KVM, right? If Hyper-V is handling the network and storage traffic 100%, then surely it must have its own driver stack and not be dependent on the management VM for these functions, right?
Unless this has changed, you lost every VM on a host from a simple reboot of the management VM previously.
You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.
XenServer does the same thing too.
Where did I claim that anyone gets it right? Looks to me like only ESXi gets it right on this particular issue.