
    HyperV Partitioning

    IT Discussion
    • bnrstnr

      Your arrays should show up in Hyper-V with drive letters, just like in normal Windows. Like @Tim_G said, just pick which folder you want to store the VHDs in.

      • Jimmy9008 @DustinB3403

        @dustinb3403 said in HyperV Partitioning:

        @joel said in HyperV Partitioning:

        This is how their tech team have requested it be configured.

        Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.

        As for the partitioning, this is something you'd have to do at the RAID controller, assuming it supports this.

        2 x 1TB in RAID 1 = 1TB usable
        4 x 2TB in RAID 10 = 4TB usable
        Total = 5TB usable.

        OBR10, all disks would drop to the smallest size available, so those 2TB disks can only be used as 1TB, right?
        So in OBR10, that's now really only 6 x 1TB, which is 3TB usable.

        Perhaps they understand OBR10, but can only use the disks they have, and need more than 3TB.

        Splitting is not a good thing, but if that's all they have, well... it's all they have.
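The capacity arithmetic being debated here can be sketched in a few lines (sizes in TB; this assumes, as the posts do, that a mixed-size RAID 10 truncates every member to the smallest disk, and that `raid1_usable`/`raid10_usable` are names made up for this sketch, not any real tool):

```python
def raid1_usable(disks):
    # RAID 1 mirrors everything: usable space is the smallest member
    return min(disks)

def raid10_usable(disks):
    # RAID 10 stripes across mirrored pairs: every member is truncated
    # to the smallest disk, and half the set is mirror overhead
    assert len(disks) % 2 == 0, "RAID 10 needs an even number of disks"
    return (len(disks) // 2) * min(disks)

split = raid1_usable([1, 1]) + raid10_usable([2, 2, 2, 2])  # 1 + 4 = 5 TB
obr10_mixed = raid10_usable([1, 1, 2, 2, 2, 2])             # 3 TB
obr10_uniform = raid10_usable([2, 2, 2, 2, 2, 2])           # 6 TB
```

Which is exactly the trade-off in the thread: the split layout yields 5TB, one big array over the mixed disks only 3TB, and six uniform 2TB disks would have given 6TB.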

        • Obsolesce @Jimmy9008

          @jimmy9008 said in HyperV Partitioning:

          OBR10, all disks would drop to the smallest size available, so those 2TB disks can only be used as 1TB, right?
          So in OBR10, that's now really only 6 x 1TB, which is 3TB usable.

          Perhaps they understand OBR10, but can only use the disks they have, and need more than 3TB.

          Splitting is not a good thing, but if that's all they have, well... it's all they have.

          You would instead never have purchased the 1TB drives, and would have gotten two more 2TB drives to make a 6-drive RAID10 for a total of 6TB usable. 2TB spinning rust is pretty close to the same cost as 1TB.

          • Obsolesce

            @tim_g said in HyperV Partitioning:

            You would instead never have purchased the 1TB drives, and would have gotten two more 2TB drives to make a 6-drive RAID10 for a total of 6TB usable. 2TB spinning rust is pretty close to the same cost as 1TB.

            To further expand on this: when installing Hyper-V Server, you would create your partitions then. A 60GB C partition, and a D partition for the remaining space.

            • Jimmy9008 @Obsolesce

              @tim_g said in HyperV Partitioning:


              To further expand on this: when installing Hyper-V Server, you would create your partitions then. A 60GB C partition, and a D partition for the remaining space.

              You missed my point.

              • StorageNinja @DustinB3403

                @dustinb3403 said in HyperV Partitioning:

                Their tech team doesn't understand the benefits of OBR10 then and why splitting arrays like this was never a good thing.

                Running VMs on the same partition as your hypervisor, and having noisy-neighbor issues impact the hypervisor's ability to perform, can cause interesting race conditions. Now if your hypervisor is embedded (it can run from RAM once loaded) this isn't a big deal, but in the case of Hyper-V (which has a god-awful huge footprint) I wouldn't call this a bad idea.

                • Obsolesce @StorageNinja

                  @storageninja said in HyperV Partitioning:


                  Running VMs on the same partition as your hypervisor, and having noisy-neighbor issues impact the hypervisor's ability to perform, can cause interesting race conditions. Now if your hypervisor is embedded (it can run from RAM once loaded) this isn't a big deal, but in the case of Hyper-V (which has a god-awful huge footprint) I wouldn't call this a bad idea.

                  So there's no misunderstanding: I'm using the terms "above" and "below" as in, hardware is at the bottom and VMs are at the top.

                  In Hyper-V, the hypervisor (Ring -1 (minus one)) runs below the Windows kernel (Ring 0). Hyper-V needs higher privilege than Ring 0, and needs dedicated access to the hardware. So it goes Ring 3 (VMs) --> Ring 0 (Kernel Mode (VM BUS, VSP, Drivers)) --> Ring -1 (hypervisor (hyper-v)) --> Physical hardware.

                  Ring -1 (the hyper-v hypervisor) sits below the Windows Kernel, controlling all access to physical components.

                  Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.

                  The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?

                  To say you can run a VM on the same partition as the hypervisor is wrong. You can't do it.

                  Nobody is suggesting to stash a VM on the same partition as the hypervisor. What we are saying is to have one big RAID 10 with multiple partitions on it. And if one VM is so busy it's slowing down the rest... then that needs to be addressed separately. Nothing like that was mentioned.

                  This disk race condition is hypervisor agnostic, and happens between two or more VMs if one is too noisy.

                  If you have a super-busy, high disk I/O VM running on the same physical disks as another VM, it's going to slow down the other VM for sure unless you enable QoS.
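As a toy illustration of that last point (this is not any hypervisor's real scheduler: `share_iops` and its proportional-share model are made up for this sketch), capping the noisy VM is what protects the quiet one:

```python
def share_iops(total_iops, demands, caps=None):
    """Split a disk's IOPS budget between VMs in proportion to demand,
    optionally clamping each VM to a per-VM QoS cap first."""
    if caps is not None:
        demands = [min(d, c) for d, c in zip(demands, caps)]
    total_demand = sum(demands)
    # If total demand exceeds the disk, everyone is scaled down equally
    scale = min(1.0, total_iops / total_demand) if total_demand else 0.0
    return [d * scale for d in demands]

# A 10k IOPS disk shared by a noisy VM (wants 18k) and a quiet VM (wants 2k)
no_qos = share_iops(10_000, [18_000, 2_000])
# -> [9000.0, 1000.0]: the quiet VM is starved to half its demand
with_qos = share_iops(10_000, [18_000, 2_000], caps=[8_000, 10_000])
# -> [8000.0, 2000.0]: the quiet VM gets everything it asked for
```

Under this (simplified) model, without a cap the noisy neighbor crowds the quiet VM out of most of the disk; with a cap the quiet VM's full demand fits in the remaining budget.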

                  • StorageNinja @Obsolesce

                    @tim_g said in HyperV Partitioning:

                    Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do, is manage the VMs using various components.
                    The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?

                    If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs in the same way I can restart management agents on ESXi or KVM right? If Hyper-V is handling the network and storage traffic 100% then surely it must have its own driver stack, and not be dependent on the management VM for these functions, right?
                    Unless this has changed, you lost every VM on a host from a simple reboot of the management VM previously.

                    • StorageNinja @Obsolesce

                      @tim_g said in HyperV Partitioning:

                      This disk race condition is hypervisor agnostic, and happens between two or more VMs if one is too noisy.

                      The race condition happens because of IO components running on top of the lower level; if they lose communication with the scheduler you get a race condition (this is arguably 10x worse on VSA systems, though). It is far more of an issue in systems that have IO pass-through VMs than in ones where the IO/networking driver stack is 100% in the hypervisor.

                      • travisdh1 @StorageNinja

                        @storageninja said in HyperV Partitioning:

                        If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs in the same way I can restart management agents on ESXi or KVM right? If Hyper-V is handling the network and storage traffic 100% then surely it must have its own driver stack, and not be dependent on the management VM for these functions, right?
                        Unless this has changed, you lost every VM on a host from a simple reboot of the management VM previously.

                        You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.

                        • StorageNinja @travisdh1

                          @travisdh1 said in HyperV Partitioning:

                          You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.

                          My point is that things in the IO path go through that VM. They didn't want to write a full IO driver stack for Hyper-V, so they have the VM for that. Compute/memory doesn't go through it (that I know of), but network and disk IO do (otherwise Perfmon wouldn't work as a monitoring solution on the host).

                          AFAIK only ESXi uses a microkernel with a fully isolated management-agent plane (it's actually just a BusyBox shell).

                          • black3dynamite @travisdh1

                            @travisdh1 said in HyperV Partitioning:

                            You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.

                            XenServer does the same thing too.

                            • travisdh1 @black3dynamite

                              @black3dynamite said in HyperV Partitioning:

                              @travisdh1 said in HyperV Partitioning:

                              @storageninja said in HyperV Partitioning:

                              @tim_g said in HyperV Partitioning:

                              Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do, is manage the VMs using various components.
                              The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?

                              If that VM is a pure control plane, then I can reboot or patch it without impacting network or storage IO on the other VMs in the same way I can restart management agents on ESXi or KVM right? If Hyper-V is handling the network and storage traffic 100% then surely it must have its own driver stack, and not be dependent on the management VM for these functions, right?
                              Unless this has changed, you lost every VM on a host from a simple reboot of the management VM previously.

                              You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.

                              XenServer does the same thing too.

                              Where did I claim that anyone gets it right? Looks to me like only ESXi gets it right on this particular issue.

                              • DustinB3403

                                You can restart the toolstack on XenServer and not interfere with the daily operation of the VMs.

                                So um...

                                • Obsolesce @StorageNinja

                                  @storageninja said in HyperV Partitioning:

                                  My point is that things in the IO path go through that VM. They didn't want to write a full IO driver stack for Hyper-V so they have the VM for that. Compute/Memory doesn't go through it (that I know of), but network and disk IO do. (Otherwise Perfmon wouldn't work as a monitoring solution on the host).

                                  I think you have some misconceptions or misunderstandings regarding the Hyper-V architecture and components... or Hyper-V stack...

                                  This is not true at all; it actually depends on the OS running in the VM.

                                  Operating systems that already have the integration components baked into their kernel ("enlightened" VMs) use their own hypercalls to communicate directly with the hypervisor, and from there to the physical hardware.

                                  Only for non-supported (older) operating systems does the "parent partition" intercept the VM's communication, emulating hypercalls. In that case there is a performance penalty, as the management OS has to act as a bridge for the VM to access the hardware.

                                  To note, this is why it's important for VMs to be running the latest IC version.
