RAID fumble.
-
@dafyre said:
So what, once I've installed Hyper-V on my Windows Server 2012 R2 box, it essentially becomes the dom0 equivalent in Xen?
Yes, that's absolutely exactly what it does.
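For what it's worth, turning it on is just the normal role/feature workflow; a rough sketch from memory (the Server cmdlet and the client-SKU equivalent differ):

```powershell
# On Windows Server 2012 R2, from an elevated PowerShell prompt:
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# On a client SKU (e.g. Windows 8.1 Pro), the equivalent optional feature:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```

The required reboot is the interesting part: that's when the boot configuration gets rearranged so the hypervisor loads first.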
-
@scottalanmiller said:
The role thing really gets people. There is no such thing as running HyperV on Windows Server; it looks that way, but technically it cannot happen. The "role" is not really a role: it is just a tool that introduces a shim, installs HyperV to the bare metal, and then runs the Windows Server that you are looking at in a VM. It's all on HyperV, nothing on Windows. There is no exception to this. It just looks like something else.
Do you have some links to documentation on this?
If what you're saying is true, there are some weird implications:
-
The host OS "vm" has the special privilege of seemingly-unvirtualized/full access to its hardware resources and does not allow for any granular allocation/limitation on things like processor weighting or RAM ( for example, this special VM can see the real network adapters, not virtualized ones like all of the VMs do ).
-
All other VMs are dependent on this new host VM to run. They cannot run if the host OS isn't up and running. Correct me if I'm wrong.
-
The underlying files that run the host VM's OS are still on the main-level of the drive hardware, so not encapsulated into a .vhd, right?
I'm not saying you're wrong, I'm just saying that it's fascinating and I'd like to learn more. It's especially interesting because Windows 8.1 ( even Home I believe ) supports Hyper V so if it's converting the existing host OS into a VM it somehow maintains full, unthrottled performance in that new host VM and fully utilizes the real hardware drivers ( such as video ).
-
-
@creayt said:
Do you have some links to documentation on this?
https://en.wikipedia.org/wiki/Hyper-V
https://technet.microsoft.com/library/cc816638(WS.10).aspx
Microsoft has been super clear on HyperV being a native / bare metal / type 1 hypervisor since day one. Only this type of hypervisor is considered remotely acceptable for server virtualization. HyperV is one of the big four enterprise type 1 hypervisors: VMware ESXi, HyperV, KVM and Xen.
-
@creayt said:
- The host OS "vm" has the special privilege of seemingly-unvirtualized/full access to its hardware resources and does not allow for any granular allocation/limitation on things like processor weighting or RAM ( for example, this special VM can see the real network adapters, not virtualized ones like all of the VMs do ).
Yes, that is correct. That's what @dafyre and I were talking about in comparing it to the Dom0. The "control" VM, or the Dom0, has special access used for drivers, monitoring, etc. Microsoft, to make this "easier" for people, refers to this VM as the "physical" VM, even though it is obviously not physical and cannot be without making the product a toy like VirtualBox would be.
-
@creayt said:
- All other VMs are dependent on this new host VM to run. They cannot run if the host OS isn't up and running. Correct me if I'm wrong.
Correct. Again modeled directly off of how Xen and ESXi 4 and earlier were designed. There is nothing even slightly abnormal here.
-
@creayt said:
- The underlying files that run the host VM's OS are still on the main-level of the drive hardware, so not encapsulated into a .vhd, right?
Correct, it reads them raw.
-
@creayt said:
I'm not saying you're wrong, I'm just saying that it's fascinating and I'd like to learn more. It's especially interesting because Windows 8.1 ( even Home I believe ) supports Hyper V so if it's converting the existing host OS into a VM it somehow maintains full, unthrottled performance in that new host VM and fully utilizes the real hardware drivers ( such as video ).
It does not, it loses performance and this is why VirtualBox is considered superior for that type of use since it is a Type 2 hypervisor, not a Type 1.
-
At the time that HyperV was designed, there was only one architecture for enterprise type 1 virtualization: hypervisor with a "control" or "Dom0" driver environment. All three players, Xen, VMware and HyperV, did the exact same thing. Xen and HyperV have never varied from this.
In the time since then, KVM emerged with the hypervisor itself containing all of the drivers and extra components, and VMware ESXi 5 was completely overhauled to move to that model. So now the field is split 50/50 between having and not having a control environment separate from the hypervisor kernel, and the two camps argue as to which design is better (one is faster, in theory, and the other is more stable and safe - but in reality Xen continues to have performance advantages that the other three lack.)
-
Microsoft and VMware both made VirtualBox competitors back in the day. Microsoft Virtual PC and Virtual Server 2005 were both type 2 hypervisors. VMware made GSX and later VMware Server. These were type 2 (running on top of an OS) and were abysmally slow, and VMware's offerings were buggy. VMware still offers the silly Workstation product that I can't believe anyone would buy. And on the Mac it is popular to use Fusion, Parallels and other type 2 hypervisors.
But in reality, the Type 2 market is completely owned by Oracle with VirtualBox; it is the standout leader in its field, with the biggest market, most advanced code base and most active development.
-
@scottalanmiller said:
@creayt said:
I'm not saying you're wrong, I'm just saying that it's fascinating and I'd like to learn more. It's especially interesting because Windows 8.1 ( even Home I believe ) supports Hyper V so if it's converting the existing host OS into a VM it somehow maintains full, unthrottled performance in that new host VM and fully utilizes the real hardware drivers ( such as video ).
It does not, it loses performance and this is why VirtualBox is considered superior for that type of use since it is a Type 2 hypervisor, not a Type 1.
This stuff is blowing my mind! Very informative, thank you.
So is it safe to assume that if you uninstall Hyper V from say a Windows 8.1 Home laptop it reverts to the original pre-bare-metal-hypervisor state? Or does it stay in degraded performance mode for time and all eternity until the case of a full host OS reinstall?
-
@creayt said:
So is it safe to assume that if you uninstall Hyper V from say a Windows 8.1 Home laptop it reverts to the original pre-bare-metal-hypervisor state? Or does it stay in degraded performance mode for time and all eternity until the case of a full host OS reinstall?
Yes, it reverts.
Imagine that it is almost like dual booting. When HyperV is active it boots to HyperV secretly, then fires up Windows 8.1 in a VM and pipes the console directly to the screen so that, other than the performance loss, you can't tell what it has done.
If you remove HyperV, the boot loader just points directly to the Windows 8.1 install rather than the HyperV install, and Windows 8.1 boots without knowing that HyperV isn't there or that it ever was; to Windows 8.1, it is always booting on its own like normal.
So it is like dual booting trickery combined with console redirects.
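You can actually see that boot loader trickery for yourself in the BCD store; a sketch from memory, run from an elevated command prompt (hypervisorlaunchtype is the setting the role flips):

```bat
rem Show the current boot entry, including hypervisorlaunchtype:
bcdedit /enum {current}

rem Stop the hypervisor loading at next boot (Windows boots bare metal again):
bcdedit /set {current} hypervisorlaunchtype off

rem Re-enable it:
bcdedit /set {current} hypervisorlaunchtype auto
```

Toggling it off and rebooting is exactly the "reverts" behavior described above, without uninstalling anything.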
-
I see.
So given that it works this way, compared to just running as a layer on top of the host OS like a type 2 does, is there anything worth thinking about / applicable to any decision making beyond "if we run it this way, the performance of the host won't be as good as its specs would indicate"?
Also, is there a way to guess/gauge how much of a performance hit the host will have w/ type 1?
Also, is the host VM's performance almost equivalent to the full host's hardware resources, or closer to a pretty limited VM?
-
@creayt said:
Also, is the host VM's performance almost equivalent to the full host's hardware resources, or closer to a pretty limited VM?
All VMs are pretty close to 98%+ these days. The impact is pretty minimal. And if you go with full PV on Xen (not available for Windows) it's more like 99%+.
-
@creayt said:
Also, is there a way to guess/gauge how much of a performance hit the host will have w/ type 1?
So low that generally you don't bother measuring it. Basically 1-2% tops. The only places where it really matters are low latency applications, like high velocity trading, where you are working in the nanosecond measurement range.
-
@creayt said:
So given that it works this way, compared to just running as a layer on top of the host OS like a type 2 does, is there anything worth thinking about / applicable to any decision making beyond "if we run it this way, the performance of the host won't be as good as its specs would indicate"?
Better stability, of course. There is a reason that a type 2 is considered out of the question for server workloads. A type 2 breaks a basic rule of IT: never run an unvirtualized workload unless there is no other option. If you're using a type 2, the most important OS isn't protected by virtualization!
-
As we're down the garden path with VM's, is there one yet that will let me run games properly in Windows with full hardware (video card) performance? One day I will have my dream of "alt-tab'ing" to another OS on the same desktop, with full hardware performance for each one (or even just one of my choosing)
-
@MattSpeller said:
As we're down the garden path with VM's, is there one yet that will let me run games properly in Windows with full hardware (video card) performance? One day I will have my dream of "alt-tab'ing" to another OS on the same desktop, with full hardware performance for each one (or even just one of my choosing)
Good luck ever seeing your dream happen.
-
@thecreativeone91 yep, I am resigned to that though ever hopeful
-
@MattSpeller said:
As we're down the garden path with VM's, is there one yet that will let me run games properly in Windows with full hardware (video card) performance? One day I will have my dream of "alt-tab'ing" to another OS on the same desktop, with full hardware performance for each one (or even just one of my choosing)
Not really. You could, in theory: if you move the entire hypervisor layer into hardware, a la Power, then you can do this, and people have been able to for decades. BUT this is a trick, because the overhead remains; we've just renamed it, so technically the full hardware is available to you, but it is no faster.
-
Back to the original discussion:
It turns out we want to deploy the additional SSDs now anyway, so will be reconstructing the RAID. Which brings me to my next question.
I had a previous discussion about overprovisioning the drives pre-RAID, in hopes of reaping the performance and longevity effects of doing so even though our RAID controller doesn't support TRIM. So I prepped the drives using Samsung Magician to manually configure somewhere around 20% of overprovisioning space ( up from the 7% default that ships w/ the 1TB 850 Pros ), and had the datacenter RAID 10 them up.
It appears that the overprovisioning setup is little more than resizing the main partition of the drive to leave a larger amount of unused space, which the SSD presumably knows internally to use for wear-leveling and housekeeping. Thus, when the datacenter peeps created the RAID 10, it removed all partitions from all drives and used the full capacity of the drives for the array.
So the million dollar question is:
What's the better approach:
1. Configuring the new RAID 10 using only 80% of the total available space, and hoping that has the same result, or
2. Using the full space for the virtual disk and then shrinking the main host OS partition inside of Windows Server to create an excess at that level?
It seems like the first approach would have the best chance of doing what we want, but at the same time I don't know whether it'll evenly distribute the unused 20% across the 10 drives or potentially just leave it floating at the end of the last drive or two.
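For whatever it's worth, the arithmetic behind option 1 is easy to sanity check. Just a sketch with illustrative numbers (ten 1 TB drives, treating "1 TB" as a round 1000 GB):

```python
# Sketch: sizing a RAID 10 virtual disk to leave ~20% of the array unused
# for over-provisioning. Numbers are illustrative (ten 1 TB 850 Pros).

drives = 10
drive_capacity_gb = 1000          # advertised capacity per drive
op_fraction = 0.20                # extra over-provisioning target

# RAID 10 mirrors pairs, then stripes: usable capacity is half the total.
array_capacity_gb = drives * drive_capacity_gb / 2

# Option 1: create the virtual disk at 80% of the array's capacity.
virtual_disk_gb = array_capacity_gb * (1 - op_fraction)

print(array_capacity_gb)   # 5000.0
print(virtual_disk_gb)     # 4000.0
```

Note this only carves out the space at the virtual-disk level; as the question says, whether the controller actually leaves the corresponding LBAs on each physical drive untouched is the real unknown. A secure erase on the drives before building the 80% array should help either way, since NAND that has never been written stays available to the drive's internal garbage collection regardless of partitioning.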
See this post for the original strategy us folk at Mango Lassi arrived at for background, which ended up not panning out unfortunately: http://mangolassi.it/topic/4704/help-w-raid
Edit: Oops, wrong link. Correct one: http://mangolassi.it/topic/4614/how-should-i-determine-exact-over-provisioning-levels-for-1tb-samsung-850-pro-ssds-to-be-used-in-a-raid-10
Thanks!