RAID fumble.
-
At the time that Hyper-V was designed, there was only one architecture for enterprise type 1 virtualization: a hypervisor with a "control" or "Dom0" driver environment. All three players, Xen, VMware and Hyper-V, did the exact same thing. Xen and Hyper-V have never varied from this.
In the time since then, KVM emerged with the hypervisor itself containing all of the drivers and extra components, and VMware ESXi 5 was completely overhauled to move to that model. So now the field is split 50/50 between having and not having a control environment separate from the hypervisor kernel, and the two camps argue over which design is better (one is faster in theory and the other is more stable and safe - but in reality Xen continues to have performance advantages that the other three lack).
-
Microsoft and VMware both made VirtualBox competitors back in the day. Microsoft Virtual PC and Virtual Server 2005 were both type 2 hypervisors; VMware made GSX and later VMware Server. These were type 2 (they run on top of an OS), were abysmally slow, and VMware's offerings were buggy. VMware still offers the silly Workstation product that I can't believe anyone would buy. And on the Mac it is popular to use Fusion, Parallels and other type 2 hypervisors.
But in reality, the type 2 market is completely owned by Oracle with VirtualBox; it is the standout leader in its field, with the biggest market, the most advanced code base and the most active development.
-
@scottalanmiller said:
@creayt said:
I'm not saying you're wrong, I'm just saying that it's fascinating and I'd like to learn more. It's especially interesting because Windows 8.1 ( even Home, I believe ) supports Hyper-V, so if it's converting the existing host OS into a VM, it somehow maintains full, unthrottled performance in that new host VM and fully utilizes the real hardware drivers ( such as video ).
It does not; it loses performance, and this is why VirtualBox is considered superior for that type of use, since it is a type 2 hypervisor, not a type 1.
This stuff is blowing my mind! Very informative, thank you.
So is it safe to assume that if you uninstall Hyper-V from, say, a Windows 8.1 Home laptop, it reverts to the original pre-bare-metal-hypervisor state? Or does it stay in degraded performance mode for time and all eternity, barring a full host OS reinstall?
-
@creayt said:
So is it safe to assume that if you uninstall Hyper-V from, say, a Windows 8.1 Home laptop, it reverts to the original pre-bare-metal-hypervisor state? Or does it stay in degraded performance mode for time and all eternity, barring a full host OS reinstall?
Yes, it reverts.
Imagine that it is almost like dual booting. When Hyper-V is active, it secretly boots to Hyper-V, then fires up Windows 8.1 in a VM and shows you that console directly on the screen so that, other than the performance loss, you can't tell what it has done.
If you remove Hyper-V, the boot loader just points directly to the Windows 8.1 install rather than the Hyper-V install, and Windows 8.1 boots without knowing that Hyper-V isn't there or that it ever was; to Windows 8.1 it is always booting on its own like normal.
So it is like dual booting trickery combined with console redirects.
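You can actually see that switch: the whole thing hinges on the hypervisorlaunchtype setting in the boot configuration. As a rough illustration (this assumes the default boot entry; uninstalling the role flips it for you), toggling it from an elevated prompt looks like this:
```
# Sketch only: the boot-time switch behind the dual booting trickery.
# "off" makes the loader boot Windows 8.1 directly at the next restart;
# "auto" puts the hypervisor back underneath it.
bcdedit /set hypervisorlaunchtype off
# ...reboot, and to put Hyper-V back later:
bcdedit /set hypervisorlaunchtype auto
```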
-
I see.
So, given that it works this way compared to just running as a layer on top of the host OS like a type 2 does, is there anything worth thinking about / applicable to any decision making beyond "if we run it this way, the performance of the host won't be as good as its specs would indicate"?
Also, is there a way to guess/gauge how much of a performance hit the host will have w/ type 1?
Also, is the host VM's performance almost equivalent to the full host's hardware resources, or closer to a pretty limited VM?
-
@creayt said:
Also, is the host VM's performance almost equivalent to the full host's hardware resources, or closer to a pretty limited VM?
All VMs are pretty close to 98%+ of native performance these days. The impact is pretty minimal. And if you go with full PV on Xen (not available for Windows) it's more like 99%+.
-
@creayt said:
Also, is there a way to guess/gauge how much of a performance hit the host will have w/ type 1?
So low that generally you don't bother. Basically 1-2% tops. The only places where it really matters are low latency applications, like high velocity trading, where you are working in the nanosecond measurement range.
-
@creayt said:
So, given that it works this way compared to just running as a layer on top of the host OS like a type 2 does, is there anything worth thinking about / applicable to any decision making beyond "if we run it this way, the performance of the host won't be as good as its specs would indicate"?
Better stability, of course. There is a reason that a type 2 is considered out of the question for server workloads. A type 2 breaks a basic rule of IT: never run an unvirtualized workload unless there is no other option. If you're using a type 2, the most important OS isn't protected by virtualization!
-
As we're down the garden path with VMs, is there one yet that will let me run games properly in Windows with full hardware (video card) performance? One day I will have my dream of "alt-tabbing" to another OS on the same desktop, with full hardware performance for each one (or even just one of my choosing).
-
@MattSpeller said:
As we're down the garden path with VMs, is there one yet that will let me run games properly in Windows with full hardware (video card) performance? One day I will have my dream of "alt-tabbing" to another OS on the same desktop, with full hardware performance for each one (or even just one of my choosing).
Good luck ever seeing your dream happen.
-
@thecreativeone91 yep, I am resigned to that, though ever hopeful.
-
@MattSpeller said:
As we're down the garden path with VMs, is there one yet that will let me run games properly in Windows with full hardware (video card) performance? One day I will have my dream of "alt-tabbing" to another OS on the same desktop, with full hardware performance for each one (or even just one of my choosing).
Not really. In theory you could: if you move the entire hypervisor layer into hardware a la Power, then you can do this, and have been able to for decades. BUT this is a trick, because the overhead remains; we've just renamed it so that technically the full hardware is available to you, yet it is no faster.
-
Back to the original discussion:
It turns out we want to deploy the additional SSDs now anyway, so we will be reconstructing the RAID. Which brings me to my next question.
I had a previous discussion about overprovisioning the drives pre-RAID in hopes of reaping the performance and longevity effects of doing so even though our RAID controller doesn't support TRIM. So I prepped the drives using Samsung Magician to manually configure somewhere around 20% of overprovisioning space ( up from the 7% default that ships w/ the 1TB 850 Pros ), and had the datacenter RAID 10 them up.
It appears that the overprovisioning setup is little more than resizing the main partition of the drive to leave a larger amount of unused space, which presumably the SSD knows internally to use for self-love and management. Thus, when the datacenter peeps created the RAID 10, it removed all partitions from all drives and used the full capacity of the drives for the RAID.
So the million dollar question is:
What's the better approach: 1. Configuring the new RAID 10 using only 80% of the total available space, and hoping that has the same result, or 2. Using the full space for the virtual disk and then shrinking the main host OS partition inside of Windows Server to create an excess at that level? It seems like the first approach would have the best chance of doing what we want, but at the same time I don't know whether it'll evenly distribute the unused 20% across the 10 drives or potentially just leave it floating at the end of the last drive or two.
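For reference, if we end up going with option 2, my rough plan would be something like the PowerShell below. This is just a sketch under my assumptions ( that the OS volume is C: and that it sits on the RAID 10 virtual disk ), not something I've tested yet:
```
# Sketch only: shrink the OS volume so roughly 20% of the virtual disk stays unallocated.
# Assumes C: is the main host OS partition and lives on the RAID 10 virtual disk.
$part   = Get-Partition -DriveLetter C
$disk   = Get-Disk -Number $part.DiskNumber
$target = [math]::Floor($disk.Size * 0.8)   # keep ~80% allocated, leave ~20% untouched

if ($part.Size -gt $target) {
    Resize-Partition -DriveLetter C -Size $target
}
```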
See this post for background on the original strategy we folks at Mango Lassi arrived at, which unfortunately ended up not panning out: http://mangolassi.it/topic/4704/help-w-raid
Edit: Oops, wrong link. Correct one: http://mangolassi.it/topic/4614/how-should-i-determine-exact-over-provisioning-levels-for-1tb-samsung-850-pro-ssds-to-be-used-in-a-raid-10
Thanks!
-
@creayt said:
Thus, when the datacenter peeps created the RAID 10 it removed all partitions from all drives and used the full capacity of the drives for the RAID.
So this is a process question.... but why are people in the datacenter doing System Admin tasks? I've seen places do this before, but it seems like a bad idea. There is no need for a NOC / DC tech to be doing this, the SA always has to double check it anyway, and there is a lot of room for error. And when you want to tweak things, like this, the process gets broken and it doesn't hold up anyway.
Why not let the DC do the physical work and leave the system's configuration to the systems people?
-
@creayt said:
What's the better approach: 1. Configuring the new RAID 10 using only 80% of the total available space, and hoping that has the same result
Can't imagine how that would have the same result, as the RAID controller has already provisioned the drives to 100%.
-
@scottalanmiller said:
@creayt said:
What's the better approach: 1. Configuring the new RAID 10 using only 80% of the total available space, and hoping that has the same result
Can't imagine how that would have the same result, as the RAID controller has already provisioned the drives to 100%.
I'm not sure I got what you meant but we're adding 4 new identical SSDs ( for a total of 10 drives ) and redoing the RAID from scratch next week.
-
@scottalanmiller said:
So this is a process question.... but why are people in the datacenter doing System Admin tasks? I've seen places do this before, but it seems like a bad idea. There is no need for a NOC / DC tech to be doing this, the SA always has to double check it anyway, and there is a lot of room for error. And when you want to tweak things, like this, the process gets broken and it doesn't hold up anyway.
Why not let the DC do the physical work and leave the system's configuration to the systems people?
I actually don't know what any of those acronyms are, LOL. I'm a web developer, and this is my new server; it's colocated in a datacenter a few states away, and at this point they have to do any and all non-remote-desktop tasks, there's just no other option. It's got a DRAC card, but I'm new to servers and learning this as I go, and that's not set up ( yet ).
-
@creayt said:
@scottalanmiller said:
@creayt said:
What's the better approach: 1. Configuring the new RAID 10 using only 80% of the total available space, and hoping that has the same result
Can't imagine how that would have the same result, as the RAID controller has already provisioned the drives to 100%.
I'm not sure I got what you meant but we're adding 4 new identical SSDs ( for a total of 10 drives ) and redoing the RAID from scratch next week.
When you put them into RAID, the controller will fully provision the drives at the drive level and then present you with only part of that capacity. Using only part of it may look like overprovisioning from your side, but to the drive itself, it has been fully provisioned.
-
@creayt said:
2. Using the full space for the virtual disk and then shrinking the main host OS partition inside of Windows Server to create an excess at that level? It seems like the first approach would have the best chance of doing what we want, but at the same time I don't know whether it'll evenly distribute the unused 20% across the 10 drives or potentially just leave it floating at the end of the last drive or two.
Same problem here. Using only part of the storage "somewhere up the stack" won't be visible to the drives or even to the RAID controller.
-
@creayt said:
@scottalanmiller said:
@creayt said:
Thus, when the datacenter peeps created the RAID 10 it removed all partitions from all drives and used the full capacity of the drives for the RAID.
So this is a process question.... but why are people in the datacenter doing System Admin tasks? I've seen places do this before, but it seems like a bad idea. There is no need for a NOC / DC tech to be doing this, the SA always has to double check it anyway, and there is a lot of room for error. And when you want to tweak things, like this, the process gets broken and it doesn't hold up anyway.
Why not let the DC do the physical work and leave the system's configuration to the systems people?
I actually don't know what any of those acronyms are, LOL. I'm a web developer, and this is my new server; it's colocated in a datacenter a few states away, and at this point they have to do any and all non-remote-desktop tasks, there's just no other option. It's got a DRAC card, but I'm new to servers and learning this as I go, and that's not set up ( yet ).
NOC = Network Operations Center
DC = Data Center
SA = System Administrator
Someone correct me if I'm wrong!