Solved XenServer - Specify cores per VM
-
So I just had a random thought (sorry): is it possible to specify which cores of my hypervisor a specific VM will use?
If so, where/how would you configure this? There is no option for it within the default configuration.
Would there be any pros or cons to doing this?
-
This is called "core affinity" and it is a bad idea except for extremely specific use cases: very targeted NUMA tuning, or trading throughput for latency as in low-latency trading (which does not get virtualized.) 99.999% of the time this would be a very bad idea, and it is not exposed for a reason: people would cripple their installs. You don't want this unless you really, really understand NUMA, CPU cache hits, and the fact that load balancing will go out the window.
Basically, the system is tuned for throughput, and core affinity is tuning against throughput. There are use cases for that, but extremely few.
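For what it's worth, the knob does exist at the CLI level even though XenCenter hides it. A hedged sketch, assuming a XenServer version that honors the `VCPUs-params:mask` parameter; `<vm-uuid>` and the VM name are placeholders:

```shell
# Look up the UUID of the VM you want to pin ("my-vm" is a placeholder name).
xe vm-list name-label="my-vm" params=uuid

# Pin the VM's vCPUs to physical cores 0 and 1.
# VCPUs-params:mask takes a comma-separated list of host core numbers.
xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=0,1

# Remove the mask again so the scheduler can float the vCPUs normally.
xe vm-param-remove uuid=<vm-uuid> param-name=VCPUs-params param-key=mask
```

The mask takes effect the next time the VM starts. As said above, outside deliberate NUMA tuning this is almost always the wrong thing to do.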
-
Thanks for the answer.
I'm not looking to do it on my installs, but was curious if it was even doable.
Thanks for explaining.
-
did you just listen to Security Now? or read about the new bleed flaw? lol
-
@Dashrender nope, just a random thought
-
@Dashrender said:
did you just listen to Security Now? or read about the new bleed flaw? lol
Why, what were they talking about?
-
Adding to what @scottalanmiller already said: I learned about this back in the early 2000s, in SGI's sysadmin training courses. The OS had a way to assign a process to a CPU, but they told you not to bother, as the process scheduler would still assign other processes to that core. The only "correct" way to assign processes to cores was within the program itself; SGI actually had a library for many programming languages that would properly communicate with the process scheduler in the OS. Of course, the largest single-system-image deployment of any of the sysadmins in the class was a 2,000-CPU machine (this was 2001), spread out among lots of racks, which caused delays if a process got spread out to random processors and memory banks all across the system. Today's x86 processors and operating systems at least don't have as many problems with delays from the "I have a CPU requesting a memory page that is 300 feet away" issue!
-
@travisdh1 Linux does not have that problem today. Process affinity and pinning work without a problem. The issue is that it is like using a separate hard drive for every process rather than sharing a RAID array. There are very specific times that it makes sense, but for 99.99% of workloads, if you try to do that you will cripple your system.
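To make the Linux side concrete, here is a minimal sketch of process pinning using Python's `os.sched_setaffinity` wrapper (Linux-only; `taskset` on the command line does the same thing). This is the per-process equivalent of pinning a VM's vCPUs:

```python
import os

# Linux-only: query which CPUs this process is currently allowed to run on.
original = os.sched_getaffinity(0)  # 0 means "the calling process"
print(f"Allowed CPUs: {sorted(original)}")

# Pin the process to CPU 0 only. The scheduler will no longer float it
# across cores, which is exactly the throughput trade-off discussed above.
os.sched_setaffinity(0, {0})
print(f"After pinning: {sorted(os.sched_getaffinity(0))}")

# Undo the pinning so the scheduler can load-balance again.
os.sched_setaffinity(0, original)
```

The same syscall underlies VM vCPU pinning in hypervisors; the scheduler simply stops considering the excluded cores for that task.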
-
@scottalanmiller said:
@travisdh1 Linux does not have that problem today. Process affinity and pinning work without a problem. The issue is that it is like using a separate hard drive for every process rather than sharing a RAID array. There are very specific times that it makes sense, but for 99.99% of workloads, if you try to do that you will cripple your system.
Right. They were doing things a little oddly just because of the physical size of the systems at the time.
-
@scottalanmiller said:
@Dashrender said:
did you just listen to Security Now? or read about the new bleed flaw? lol
Why, what were they talking about?
A new hack called CacheBleed: the ability for one process to detect cache collisions inside a hyper-threaded core and, through them, extract things like PGP private keys.
It requires things like what Dustin is asking about. I thought his question was apropos, considering I had just heard about it early yesterday.
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
did you just listen to Security Now? or read about the new bleed flaw? lol
Why, what were they talking about?
A new hack called CacheBleed: the ability for one process to detect cache collisions inside a hyper-threaded core and, through them, extract things like PGP private keys.
It requires things like what Dustin is asking about. I thought his question was apropos, considering I had just heard about it early yesterday.
I see. That's really targeted at processes on a CPU, not for VMs. Not that there isn't a theory that VMs could be affected but there are a few factors...
- CacheBleed requires that you know a LOT about the other processes that are running to figure out what is causing the cache to react that way.
- Requires that your workloads remain on a single CPU.
Process affinity would actually dramatically increase this risk rather than lower it. The best defence is the native process-floating behavior, because an attacker never knows what other workload shares their cache.
In cloud environments you are generally protected because even if you discover a key, you never know what key you discovered. It would be like walking in a field and finding a house key without any markings on it. You assume it opens a door somewhere. But what door, and where?
-
Yep, that's why it's currently not a real concern.
-
@Dashrender said:
Yep, that's why it's currently not a real concern.
And it only affects certain Intel CPUs. AMD users are in the clear right now.