Single-CPU programs running on Hyper-Threading-enabled quadcore CPU

Ray · May 22, 2012 · Viewed 8.5k times

I'm a researcher in statistical pattern recognition, and I often run simulations that run for many days. I'm running Ubuntu 12.04 with Linux 3.2.0-24-generic, which, as I understand, supports multicore and hyper-threading. With my Intel Core i7 Sandy Bridge Quadcore with HTT, I often run 4 simulations (programs that take a long time) at the same time. Before I ask my question, here are the things that I already (think I) know.

  • My OS (Ubuntu 12.04) detects 8 CPUs due to hyper-threading.
  • The scheduler in my OS is clever enough to avoid scheduling two programs onto two logical (virtual) cores belonging to the same physical core while other physical cores are idle, because the scheduler is SMT-aware (SMT: Simultaneous Multi-Threading, which Intel markets as Hyper-Threading).
  • I have read the Wikipedia page on Hyper-Threading.
  • I have read the HowStuffWorks page on Sandy Bridge.

OK, my question is as follows. When I run 4 simulations (programs) on my computer at the same time, they each run on a separate physical core. However, due to hyper-threading, each physical core is split into two logical cores. Therefore, is it true that each of the physical cores is only using half of its full capacity to run each of my simulations?

Thank you very much in advance. If any part of my question is not clear, please let me know.

Answer

xIcarus · Apr 1, 2015

This answer is probably late, but I see that nobody offered an accurate description of what's going on under the hood.

To answer your question: no, one thread will not use half a core. Only one thread can execute inside the core at any given moment, but that one thread can saturate the whole core's processing power.

Assume thread 1 and thread 2 belong to core #0. Thread 1 can saturate the whole core's processing power while thread 2 waits for it to finish executing. It's serialized execution, not parallel.

At a glance, it looks like that extra thread is useless. I mean, the core can only process 1 thread at a time, right?

Correct, but there are situations in which the cores are actually idling because of 2 important factors:

  • cache miss
  • branch misprediction

Cache miss

When it receives a task, the CPU searches inside its own cache for the memory addresses it needs to work with. In many scenarios the memory data is so scattered that it is physically impossible to keep all the required address ranges inside the cache (since the cache does have a limited capacity).

When the CPU doesn't find what it needs inside the cache, it has to access the RAM. The RAM itself is fast, but it pales compared to the CPU's on-die cache. The RAM's latency is the main issue here.

While the RAM is being accessed, the core is stalled: it's doing no useful work. A single stall is too short to notice, and CPU-load monitors still count a stalled core as busy, but the stalls add up: one cache miss after another hampers overall performance quite noticeably. This is where the second thread comes into play. While the core is stalled waiting for data, the second thread moves in to keep the core busy. Thus, you mostly negate the performance impact of core stalls.

I say mostly because the second thread can also stall the core if another cache miss happens, but the likelihood of both threads missing the cache back to back is much lower than for a single thread.

Branch misprediction

Branching is when a piece of code can continue down more than one path; the most basic example is an if statement. Modern CPUs have branch-prediction hardware that tries to predict which path a piece of code will take. These predictors are actually quite sophisticated, and although I don't have solid data on prediction rates, I do recall reading articles a while back stating that Intel's Sandy Bridge architecture has an average successful branch-prediction rate of over 90%.

When the CPU hits branching code, it effectively chooses one path (the path the predictor thinks is the right one) and executes it. Meanwhile, another part of the core evaluates the branch condition to see whether the predictor was indeed right. This is called speculative execution. It works similarly to 2 different threads: one evaluates the condition, and the other executes one of the possible paths in advance.

From here we have 2 possible scenarios:

  1. The predictor was correct. Execution continues normally from the speculatively executed branch, which was already running while the condition was being evaluated.
  2. The predictor was wrong. The entire pipeline that was processing the wrong branch has to be flushed and restarted from the correct branch. Or, the readily available second thread can step in and execute while the mess caused by the misprediction is resolved. This is the second use of hyper-threading. On average, branch prediction speeds up execution considerably since its success rate is very high, but performance incurs quite a penalty when a prediction is wrong.

Branch misprediction is not a major factor in performance degradation since, like I said, the correct-prediction rate is quite high. But cache misses are a problem and will continue to be a problem in certain scenarios.

From my experience, hyper-threading does help out quite a bit with 3D rendering (which I do as a hobby). I've noticed improvements of 20-30% depending on the size of the scenes and the materials/textures required. Huge scenes use huge amounts of RAM, making cache misses far more likely. Hyper-threading helps a lot in overcoming these misses.