The world was rocked recently by news of a serious flaw in essentially all Intel processors made in the last 10 years or so.  Then we learned even worse news – there are two related flaws, and Intel is not the only manufacturer whose processors are affected.

All modern processors perform “speculative execution” to increase processing speed.  Instructions are not simply executed one at a time, in order.  Instead, multiple instructions are executed in parallel, with the processor speculating about things like which branch in a series of instructions will be taken.  If the speculation is correct, the instructions appear to have been executed amazingly fast.  If not, the results of the incorrect speculation are discarded and processing is slower.
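
To make branch prediction a little more concrete, here is a small, self-contained C sketch (the array size, threshold, and pass count are arbitrary choices of mine, not anything from the vendor disclosures).  The same loop tends to run noticeably faster when the data is sorted, because the branch becomes predictable and the processor's speculation is almost always right; with random data, frequent mispredictions force the speculated work to be discarded.  Note that an aggressively optimizing compiler may convert the branch to branch-free code, which hides the effect.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)   /* ~1 million elements (arbitrary size) */

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Sum all elements >= 128 and report how long it took. */
static void timed_sum(const int *data)
{
    clock_t start = clock();
    long long sum = 0;
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)        /* the branch the CPU speculates on */
                sum += data[i];
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("sum=%lld  time=%.2fs\n", sum, secs);
}

int main(void)
{
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    timed_sum(data);                   /* random data: branch is unpredictable */

    qsort(data, N, sizeof data[0], cmp_int);
    timed_sum(data);                   /* sorted data: branch is highly predictable */
    return 0;
}
```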

Unfortunately, while the direct results of this speculation are easily discarded, not all of its side effects are.  For example, the branch predictor in the processor (which guesses which path a program will take) retains traces of the discarded work.  More importantly, data referenced during speculative execution may have been loaded into the processor cache – and it stays there even after the speculative results are thrown away.
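
That cache footprint is exactly what an attacker measures.  The sketch below is x86-specific and assumes GCC or Clang, where the _mm_clflush, _mm_mfence, and __rdtscp intrinsics are available via x86intrin.h; it simply shows that a load from a line that was flushed out of the cache is measurably slower than a load from a line that is already cached.  Published attacks use the same kind of timing probe to work out which lines speculative execution touched.

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp (x86 only) */

static uint8_t probe[64];

/* Time a single load with the timestamp counter.  A short time means the
   line was already in the cache; a long time means it came from memory. */
static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                          /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    probe[0] = 1;                         /* touch the line once */

    _mm_clflush(probe);                   /* evict the line from the caches */
    _mm_mfence();
    uint64_t cold = time_access(probe);   /* slow: comes from memory */

    uint64_t warm = time_access(probe);   /* fast: now cached */

    printf("cold: %llu cycles, warm: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    return 0;
}
```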

Separation of address spaces via virtual addressing is a fundamental premise of all modern computers.  User-space application programs can’t access kernel (operating system) memory.  Different processes can’t access each other’s memory.  Different guest virtual machines can’t access the hypervisor’s memory or each other’s.

But it turns out that speculative execution can result in information leakage between these entities.  While details of the flaws are being closely held for obvious reasons, it is known that, at a minimum, data that should be restricted can be inferred by an attacker.  The more aggressively a processor speculates, the easier these flaws are to exploit.

The processors themselves can’t be fixed or patched to eliminate these flaws.  Instead, patches are being developed for operating systems to change the way they use the processors.  This will result in performance hits on all systems.  While the extent of the hit varies significantly with the workload, estimates range anywhere from 5% to 50%.  Ouch.  And while new processors (available when?) are anticipated to eliminate the flaws, existing processors will remain in use for years to come.

At first glance, these flaws would appear to primarily impact multi-user systems.  Unfortunately, they can also be exploited by malware executing on single-user systems (our phones and laptops) to gain access to restricted data.  And in case it isn’t obvious, cloud processing relies heavily on virtual machines, so it is impacted as well.

The more severe of the two flaws has been named Meltdown, and it impacts Intel processors and at least some ARM chips.  Given the ubiquity of those processor types, the fact that not every processor is affected is no great comfort.  Meltdown exploits the fact that, during out-of-order (speculative) execution, a load from kernel memory can proceed before the permission check that should block it takes effect; the privileged value is never directly visible to the attacker, but it leaves a footprint in the cache that can be read back with a timing probe, potentially exposing large amounts of restricted data – including the contents of kernel memory.

The other flaw is named Spectre, and it likely impacts essentially all current processors, since they all perform speculative execution to some extent.  The best-known variant exploits the gap between an array bounds check and the array access itself: speculative execution can race past the check for an invalid index and briefly access data outside the array’s legitimate range.  A second published variant, “branch target injection,” deliberately mistrains the branch predictor so that the victim speculatively executes a code sequence of the attacker’s choosing.
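
The Spectre paper illustrates the bounds-check-bypass variant with a few lines of C; the version below follows that illustrative pattern, wrapped so it compiles on its own (the array names mirror the paper’s example, and the 4096-byte stride – one page per possible byte value – is a common choice in published proofs of concept).  After the attacker trains the branch predictor with in-range values of x, the processor may speculatively perform both loads even when x is far out of bounds, and the out-of-bounds byte determines which part of array2 lands in the cache, where it can be recovered with a timing probe like the one sketched earlier.

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];        /* one page per possible byte value */

/* Bounds-check-bypass gadget (Spectre variant 1).  Architecturally the
   out-of-bounds access never happens, but a mistrained branch predictor
   can cause it to happen speculatively, leaving a cache footprint that
   encodes the secret byte array1[x]. */
uint8_t victim_function(size_t x)
{
    if (x < array1_size)
        return array2[array1[x] * 4096];
    return 0;
}
```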

Patches for Windows, Linux, and other operating systems are well along and are in the process of being released.  Unfortunately, some adverse effects are being reported.  The extent to which these patches will actually be deployed is not clear.  And recent news is that the related microcode (firmware) updates can cause more frequent reboots, even on Intel’s newest processors.

More detail on these flaws can be found in numerous articles on the web.  I recommend a series of easy-to-read articles from Ars Technica providing a near real-time account of the Meltdown and Spectre revelations, as well as this article, which links to five pieces covering the full saga of the flaws.

In Part 2 of this blog, we will focus on the way these flaws may impact Common Criteria (CC) evaluations.