Welcome to the Acumen Security Blog

San Bernardino, Paris, and the Implications for COTS Solutions Certification for Government Use

By now, nearly everyone in the Western world has heard of the ongoing campaign by law enforcement and intelligence agencies to get legislation requiring technology vendors, and in particular manufacturers of mobile devices, to provide back doors and/or ship encryption disabled by default. While this battle seems to be cyclical, rearing its head every few years, the latest push has come in response to the terror attacks in Paris, France and San Bernardino, CA, with the most recent development being a Federal court judge ordering Apple to come up with some mechanism to defeat the encryption on the San Bernardino shooter’s iPhone (or, at the very least, keep the auto-wipe from kicking in) to aid the FBI in accessing the device.

Many sources have already discussed the legal, moral, and technical arguments both for and against such mandates. I’d like to explore them in a different light: specifically, what such mechanisms might mean for certifying commercial products for use in the Federal government, on both classified and unclassified networks.

The Federal government is encouraged to leverage COTS solutions wherever it can, since buying commercial products to solve common problems that industry has already addressed is far cheaper than contracting bespoke solutions. Of course, given the sensitive nature of the work the government does and the data it holds, it needs those products to meet certain minimum security baselines. This is where Common Criteria, Commercial Solutions for Classified (CSfC), the UC APL, and FIPS come in.

Because FIPS 140-2 restricts the algorithms and cipher suites that may be used, it is common for commercial products to have a “FIPS mode”; a dedicated “Common Criteria mode” is less common. Many vendors find it more convenient to simply make sure that their product lines up with the Protection Profile requirements, which serve as a sort of high-level, categorical threat model. Building on these requirements, CSfC helps provide assurance that commercial products are suitably safe for use in higher-security environments.
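
To make the idea of a “FIPS mode” concrete, the sketch below shows the kind of cipher-suite restriction such a mode typically implies, using OpenSSL as an example. The suite list and the choice of OpenSSL 1.1.0-era APIs are my own illustrative assumptions, not requirements lifted from any validation, and a real FIPS mode also swaps in a validated cryptographic module rather than just trimming the cipher string.

    /* Sketch only: limiting a TLS client to a FIPS-style cipher list.
     * Assumes OpenSSL 1.1.0+; the exact suites are illustrative. */
    #include <openssl/ssl.h>
    #include <stdio.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (ctx == NULL)
            return 1;

        /* Disallow legacy protocol versions and non-approved suites. */
        if (SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION) != 1 ||
            SSL_CTX_set_cipher_list(ctx,
                "ECDHE-RSA-AES256-GCM-SHA384:"
                "ECDHE-RSA-AES128-GCM-SHA256") != 1) {
            fprintf(stderr, "failed to apply restricted cipher list\n");
            SSL_CTX_free(ctx);
            return 1;
        }

        /* ... create SSL connections from ctx as usual ... */
        SSL_CTX_free(ctx);
        return 0;
    }

Nothing here is specific to any vendor’s product; the point is only that a mode switch like this narrows the negotiable algorithms to an approved set, which is far easier to bolt on than a full “Common Criteria mode” would be.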

Looking at the Security Problem Definition and Security Objectives in the Protection Profile for Mobile Device Fundamentals, version 2.0, we see that many of the identified threats and objectives deal with exactly the issues currently being raised in the Paris and San Bernardino cases:

  • (T.EAVESDROP) An attacker is positioned on a wireless communications channel or elsewhere on the network infrastructure. Attackers may monitor and gain access to data exchanged between the Mobile Device and other endpoints.
  • (T.PHYSICAL) The loss or theft of the Mobile Device may give rise to loss of confidentiality of user data including credentials. These physical access threats may involve attacks which attempt to access the device through external hardware ports, through its user interface, and also through direct and possibly destructive access to its storage media. The goal of such attacks is to access data from a lost or stolen device which is not expected to return to its user.
  • (O.COMMS) To address the network eavesdropping and network attack threats described in Section 3.1, concerning wireless transmission of Enterprise and user data and configuration data between the TOE and remote network entities, conformant TOEs will use a trusted communication path.
  • (O.STORAGE) To address the issue of loss of confidentiality of user data in the event of loss of a Mobile Device (T.PHYSICAL), conformant TOEs will use data-at-rest protection. The TOE will be capable of encrypting data and keys stored on the device and will prevent unauthorized access to encrypted data.
  • (O.AUTH) To address the issue of loss of confidentiality of user data in the event of loss of a Mobile Device (T.PHYSICAL), users are required to enter an authentication factor to the device prior to accessing protected functionality and data. … Repeated attempts by a user to authorize to the TSF will be limited or throttled to enforce a delay between unsuccessful attempts.
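
The throttling requirement in O.AUTH is easy to picture in code. Below is a minimal sketch, assuming a hypothetical check_factor() credential check and a made-up delay schedule; a real device enforces this in firmware, alongside the failed-attempt counter that eventually triggers a wipe.

    /* Sketch only: throttling repeated authentication failures in the
     * spirit of O.AUTH. The backoff schedule here is a made-up example. */
    #include <stdbool.h>
    #include <unistd.h>

    /* Stand-in for the device's real credential check. */
    extern bool check_factor(const char *entered);

    static unsigned failed_attempts = 0;

    bool authenticate(const char *entered)
    {
        if (failed_attempts > 0) {
            /* Delay doubles with each consecutive failure:
             * 1s, 2s, 4s, ... capped at 256 seconds. */
            unsigned shift = failed_attempts - 1 > 8 ? 8 : failed_attempts - 1;
            sleep(1u << shift);
        }

        if (check_factor(entered)) {
            failed_attempts = 0;
            return true;
        }

        failed_attempts++;
        return false;
    }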

Among the SFRs themselves, we also find FCS_CKM_EXT.5, which deals with the TSF wipe, requiring that protected data be securely erased, including destruction of the associated keys.
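
To give a feel for what key destruction means in practice, here is a minimal sketch of zeroizing a data encryption key in memory. The function names are hypothetical, and a real TOE must also destroy copies held in dedicated hardware, in flash (wear-leveling included), and anywhere the OS may have cached them.

    /* Sketch only: zeroizing key material during a wipe, in the spirit
     * of FCS_CKM_EXT.5. */
    #include <stddef.h>
    #include <stdlib.h>

    /* Overwrite a buffer in a way the compiler cannot optimize away
     * (explicit_bzero()/SecureZeroMemory() serve the same purpose
     * where they are available). */
    static void zeroize(void *buf, size_t len)
    {
        volatile unsigned char *p = buf;
        while (len--)
            *p++ = 0;
    }

    /* Hypothetical helper: destroy a data encryption key (DEK). */
    void destroy_dek(unsigned char *dek, size_t dek_len)
    {
        zeroize(dek, dek_len);  /* key material gone...              */
        free(dek);              /* ...before the memory is released. */
    }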

Looking at the Data At Rest (DAR) Capability Package v2.8, IAD specifically mentions backdoors as a potential threat in the supply chain of the device:

  • Threat actions include manufacturing faulty or counterfeit parts or components that can be used to disrupt system or network performance, leaving open back doors in hardware that allow attackers easy ways to attack and evade monitoring, as well as easy ways to steal data or tamper with the integrity of existing/new data.

This same language is echoed in the Mobile Access CP v1.1.

The language in the CC and CSfC documents, essentially identifying everything the FBI is asking for as a potential threat that must be countered in any commercial product that wants to be available for use by the government, helps explain why the current DIRNSA, Adm. Mike Rogers, recently made statements in favor of strong cryptography.

Of course, the NSA hasn’t always taken such stances. Many people will remember the Dual_EC_DRBG fiasco from several years ago, which recently reared its ugly head again in Juniper’s ScreenOS (CVE-2015-7756), wherein a different, malicious Q-value had been inserted into the code and a separate bug was leveraged to keep the ANSI X9.31 (3DES-based) post-processing from ever masking the raw Dual_EC output. Once a back door is in place, it is there for everyone, no matter what color hat the person taking advantage of it is wearing. For another example, we can look back to the Athens Affair, wherein mechanisms mandated for CALEA-style lawful intercept were hijacked by still-unknown perpetrators to spy on high-level officials in the Greek government.
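
For readers who don’t remember why a substituted Q matters, the relationship is simple to state. Ignoring the truncation of the output, a Dual_EC_DRBG round with state s_i and points P and Q looks roughly like the sketch below; whoever chose Q and knows a scalar e with P = eQ can recover the next internal state from a single output block. This is the standard Shumow–Ferguson analysis, simplified here, not anything specific to the Juniper code.

    % Simplified Dual_EC_DRBG round (output truncation ignored):
    r_i = x(s_i Q), \qquad s_{i+1} = x(s_i P)

    % If P = eQ for a known e, then from the output point R_i = s_i Q:
    e R_i = e s_i Q = s_i (e Q) = s_i P

    % whose x-coordinate is exactly the next state s_{i+1}.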

In order to meet some hypothetical legislative requirement for back doors or weakened encryption, vendors might have to maintain split code bases, and possibly split build environments, to make sure that weakened code doesn’t accidentally make it into the “good” code base. Many vendors may choose not to go through the trouble or incur the cost, limiting the COTS options available to the government.
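
To see why the builds would need to be kept strictly apart, consider the purely hypothetical compile-time switch sketched below. Nothing like this exists in any product discussed here; the point is simply that one stray build flag would be enough for the weakened path to ship in the “good” product, which is exactly the risk vendors would have to engineer (and pay) to avoid.

    /* Purely hypothetical sketch: a compile-time switch gating a
     * mandated key-escrow path. Illustrative only. */
    #include <stddef.h>

    int encrypt_volume(const unsigned char *key, size_t key_len);
    int escrow_key_to_authority(const unsigned char *key, size_t key_len);

    int protect_user_data(const unsigned char *key, size_t key_len)
    {
    #ifdef MANDATED_ACCESS_BUILD
        /* Weakened build: a copy of the key leaves the device. */
        if (escrow_key_to_authority(key, key_len) != 0)
            return -1;
    #endif
        /* Normal build: the key never leaves the device. */
        return encrypt_volume(key, key_len);
    }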

Additionally, assuming that vendors are willing to play by the split rules, would that mean additional assurance activities requiring labs to verify that the back doors are not present in the code, and that the binaries submitted for evaluation were actually built from that clean code base? This would, of course, require disclosing the nature of the back-door mechanisms to the evaluation labs, widening the circle of people with knowledge of the specifics of each product, and it would surely slow down the evaluation process once companies had re-architected their products and build systems to support the new mandates.

Even if we ignore the real effects that weakening encryption or adding back doors would have on the privacy and security of private-sector users, exposing everyone to greater risk of cybercrime, we are still left with a logical inconsistency: the Federal government wants to be able to purchase commercial products with strong cryptography, while certain parts of that same government do not want those products to be built that way, thereby potentially limiting their own options.

Then again, if the key-escrow back door in the MIKEY-SAKKE VoIP protocol that GCHQ designed for the UK is any precedent, maybe the FBI is willing to follow suit, no matter the economic, social, or security consequences.

Intrusion Prevention System (IPS) Extended Package (EP) Update Published

In late January 2016 an updated version of the Intrusion Prevention System (IPS) Extended Package (EP) was released. Although the changes to the EP itself are minor, the change in which Protection Profiles it can extend may make this update significant for vendors seeking certification.

At first glance, version 2.1 of the IPS EP is nearly identical to version 2.0 that preceded it. The scope of the EP remains the same, as do the threats it addresses and its objectives. Neither the required nor the optional Security Functional Requirements (SFRs) have been altered, and there have been no changes to the Assurance Activities (AAs). What has changed is the Protection Profile (PP) the EP can be used with. Version 2.0 of the IPS EP could only be used to extend the collaborative Protection Profile for Network Devices (NDcPP), whereas version 2.1 can be applied to products going against either the NDcPP or the collaborative Protection Profile for Stateful Traffic Filter Firewalls (FWcPP). Under the old version of the EP vendors did have the option of certifying their products against the FWcPP as well, but this would not have freed them from the NDcPP requirements. With version 2.1, vendors can now add the IPS EP (whose official short name is still PP_NDcPP_IPS_EP despite the change in applicable PPs) to a FWcPP evaluation without going against the NDcPP as well.

As of today there are no products on the Product Compliant List (PCL), or officially in evaluation, that go against the FWcPP or the IPS EP, so the full real-world implications of this change are yet to be seen. It is interesting to note that although the NDcPP and FWcPP do not support distributed TOEs, the IPS EP does allow different SFRs to be enforced by different distributed TOE components, as long as those components are all capable of meeting the NDcPP or FWcPP requirements on their own.