On July 16th, Tony and I attended the bi-annual NIAP Validator/CCTL workshop. As a newly minted candidate CCTL, this was our first workshop of its kind, and we have to say we were impressed by the level and quality of discussion.

The idea behind the workshop is to get NIAP, the validators (from both Aerospace and Mitre) and the CCTLs (under the US scheme) in one room to discuss recent changes/clarifications to process/PPs/rulings and upcoming changes to PPs/process, and, perhaps most importantly, to brainstorm on the tough problems NIAP is grappling with (think virtualization, equivalency, objective and repeatable requirements, etc.).

Below are some highlights from the discussions that might be of interest to readers of this blog:

Using PP based evals to create DoD STIGs: NIAP is working to make creating a STIG easier, and this is good news for vendors. Creating STIGs is by no means a simple task, and if it can be made easier by doing a lot of the work as part of a CC evaluation, it is a win-win. The Mobile Device Management PP was the first PP to include a DoD Annex mapping CC requirements to STIG requirements. A vendor taking a product through evaluation against this PP could choose to claim conformance to the DoD Annex as well. If such a claim is made, the results of the CC evaluation can be used to formulate the STIG.

While this is a step in the right direction, Acumen would like to see more collaboration between NIAP and DISA, eventually leading to the DoD UCAPL IA requirements being folded into the CC evaluation. It makes little sense for vendors to have to do what are essentially two IA evaluations (DoD UCAPL IA and CC) for the same customer (USG).

CCRA: As indicated on the Common Criteria portal, all 26 nations signatory to the CCRA have agreed to the new CCRA. This is a significant accomplishment that took a lot of time, effort and diplomacy.

NIAP PP evals and FIPS 140 validations: Under current rules, vendors ONLY need CAVP algorithm certificates to finish a NIAP PP based CC eval. However, NIAP indicated that down the line (once CMVP queue times come down) there could be a requirement for a completed FIPS 140 validation.

Entropy testing: This has been a favorite topic for at least a year and a half, and for good reason. Entropy is a difficult topic, and it becomes even more difficult when it comes to formulating objective test requirements. However, there might finally be light at the end of the tunnel. The expectation is that SP 800-90B will be finalized next year (with an updated draft sometime this year), and as soon as that is done, CAVP will release an entropy test tool (something they are already working on). Once this happens, entropy testing will essentially be treated like any other algorithm testing and will have algorithm certificate numbers; NIAP will then just point to the CAVP cert. As per current thinking, NIAP will still require the Entropy Assessment Report (EAR), but believes it will be significantly different, requiring much less information.
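
For readers wondering what an “objective and repeatable” entropy test might look like in practice, below is a rough Python sketch of a most-common-value min-entropy estimate in the spirit of SP 800-90B. This is purely illustrative (the constant and exact formula are our simplification and may well differ from whatever CAVP’s tool eventually implements), and os.urandom merely stands in for raw noise-source samples.

```python
# Toy most-common-value (MCV) min-entropy estimate, in the spirit of SP 800-90B.
# Illustrative only -- not a substitute for a real entropy assessment.
import math
import os
from collections import Counter


def mcv_min_entropy(samples: bytes) -> float:
    """Estimate min-entropy per sample using the most-common-value method."""
    n = len(samples)
    # Observed probability of the most frequent sample value.
    p_hat = Counter(samples).most_common(1)[0][1] / n
    # Upper confidence bound on that probability (99% bound, per our simplification).
    p_upper = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_upper)


if __name__ == "__main__":
    data = os.urandom(100_000)  # stand-in for raw noise-source samples
    print(f"Estimated min-entropy: {mcv_min_entropy(data):.3f} bits/byte")
```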

Another (pragmatic) clarification was provided around entropy usage in the ANSI X9.31 RNG. ANSI X9.31 only allows seeds of up to 128 bits. This creates an issue when using the AES-192 or AES-256 options for X9.31, since enough entropy (to meet key strength requirements) cannot be provided by the seed alone. Under such circumstances, IAD will accept the key contributing entropy, as long as the key provides full entropy and is treated and secured the same way as the rest of the entropy.
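
To make the ruling concrete, here is a minimal (non-production) Python sketch of the AES-256 variant of the X9.31 generator, using the pycryptodome package, with os.urandom standing in for the TOE’s entropy source. The point to notice is that the state V is capped at one AES block (128 bits), so the only way to reach 256 bits of seeding entropy is to load the key K from the entropy source as well and protect it like seed material.

```python
# Illustrative ANSI X9.31 (AES-256 variant) sketch -- not production code.
import os
import struct
import time
from Crypto.Cipher import AES  # pycryptodome


def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


class X931Aes256:
    def __init__(self) -> None:
        # Both K (256 bits) and V (128 bits) are loaded from the entropy source.
        # Per the ruling, K must be full entropy and protected like seed material.
        self._key = os.urandom(32)   # K: 256-bit AES key carrying entropy
        self._v = os.urandom(16)     # V: the X9.31 seed, capped at one AES block
        self._cipher = AES.new(self._key, AES.MODE_ECB)

    def random_block(self) -> bytes:
        dt = struct.pack(">QQ", int(time.time() * 1_000_000), 0)  # date/time vector DT
        i = self._cipher.encrypt(dt)                 # I = E_K(DT)
        r = self._cipher.encrypt(_xor(i, self._v))   # R = E_K(I xor V)
        self._v = self._cipher.encrypt(_xor(r, i))   # V = E_K(R xor I)
        return r


if __name__ == "__main__":
    rng = X931Aes256()
    print(rng.random_block().hex())
```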

PP maintenance pilot: NIAP plans to move away from errata to regularly maintained revisions, using the Mobility PPs as a pilot. The current proposal is for minor updates to be released “as-needed” and major releases annually. It should be noted that the Mobility PPs, due to the nature of the technology, will be revved annually; other PPs are expected to have less frequent revisions.

Equivalency argument: NIAP wants to move away from the subjective aspects of determining equivalency between the various models being evaluated as part of the same effort. This was discussed in a breakout session. While no concrete criteria were finalized, the general theme of the discussion revolved around identifying a few exclusionary criteria (e.g., if the binaries are not the same across the models/configurations, equivalency cannot be claimed), vendor attestation, and publicly posting the rationale for equivalency (truth in marketing).

Apps on OS PP: As part of this discussion, one of the requirements put forward was that apps can only be evaluated on platforms that have themselves gone through an evaluation against an appropriate PP. Since there isn’t a PP for desktop/server OSes, this requirement would in the near term apply only to mobile apps. While the reasoning behind this requirement is understandable, the concern (shared by Acumen and a number of participants) is that app vendors are now beholden to platform vendors to evaluate their devices/OS. This becomes even murkier once you start looking at individual functionality and whether the platform vendor has evaluated all the functionality that is leveraged by the app. Considering that only one mobile vendor has evaluated their platform, this is a significant hurdle that mobile app vendors will have to overcome.

So there you have it, folks. As you can see, it was a good day’s worth of discussion that we hope will continue. The next workshop is in October, and Acumen will report back with the highlights again… so watch this space.
If you have any questions, please reach out to us via email, Facebook, Twitter or good ol’ phone call.