Tuesday, August 6, 2013

How long is a piece of string?

"How long is a piece of string?" is an English idiom: something you say when someone asks how big something is, or how much time something will take, and you cannot give an answer.

This was the phrase that came to mind when we saw the old (2005) NSTISSP#11 guidance from the CCEVS: "The goal established for low assurance products (i.e., EAL 1) is 30-90 days of evaluation time and will cost less than higher assurance (e.g., EAL4) evaluations."

First, an important caveat: since at least 2005, the NIAP have been leaders in trying to address the problem of providing timely information assurance, a problem that has also been raised more widely within the CC community. So please do not read this blog as being aimed particularly at NIAP; it is relevant to any scheme aiming to bound the length of an "evaluation." In fact, the very first key point in the CCMC's Vision Statement is: "The general security level of general ICT COTS certified products needs to be raised without severely impacting price and timely availability of these products."

The issue of evaluations taking too long is a perennial problem that the CC community hopes to address together. Of course, the U.S. scheme is amongst the schemes represented in the community, and the NIAP are lucky enough to have their own national feedback in the form of reviews and studies that are part of the checks and balances in place in the U.S. For example, in 2006 the IDA published, "Review of the National Information Assurance Partnership (NIAP)."

This report gave attention to the efficiency and effectiveness of the U.S. scheme and includes the statement: "Evaluation costs are too high and they take too long. These are common complaints, particularly from small businesses." The topic of appropriate metrics is pervasive throughout the report. In fact, just after the report was published, U.S. labs saw a flurry of activity from NIAP aiming to identify and establish appropriate measures, amongst them the elapsed time for an evaluation project.

This is an important metric, BUT it needs to be defined correctly and appropriately if it is to be useful.

I contend that what NIAP really meant to say in their old NSTISSP#11 guidance was:

"The goal established for low assurance products (i.e., EAL 1) is 30-90 days of validation time and will cost less than higher assurance (e.g., EAL4) evaluations."

Consider that the evaluation portion of a project is the work undertaken by the lab in close cooperation with the vendor. How long this takes depends on how you define "evaluation project":
  • For developers, it starts the moment they decide that they may need to perform a CC project.
  • For labs and consultants, it starts the moment a developer hires them.
  • For schemes, it starts when a project is formally "entering evaluation."
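The divergence between these start points can be made concrete with a small sketch. The milestone dates below are invented purely for illustration (real durations vary widely, as discussed later): the point is that the same project yields very different "evaluation length" figures depending on whose clock you use.

```python
from datetime import date

# Hypothetical milestone dates for a single CC project (illustrative only;
# real-world durations depend on many factors).
milestones = {
    "vendor_decision":    date(2013, 1, 7),   # developer decides a CC project may be needed
    "lab_engaged":        date(2013, 4, 1),   # developer hires the lab and consultants
    "entered_evaluation": date(2013, 11, 5),  # project formally registered with the scheme
    "certificate_issued": date(2014, 2, 3),
}

def elapsed_days(start_key: str) -> int:
    """Length of the 'evaluation project' measured from a given start point."""
    return (milestones["certificate_issued"] - milestones[start_key]).days

for key in ("vendor_decision", "lab_engaged", "entered_evaluation"):
    print(f"{key}: {elapsed_days(key)} days")
# vendor_decision: 392 days, lab_engaged: 308 days, entered_evaluation: 90 days
```

With these (made-up) dates, the scheme reports a tidy 90-day project while the developer has lived with it for over a year, which is precisely the ambiguity the rest of this post explores.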

Let's consider a typical project, identifying the following key "phases" of an evaluation project from the point of view of the sponsor of a project.

1. Decision: The decision process when a vendor identifies the requirements for the "project:" factors include their target market and the associated requirements for CC compliance, which scheme to use, which national policies apply, identifying a laboratory and any needed consultancy, defining the TOE, and selecting appropriate PPs. Depending on prevalent policy, this phase may include the need to develop a suitable PP before a product can be evaluated.

2. Review: A review of the product against the identified requirements, which is often made with a consultant. This may mean that organizational processes or product changes need to be made. The axiom, "the earlier in the cycle, the cheaper it is to fix" may well apply.

3. Evaluation: The laboratory performs the evaluation. Some schemes are closely involved with this task, maintaining close oversight of the laboratory; others identify particular checkpoints (for example, NIAP's Validation Oversight Reviews (VORs), a type of formal technical review).

4. Validation: The scheme performs the validation of the lab's work. Historically this was performed hand-in-hand with the evaluation work performed by the lab. This phase usually includes discussion and negotiation of the technical and policy details, and interpretations are issued and managed. Once the validator is satisfied, the process culminates with the issuance of a certificate.

Here is a typical waterfall view. I don't for a moment think that any of the community runs projects like this! Notice that the timeline is in quarters of a year (i.e., 3 months). The durations given are typical, but in real life are dependent on many factors and can be much shorter or longer.

So how are projects currently organized?

Assuming that an appropriate PP exists, or none is specified, in atsec's experience we are used to seeing a model like this:

For developers with complex international markets, the decision phase can be quite lengthy. Once the basic parameters are set, getting the consultants in early can help shorten the project and reduce some of the project risks. Some technologies or products are naturally more complex and take longer (the shaded areas in Evaluation and Validation). The "Review" phase can be significantly lengthened if this is the first time a developer's project team has performed a CC project, or if significant process or product changes are needed.

One important point to remember is that one of the project dependencies is the availability of scheme (validator) resources. In the past, we have seen projects in more than one scheme delayed by the availability of those resources.

What about schemes that require a PP that doesn't yet exist?

NIAP have been demanding that the community write a PP first (and cPPs for the wider CC community are part of the Vision). As we know, developing a PP is a lengthy process; at the moment, it typically takes close to two years. Of course, as more and more PPs and cPPs become available, this issue will diminish. The quality of the PP, and how much interpretation or guidance its application needs, will affect all subsequent projects.

This scenario adds huge costs for developers and immensely extends project timelines; given the typical product life-cycle, it means that for many newly issued certificates the related products are no longer current (a problem that already existed, but that this scenario exacerbates).

Consider also that the real world is not so simple. For developers with a defined certification strategy, several projects, and not just CC projects, may need to be coordinated and budgeted. When timescales are variable, program management can become very difficult indeed.

What will a defined validation period mean?

Granted, at lower assurance and with improved processes, the time for evaluation will reduce. But consider the many developers that address international markets and must, or want to, demonstrate higher assurance in the same project, or that run a NIAP project concurrently with other projects: for them the natural approach is to have the lab perform the evaluation ahead of the formal "entry" into the NIAP program. In effect, NIAP's 90 days becomes an additional overhead as the participants strive to reduce the risk of last-minute surprises.

So making such a mandate may do little more than add an extra 90 days to the schedule, reduce the involvement of the validator in the evaluation work, and increase project costs for the developer.

An additional factor in the efficiency of this model is the documentation, consistency, and dissemination of interpretations and policy changes. Without developers and labs having access to a formal set of precedent rulings, prior adjudications, and interpretations, the validation phase will inevitably extend, and the result may be confusion, inconsistent application of precedents, and, perhaps worse, inconsistent and even unfair evaluations. As new PPs are applied, the need for interpretations and new policy, and the potential for inconsistent application, inevitably increase. This may in turn result in new versions of PPs, and the need to consider transition plans throughout the review and decision phases. The "old" CC model at least had the "interpretations" mechanism at both local and international levels, but under the new NIAP paradigm many policies are not public, are disseminated to labs directly, and may not be formally and reliably fed back into the initiating technical community.

Without existing PPs, as is currently the case for many developers, the project will take at least 3 years!

What about the scheme metrics for the length of an evaluation?

Well, it depends how the scheme defines what an "evaluation project" is.

NIAP, like many schemes, define the start point as when an evaluation project is formally registered with the scheme.

So, for the scheme, the measurement is often made from the time an application to the scheme occurs until the certificate is issued. The time is variable, but over the years many schemes have found the need to set limits. These limits have been imposed to address abuse (for example, using the official "In evaluation" status as a substitute for the certificate when dealing with procurement requirements, or a ridiculously long project consuming valuable scheme resources). NIAP's new policy is to reduce the length of this phase of the project to 90 days.

In order to meet such a timescale for the validation phase, the bulk of the evaluation work must be completed first. (It is the process of evaluation, or pre-evaluation, that identifies all the inconsistencies that need to be cleared up in order to keep the "validation phase" to a minimum length.)

How smoothly a validation goes (and hence how fast it can be done) depends on the ability to engage with the scheme: when project members can ask about interpretations, the fine details of policy, and so on, "surprises" at the end of the process can be forestalled. This is especially important when a model is used in which the validation runs very quickly as a "check" at the end of the project. With the validator assigned at the beginning of the lab engagement, many questions and issues can easily be ironed out.

This kind of project model is adopted by many conformance testing programs. Let's use the CMVP (NIST's crypto module validation program) as an example.

Look what happened! Here the final "validation" phase became the main variable in project length, which, due to resource variability in the program, has fluctuated in recent years between three and ten months. Could this be a lesson that CC schemes should pay attention to?


Summary: Just how long is a piece of string?

The answer is:
  • For the scheme, NIAP: 90 days.
  • For the rest of the community: the same length it was before, plus 90 days.

Thoughts, comments and feedback are welcome from the CC community.

P.S. If you are interested in this topic, I will highlight this year's accepted ICCC paper by Courtney Cavness, entitled "Faster Evaluations: A Matter of Timing." Check the ICCC program for the abstract and timing.

By Fiona Pattinson

1 comment:

  1. The "efficiency and effectiveness" requirements are only being considered from the Scheme's point of view. There is no consideration for the requirements of the other organisations that are involved.

