Monday, October 15, 2012

CC and the development of security requirements

The annual International Common Criteria Conference (ICCC) took place in Paris a few weeks ago. In the opening presentation, Mr Patrick Pailloux, Director General of l'Agence Nationale de la Sécurité des Systèmes d'Information (ANSSI), noted that the people developing and maintaining the Common Criteria are very smart people. He then went on to say:

"... globally, our community gets a toolbox which has been tuned for more than thirty years (if we go back to the Orange book), and our priority must be to use this toolbox in the best way to achieve our goal.

If for one reason or another, this toolbox must be reconsidered, I’m sure that we would be able to build another one. But would we be able to federate as many countries and users around a new agreement for this toolbox?


Additionally, the best tools are useless without good craftsmen.
.."



This is true.

It is obvious that very clever people participated in the formation of the Common Criteria: from those involved in the initial Orange Book and the Rainbow Series, to those behind the various national criteria in Europe and Canada, the ITSEC, and the Federal Criteria. The criteria, standards, and methodologies that evolved were possible because of the cooperative exchange and vetting of (good) ideas.

(For those who need a reminder of the evolution of the Common Criteria, please take a look at Helmut Kurth's "Quo Vadis Common Criteria".)

At previous ICCCs, there were many discussions aimed at updating to Common Criteria version 4; a lot of effort was made to modernize the Common Criteria by including topics such as predictive assurance, site certificates, supply chains, cloud computing, assurance when dealing with third-party code, and so on. This was not triggered by an eagerness for change, but by the necessity of recognizing how modern IT development occurs. Remember that many CC requirements were based on the methods of IT development used decades ago, and thus the resulting criteria actually reflect how IT systems were developed during the eighties.

But today's hot topic within the CC community (especially with the CCRA members) is the development, use, and recognition of collaborative Protection Profiles (cPPs). Undoubtedly this is a very important issue; having PPs that are widely endorsed by both government and industry is vital to keeping global, mutual recognition alive and well.

However, the problem of how to achieve such a goal is riddled with unanswered questions that require discussion and agreement on the details. How will directing all our effort to cPP development affect the needed modernization of the CC? Sadly, over the last few years there has been less and less discussion about how the Common Criteria standards themselves should evolve (and in Paris there was no discussion of any proposed evolution of the standard itself). In fact, the latest version (CC 3.1 release 4) was limited to only minor changes.

According to the CCMB's vision statement (item 6), "The CC will be maintained as the toolbox used by the TCs (technical communities) to develop the cPPs." But the level of maintenance, and how it will be provided, remains unclear. Certainly, the level of maintenance of the CC during the past decade gives little confidence in future maintenance.

At the ICCC in Paris, more than one CCRA member nation declared in its presentation that it is moving away from the EAL packages and toward cPPs. These cPPs will include a set of functional and assurance requirements agreed upon by the experts within the associated technical community and ratified by the CCMB. This means starting with generic functional and assurance requirements and ending with a specific cPP tailored to a specific technology. With the exception of the extensive tailoring of assurance requirements, this is not only consistent with the original idea of PPs, but is an excellent way to achieve consistent evaluation results (by providing a least common denominator) for the security functionality of similar products.
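
For a concrete flavor of that tailoring (FIA_UAU.1 is a real component from CC Part 2; the firewall-specific completion below is invented for illustration):

    Generic SFR (CC Part 2): "FIA_UAU.1.1 The TSF shall allow [assignment:
    list of TSF mediated actions] on behalf of the user to be performed
    before the user is authenticated."

    Hypothetical firewall cPP completion: "FIA_UAU.1.1 The TSF shall allow
    passing of ICMP echo requests on behalf of the user to be performed
    before the user is authenticated."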

However, for products that do not -- or cannot -- claim conformance to a cPP, the proposal is that mutual recognition of an evaluation be restricted to EAL2. The argument is that the current standards, with their lack of detail in the methods for some assurance requirements, are inadequate to assure consistency between laboratories and schemes. The implication, and the resulting hope, is that the cPPs will address this deficiency in the CC standards by specifying an appropriate methodology and guidance for each technology type.

To return to the words of Mr Patrick Pailloux: 
"The certification of a security product must assure the user that the best efforts have been done to verify the robustness of the product with its security countermeasures for a given level of resistance and a given level of confidence.
The vulnerability analysis should be at the heart of our concerns and if I had a regret to formulate, it’s that the visibility of this part of the evaluation work is lost in the other analysis required by the CC."



The development of cPPs alone will not fix this problem. 

Punting the problem of a broken tool to multiple technical communities without providing any guidance or ensuring expertise in vulnerability analysis will not solve the problem of inconsistent evaluations. What will instead happen is that each community will solve the vulnerability analysis problem in a different way. If a particular technical community doesn't happen to have an expert in vulnerability analysis, then a potentially false perception of robustness in the procurement community will be perpetuated.

By failing to evolve the tools in our toolbox, we risk losing the opportunity to draw on the experience we have gained over the years.

Currently, the CC standards are maintained by the CCDB. If mutual recognition is limited to EAL2 and cPPs, the CCRA members will most likely have less interest in maintaining or further developing the CC, especially for assurance levels above EAL2.

When a cPP is developed, the experts will draw on their experience of previous evaluations within their technology area. Potentially, technology-specific vulnerability analysis techniques will be captured, and the assurance requirements as we currently know them in the Common Criteria will be used. However, this structure risks losing the ability to transfer knowledge from one technology-specific cPP to another. Certainly there seems to be no mechanism in place to do that today.

Increasing assurance is orthogonal to cost reduction, and the switch to cPPs will not change that. Only by making the tools themselves more efficient can we hope to achieve both goals: increasing assurance and managing costs.

We must fix the known problems with vulnerability analysis (and the lack of detail in the methods for some assurance requirements) and modernize the tools we use to evaluate modern IT development, or else risk perpetuating the problem.

If cPPs are not codified to provide higher assurance, and we restrict most other evaluations to EAL2, then we run the risk of a brain drain: evaluation labs and the CC community at large may lose expert resources. The longer we stay in "low-assurance mode," the more pronounced this problem will become.

What about the assurance requirements? It is true that they are mostly focused on the analysis of documentation: they were designed to ensure that the quality of the documentation needed to perform vulnerability analysis is in place. As vulnerability analysis evolves, so should the supporting assurance requirements.

The vulnerability analysis section of the toolbox includes many methods and techniques, but to apply these properly, the professional analyst requires quality input regarding the TOE itself. After all, we know that black-box testing can only provide the same chance of discovering flaws that an attacker has; white-box testing provides the opportunity to gain an edge over the attacker.
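
As a toy illustration of that edge (the check_pin function and its magic values are invented for this example; the Python below is just a sketch):

    import random

    # An invented component with a flaw that triggers on one specific
    # input: a debug bypass left in by the developer (illustrative only).
    def check_pin(pin: str) -> bool:
        if pin == "0000":      # the flaw: a leftover debug bypass
            return True
        return pin == "7391"   # the intended secret

    # Black-box testing: probe with random inputs and no source knowledge.
    # The odds of hitting the bypass are the same odds an attacker has.
    trials = 1000
    hits = sum(check_pin(f"{random.randrange(10000):04d}") for _ in range(trials))
    print(f"black-box: {hits} accepted out of {trials} random probes")

    # White-box review: with the source in hand (the kind of input that
    # higher assurance levels guarantee), the pin == "0000" branch is
    # found by inspection -- no luck required.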

The point is that we need the complete assurance requirements toolbox when developing cPPs. Increasing levels of assurance (higher EALs) ensure that the information needed for increasing depths of vulnerability analysis is provided to the vulnerability analyst. Without assurance of the quality of the input, the vulnerability analysis may be flawed. Simply removing EALs from cPPs does not remove the need for the corresponding information that is necessary for the vulnerability analysis.
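
As a point of reference, CC Part 3 (v3.1) scales the vulnerability analysis component with the package: AVA_VAN.1 at EAL1, AVA_VAN.2 at EAL2 and EAL3, AVA_VAN.3 at EAL4, AVA_VAN.4 at EAL5, and AVA_VAN.5 at EAL6 and EAL7, with each step presupposing deeper development (ADV) evidence as its input.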

Today, consistency across evaluations may be variable, but there are people in the community with immense experience and the skills to perform quality, high-assurance evaluations. Their skills should be used to provide a better toolbox by mending our broken tools.

Only then can technical communities produce cPPs useful in evaluating products in a way that provides a consistent level of confidence in the CC evaluation itself. Allowing technical communities to develop cPPs without reference to each other will end up providing those who use COTS products to build security solutions with a broken toolbox!

The fear is that by not maintaining and improving the CC, and by continuing to accept low-assurance certificates (either low-assurance cPPs or evaluations at EAL2) that use a specification and methodology known to be flawed, we end up doing the opposite of what we intend -- adding nothing but delay, and eroding trust and confidence in evaluation results.

Meanwhile, knowledge of how to evaluate the robustness and resistance of products at higher assurance levels may well disappear from this community, and we may well lose the ability even to produce higher-assurance cPPs in the future.



by Staffan Persson
Head of the Swedish ITSEF 
and CEO of atsec Germany and Sweden

1 comment:

  1. I too am very concerned about this trend to 'dumb down common criteria'. This trend does not represent a step forward, but a step backward. In attempting to limit evaluations to very basic levels, we are not providing sufficient information that acquiring entities can use to determine whether products meet their security requirements. Some of the proposals for higher-level certifications requiring (or desiring) more detailed information may lead us back to something akin to an A1 evaluation process (as described in the Orange Book).

    Perhaps an alternative direction to dumbing things down might be to provide standard enhanced basic vulnerability testing as part of all evaluations. This activity would involve running common exploit and penetration tools against products as part of the standard certification process. We could treat this as a basic test: if the tools succeed, then the product doesn't get certified (a sketch of such a gate follows at the end of this comment).
    Just a thought...

    Ps. How will this change impact export control?
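
    A minimal sketch of what such a gate might look like (Python; the scan-tool command and its exit-code convention are placeholders invented for illustration, not a real tool):

        import subprocess
        import sys

        # Sketch of a pass/fail certification gate: run a penetration-testing
        # tool against the target and refuse certification on any finding.
        # "scan-tool" and its exit-code convention are invented placeholders.
        def basic_vulnerability_gate(target: str) -> bool:
            result = subprocess.run(["scan-tool", "--target", target])
            # Assumed convention: non-zero exit status means the tool found
            # an exploitable issue, so the product fails the basic test.
            return result.returncode == 0

        if __name__ == "__main__":
            target = sys.argv[1] if len(sys.argv) > 1 else "192.0.2.10"
            ok = basic_vulnerability_gate(target)
            print(f"{target}: {'may proceed' if ok else 'not certified'}")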

