Monday, July 25, 2022

Challenges and Opportunities

Many of us who have been in the evaluation and certification (validation) business have seen the development not only of security requirements and schemes, but also of how the security ecosystem works. A few weeks ago, I was generously given the opportunity to share some ideas at the EU CSA conference in Brussels. Here is a short summary of the ideas behind that presentation.

What makes a scheme successful?
No scheme will survive without market demand. Just being technically brilliant and formally correct will not make a scheme successful. We have seen quite a number of schemes being established and operated over the years. Quite often, the scheme developers are technicians with a focus on requirements and formalism. However, a successful scheme needs:

  • Market demand (without a demand no use)
  • Credibility (both requirements and scheme operation)
  • Wide recognition in its target areas (geographical and over industries)
  • Reasonable effort (cost and time effective)
  • Availability of competence and resources (mainly personnel)
  • Maintenance (over time and ability to adapt)
  • Pragmatism (not losing touch with reality) 

Market demand is most important. Sometimes, even technically “poor” schemes may turn out to be successful just because they are there to meet a market demand. Any imperfections of a scheme that meets market demand may be fixed over time because it will be used.

Give them the third best to go on with; the second best comes too late, the best never comes.
— Watson-Watt in Louis Brown “Technical and Military Imperatives” (1) 

What are the security trends and why?
The first published security criteria came with the development of the Trusted Computer System Evaluation Criteria (TCSEC) in the U.S., usually called the Orange Book. These requirements were specific to operating systems used within the U.S. DoD to protect classified information, implementing the Bell-LaPadula security model. Later, additional requirements were added for (database) applications, such as the Trusted Database Interpretation of the TCSEC (TDI), and for interconnection, such as the Trusted Network Interpretation of the TCSEC (TNI), along with a whole series of nicely written documents describing maintenance, integrity, audit, etc.
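To make the Bell-LaPadula reference concrete: its two core mandatory access-control rules are "no read up" (the simple security property) and "no write down" (the *-property). The sketch below is a simplified illustration of just those two rules over a linear ordering of sensitivity levels; the full model also includes categories/compartments and discretionary controls, which are omitted here.

```python
# Simplified Bell-LaPadula check: linearly ordered levels only
# (real implementations also compare category/compartment sets).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: no read up.
    A subject may read an object only at or below its own clearance."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property: no write down.
    A subject may write an object only at or above its own level,
    so higher-classified information cannot leak downward."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# A SECRET-cleared subject may read CONFIDENTIAL data but not write to it:
print(can_read("SECRET", "CONFIDENTIAL"))   # True
print(can_write("SECRET", "CONFIDENTIAL"))  # False
```

The asymmetry of the two rules is the point: confidentiality is preserved because information can only flow upward in the lattice of levels.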

The TCSEC evolved from specific requirements toward generic requirements through the creation of TCSEC interpretations. Still, the main problem of the TCSEC was that it combined functionality and assurance: more security functionality came along with higher assurance requirements. The change came with developments in Europe, especially the German Security Evaluation Criteria, which were the first to decouple functionality and assurance and required the developer to describe its security functionality in a 'Security Requirements Document', now called a Security Target. This approach was adopted by the European ITSEC, the U.S. Federal Criteria, and finally the Common Criteria, all of which are very general security criteria rather than product-type-specific requirements: meta-requirements providing 'building blocks' one could choose from and refine for specific products or product types. Only in their application, via Protection Profiles and Security Targets, did they become specific. Having a common base was a major advancement in avoiding fragmentation of criteria and schemes. It meant that criteria would be suitable not only for different products, but also for future products unknown to the criteria developer.

Today we see a trend of moving back to specific requirements, even branch-specific requirements and schemes, while at the same time products are being developed for, and used in, different branches using the idea of Protection Profiles, as first defined in the U.S. Federal Criteria and later adopted by the Common Criteria.

It’s easier to apply and ensure consistent, repeatable, and reproducible testing against specific criteria than against generic meta-criteria, such as the classic Common Criteria. However, it requires fast and effective maintenance of those specific criteria to keep them up to date with the technology. Otherwise it may take years before new products can be evaluated, simply because criteria development usually takes too much time and may not start until products are available and in demand by security-aware customers.

Finally, if different sectors have different criteria, fragmentation will cause additional costs for vendors whose products are used in different markets that each have their own criteria, such as government, telecom, automotive, and the financial industry.

It is obvious to everyone that the pace of the IT industry has changed, with short development cycles of new product versions and new features. The development cycles may easily be shorter than the time necessary for evaluation and certification, meaning only outdated products will be certified. So customers will be using either outdated or uncertified products. Also, an evaluation may not only confirm security but also detect deficiencies that will then be fixed by developers. These fixes should not only be made to outdated versions so they can be certified, but rather (and more importantly) to the newer versions being deployed.

The obvious solution would be to focus both on the development methods and processes as well as on the products. Security is not a property that comes with an evaluation; it is a property that has to be built into the product, and the purpose of the evaluation is to confirm this. This has long been known by the quality management community but seems largely ignored by the security evaluation and certification community. Years ago, in preparation for the new version of the Common Criteria, BSI initiated a project on "predictive assurance" focusing on the development methods and processes (2). Unfortunately, that project was never finished, for various reasons. However, a few other schemes have picked up the idea.

The real value of tests is not that they detect bugs in the code,
but that they detect inadequacies in the methods, concentration,
and skills of those who design and produce the code.

— C. A. R. Hoare, How Did Software Get So Reliable Without Proof? (3) 

So, how can criteria and scheme development be improved? Here are a few suggestions:

  • Strong industry involvement is essential, mainly for input from development processes, developer tools, and new technologies.
  • Product life-cycle is usually fast, which either means fast certifications or certification of the development and maintenance processes.
  • Consider the developer, development processes, and product life-cycle. That’s where assurance actually starts.
  • We don't need criteria for the technology of yesterday, but for the technology of today and for technology we may not even know of yet. So we need criteria that are either generic enough to still apply, or we need very good criteria maintenance.
  • International cooperation and recognition is key. Criteria may not be able to handle all national aspects, but there is still no need to reinvent the wheel.
  • Be pragmatic. Decide on what is good enough and fit for purpose.

Striving to better, oft we mar what's well
— Duke of Albany in Shakespeare's King Lear

(1) Louis Brown, Technical and Military Imperatives, A Radar History of World War II, 1999.
(2) Irmela Ruhrmann, Predictive assurance, BSI, 9 ICCC, Jeju, Korea September 2008.
(3) C. A. R. Hoare, How Did Software Get So Reliable Without Proof?, Industrial Benefit and Advances in Formal Methods‚ Third International Symposium of Formal Methods Europe‚ Oxford‚ UK‚ March 18−22‚ 1996.
