Monday, December 4, 2023

A FIPS 140-3 compliant hybrid KEM algorithm

Hybrid KEM - Kyber & X25519

In addition to the sole use of Kyber KEM, a hybrid mechanism using X25519 can be devised that acts as a drop-in replacement for Kyber KEM. In this case, a PQC algorithm is combined with a classic key establishment algorithm. The basis is an enhancement of the Kyber KEM key generation, encapsulation, and decapsulation algorithms as follows.

When using the hybrid KEX algorithm, the hybrid variants outlined in the subsequent subsections are used instead of the standalone KEM encapsulation and decapsulation operations. In addition, the Kyber KEX data along with the X25519 data is exchanged in the same manner as outlined for the standalone Kyber KEX. Thus, the KEX operation is not reiterated here.

The presented algorithm ensures that even if one algorithm is compromised, the resulting shared secret is still cryptographically strong, retaining the strength of the uncompromised algorithm. Note, however, that Kyber may offer a cryptographic strength of up to 256 bits when using Kyber 1024, whereas the cryptographic strength of X25519 is significantly lower - between 80 and 128 bits - depending on the analysis approach.

Hybrid KEM Key Generation

As part of the hybrid KEM key generation, the following steps are performed:

  1. Generation of the Kyber key pair yielding the Kyber pk_kyber and sk_kyber.
  2. Generation of the X25519 key pair yielding the X25519 pk_x25519 and sk_x25519.

Both public keys and both secret keys are maintained together so that every time the hybrid KEM requires a public key, the Kyber and X25519 public keys are provided. The same applies to the secret keys.

Thus the following holds:

  • pk_hybrid = pk_kyber || pk_x25519
  • sk_hybrid = sk_kyber || sk_x25519

Both pk_hybrid and sk_hybrid are the output of the hybrid KEM key generation operation.
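The key generation steps above can be sketched as follows. The two primitive key generators are hypothetical stand-ins (a real implementation would call a Kyber library and an X25519 library); only the concatenation logic of the hybrid construction is meaningful here.

```python
import hashlib
import os

# Hypothetical stand-ins for the real primitives: they only model the
# shape of the key material, NOT Kyber or X25519 themselves.
def kyber_keygen_stub():
    sk_kyber = os.urandom(32)
    pk_kyber = hashlib.sha3_256(sk_kyber).digest()
    return pk_kyber, sk_kyber

def x25519_keygen_stub():
    p, g = 2**255 - 19, 9  # toy modular DH group, not Curve25519
    sk = int.from_bytes(os.urandom(32), "big") % p
    pk = pow(g, sk, p)
    return pk.to_bytes(32, "big"), sk.to_bytes(32, "big")

def hybrid_keygen():
    pk_kyber, sk_kyber = kyber_keygen_stub()     # step 1
    pk_x25519, sk_x25519 = x25519_keygen_stub()  # step 2
    # pk_hybrid = pk_kyber || pk_x25519
    # sk_hybrid = sk_kyber || sk_x25519
    return pk_kyber + pk_x25519, sk_kyber + sk_x25519

pk_hybrid, sk_hybrid = hybrid_keygen()
```

With real parameter sets both halves have fixed, known lengths (e.g. a 32-byte X25519 public key), so the peer can split pk_hybrid unambiguously.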

Hybrid KEM Encapsulation

The hybrid KEM encapsulation applies the following steps using the input of the hybrid KEM public key pk_hybrid:

  1. Invocation of the Kyber encapsulation operation to generate the Kyber shared secret ss_kyber and the Kyber ciphertext ct_kyber using the pk_kyber public key presented with pk_hybrid.
  2. Generation of an ephemeral X25519 key pair pk_x25519_e and sk_x25519_e.
  3. Invocation of the X25519 Diffie-Hellman operation with the X25519 public key pk_x25519 provided via pk_hybrid and the ephemeral secret key sk_x25519_e. This generates the shared secret ss_x25519.
  4. Secure deletion of the sk_x25519_e ephemeral secret key.

The operation returns the following data:

  • Public data: ct_hybrid = ct_kyber || pk_x25519_e
  • Secret data: ss_hybrid = ss_kyber || ss_x25519

The data ct_hybrid is to be shared with the peer that performs the decapsulation operation.

On the other hand, ss_hybrid is the raw shared secret obtained as part of the encapsulation operation and must remain secret. It is processed with a KDF as outlined in the section Hybrid KEM Shared Secret Derivation below.
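The four encapsulation steps can be sketched as follows. The KEM and DH primitives are toy stand-ins (hypothetical placeholders, not real Kyber or X25519); what the sketch shows is the data flow and the construction of ct_hybrid and ss_hybrid.

```python
import hashlib
import os

P, G = 2**255 - 19, 9  # toy modular DH group standing in for X25519

def kyber_encaps_stub(pk_kyber):
    # Toy KEM: derive a shared secret from a random value and the
    # public key; NOT Kyber, only the data flow matters here.
    r = os.urandom(32)
    mask = hashlib.sha3_256(b"mask" + pk_kyber).digest()
    ct_kyber = bytes(a ^ b for a, b in zip(r, mask))
    ss_kyber = hashlib.sha3_256(r + pk_kyber).digest()
    return ss_kyber, ct_kyber

def hybrid_encaps(pk_hybrid):
    pk_kyber, pk_x25519 = pk_hybrid[:32], pk_hybrid[32:]
    # Step 1: Kyber encapsulation against pk_kyber
    ss_kyber, ct_kyber = kyber_encaps_stub(pk_kyber)
    # Step 2: generate an ephemeral DH key pair
    sk_e = int.from_bytes(os.urandom(32), "big") % P
    pk_x25519_e = pow(G, sk_e, P).to_bytes(32, "big")
    # Step 3: DH with the peer's static public key
    ss_x25519 = pow(int.from_bytes(pk_x25519, "big"), sk_e, P).to_bytes(32, "big")
    # Step 4: securely delete the ephemeral secret key
    del sk_e
    ct_hybrid = ct_kyber + pk_x25519_e  # public data
    ss_hybrid = ss_kyber + ss_x25519    # secret data
    return ct_hybrid, ss_hybrid

# Demo recipient key pair (same toy constructions)
sk_kyber = os.urandom(32)
pk_kyber = hashlib.sha3_256(sk_kyber).digest()
sk_x = int.from_bytes(os.urandom(32), "big") % P
pk_x = pow(G, sk_x, P).to_bytes(32, "big")
ct_hybrid, ss_hybrid = hybrid_encaps(pk_kyber + pk_x)
```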

Hybrid KEM Decapsulation

The hybrid KEM decapsulation applies the following steps using the input of the hybrid KEM secret key sk_hybrid and the public data resulting from the hybrid KEM encapsulation operation, ct_hybrid:

  1. Invocation of the Kyber decapsulation operation to generate the Kyber shared secret ss_kyber by using ct_kyber present in ct_hybrid and the Kyber secret key sk_kyber found in sk_hybrid.
  2. Invocation of the X25519 Diffie-Hellman operation with the X25519 secret key sk_x25519 provided via sk_hybrid and the ephemeral public key pk_x25519_e provided via ct_hybrid which returns the shared secret ss_x25519.

The operation returns the following data:

  • Secret data: ss_hybrid = ss_kyber || ss_x25519

The data of ss_hybrid is the raw shared secret obtained as part of the decapsulation operation and must remain secret - it is the same data as calculated during the encapsulation step. It is processed with a KDF as outlined in the section Hybrid KEM Shared Secret Derivation below.
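The decapsulation steps, and the requirement that both peers arrive at the same ss_hybrid, can be sketched end to end with toy stand-in primitives (hypothetical placeholders, not real Kyber or X25519):

```python
import hashlib
import os

P, G = 2**255 - 19, 9  # toy modular DH group standing in for X25519

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def kyber_decaps_stub(sk_kyber, ct_kyber):
    # Toy KEM inverse of the toy encapsulation: recover r and rederive
    # the shared secret. NOT Kyber.
    pk_kyber = hashlib.sha3_256(sk_kyber).digest()
    r = _xor(ct_kyber, hashlib.sha3_256(b"mask" + pk_kyber).digest())
    return hashlib.sha3_256(r + pk_kyber).digest()

def hybrid_decaps(sk_hybrid, ct_hybrid):
    sk_kyber, sk_x25519 = sk_hybrid[:32], sk_hybrid[32:]
    ct_kyber, pk_x25519_e = ct_hybrid[:32], ct_hybrid[32:]
    # Step 1: Kyber decapsulation
    ss_kyber = kyber_decaps_stub(sk_kyber, ct_kyber)
    # Step 2: DH with the sender's ephemeral public key
    ss_x25519 = pow(int.from_bytes(pk_x25519_e, "big"),
                    int.from_bytes(sk_x25519, "big"), P).to_bytes(32, "big")
    return ss_kyber + ss_x25519  # ss_hybrid

# Roundtrip check: the sender-side toy operations mirror the stubs above.
sk_kyber = os.urandom(32)
pk_kyber = hashlib.sha3_256(sk_kyber).digest()
d = int.from_bytes(os.urandom(32), "big") % P
sk_hybrid = sk_kyber + d.to_bytes(32, "big")
pk_x25519 = pow(G, d, P).to_bytes(32, "big")

r = os.urandom(32)
ct_kyber = _xor(r, hashlib.sha3_256(b"mask" + pk_kyber).digest())
e = int.from_bytes(os.urandom(32), "big") % P
ct_hybrid = ct_kyber + pow(G, e, P).to_bytes(32, "big")
ss_hybrid_enc = (hashlib.sha3_256(r + pk_kyber).digest()
                 + pow(int.from_bytes(pk_x25519, "big"), e, P).to_bytes(32, "big"))

assert hybrid_decaps(sk_hybrid, ct_hybrid) == ss_hybrid_enc
```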

Hybrid KEM Shared Secret Derivation

To obtain a shared secret of arbitrary length that can be used as key material, a key derivation function is used as allowed by SP800-56C rev 2 section 2:

  • The chosen KDF is based on SP800-108 rev 1.
  • In addition, the input to the KDF is formatted such that the entire hybrid KEM construction is compliant with SP800-56C rev 2, assuming that Kyber KEM is the approved algorithm and X25519 provides an auxiliary key agreement mechanism. Thus, section 2 of SP800-56C rev 2 with its requirement Z' = Z || T is fulfilled by defining that the "standard" shared secret Z is provided by Kyber and the auxiliary shared secret T is provided by X25519.

Considering that Kyber uses SHAKE / SHA-3 in its internal processing, the selected KDF is KMAC256 as defined in SP800-108 rev 1. KMAC is invoked as follows:

        KMAC256(K = ss_hybrid,
                X = ct_hybrid,
                L = requested SS length,
                S = "Kyber X25519 KEM SS")

When considering the structure of ss_hybrid and ct_hybrid, the KDF operates on the following specific data:

        KMAC256(K = ss_kyber || ss_x25519,
                X = ct_kyber || pk_x25519_e,
                L = requested SS length,
                S = "Kyber X25519 KEM SS")

The KMAC customization string S is selected arbitrarily and can contain any string including the NULL string.

The result of the KDF is intended to be usable as key material for other cryptographic operations. The derived key material combines the individual security strengths of both Kyber and X25519: by concatenating the individual shared secret values as input to the KDF, the KDF output retains the security strength of one algorithm even if the respective other algorithm is broken.
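The derivation step can be sketched as follows. Note the stand-in: Python's standard library provides no KMAC256, so SHAKE-256 over a labeled input merely models the (K, X, L, S) call structure shown above; a compliant implementation must invoke KMAC256 as defined in SP800-108 rev 1.

```python
import hashlib

def hybrid_kdf(ss_kyber, ss_x25519, ct_kyber, pk_x25519_e, out_len):
    """Derive key material from the hybrid KEM outputs (sketch only)."""
    k = ss_kyber + ss_x25519    # K = ss_hybrid (Z' = Z || T per SP800-56C rev 2)
    x = ct_kyber + pk_x25519_e  # X = ct_hybrid
    s = b"Kyber X25519 KEM SS"  # S = customization string
    # Stand-in for KMAC256(K=k, X=x, L=out_len*8, S=s): SHAKE-256 is used
    # here only because hashlib has no KMAC implementation.
    return hashlib.shake_256(s + k + x).digest(out_len)
```

Both peers feed in the same ct_hybrid and ss_hybrid, so they derive identical key material of the requested length.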

Hybrid KEX Algorithm

Using the hybrid KEM algorithm outlined in the preceding subsections, the hybrid KEX algorithm as specified in the documentation of the secure connection approach can be obtained by using the Kyber KEX approach outlined at the beginning and applying the following changes:

  1. Replace all occurrences of pk with pk_hybrid.
  2. Replace all occurrences of sk with sk_hybrid.
  3. Replace all occurrences of ss with ss_hybrid.
  4. Replace all occurrences of ct with ct_hybrid.
  5. Replace all invocations of the Kyber standalone functions (key generation, encapsulation, decapsulation) with their respective hybrid variants outlined above.

This implies that the hybrid KEM as well as the hybrid KEX algorithms are usable as a direct drop-in replacement for the standalone Kyber algorithm use case. The only difference is that the resulting data is larger as it contains the X25519 data as well.

You can download a PDF version of the process here.

An implementation of both hybrid KEM and hybrid KEX is provided here.

Friday, November 17, 2023

PQC: Kyber and Dilithium - State of the (Draft) Standards

by Stephan Mueller


On August 24, 2023, NIST published the first drafts of:

  • FIPS 203 specifying Module-Lattice-based Key-Encapsulation Mechanism (ML-KEM) which is based on CRYSTALS Kyber;
  • FIPS 204 specifying Module-Lattice-Based Digital Signature (ML-DSA) which is based on CRYSTALS Dilithium; and
  • FIPS 205 specifying Stateless Hash-Based Digital Signature (SLH-DSA) which is based on SPHINCS+.

On November 15, 2023, NIST announced that the three algorithms will be available for testing at the ACVP Demo service. During the course of the development of both Kyber and Dilithium reference implementations, NIST developers reached out to atsec to compare intermediate results of both algorithms with implementations available to atsec. This comparison covered all data calculated during the intermediate steps of the processing, including:


  • all steps of the key generation processing of ML-KEM.Keygen and K-PKE.KeyGen;
  • all steps of the key encapsulation processing of ML-KEM.Encaps and K-PKE.Encrypt;
  • all steps of the key decapsulation processing of ML-KEM.Decaps and K-PKE.Decrypt;
  • ensuring that all Kyber types of ML-KEM-512, ML-KEM-768, and ML-KEM-1024 are subject to the comparison work.


  • all steps of the key generation processing of ML-DSA.Keygen;
  • all steps of the signature generation processing of ML-DSA.Sign;
  • all steps of the signature verification processing of ML-DSA.Verify;
  • ensuring that all Dilithium types of ML-DSA-44, ML-DSA-65, and ML-DSA-87 are covered by the comparison analysis.

The implementation used by atsec was leancrypto, which was based on the Round 3 submission for CRYSTALS Kyber and CRYSTALS Dilithium at the time the collaboration with NIST was conducted. The NIST team as well as atsec identified several issues in the FIPS 203 and FIPS 204 draft standards, which are listed in their entirety below. The NIST team acknowledged that the respective issues will be eliminated in updates to both standards. This implies that developers using the current draft standards should be aware of those issues and upcoming modifications when basing their implementation on the respective FIPS draft standards.

The list of issues also shows their resolution at the time of writing. Please note that these updates are neither specified by the current standards nor endorsed by NIST yet. These changes were implemented in leancrypto and led to consistent results compared to the implementations developed by NIST that are likely to be used as the ACVP reference implementations.


FIPS 203 (ML-KEM):

  • Algorithm 12 step 19 shows the multiplication of the final part of the key generation: t = As + e. Step 19 specifies that AHat has to be used for the operation. However, in this step, the transposed version of AHat has to be used. This modification brings the FIPS 203 specification in line with the Round 3 submission of the CRYSTALS Kyber algorithm.
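The effect of using AHat instead of its transpose can be illustrated with a toy integer matrix (plain integer arithmetic for illustration only, not Kyber's polynomial ring):

```python
# Toy 2x2 example: A*s + e and A^T*s + e differ unless A is symmetric,
# so an implementation using the matrix directly where its transpose is
# required produces a mismatched key.
A = [[1, 2],
     [3, 4]]
s = [5, 6]
e = [1, 1]

def matvec(m, v):
    # Matrix-vector product over plain integers.
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

t_wrong = [a + b for a, b in zip(matvec(A, s), e)]             # A*s + e
t_right = [a + b for a, b in zip(matvec(transpose(A), s), e)]  # A^T*s + e
assert t_wrong != t_right
```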


FIPS 204 (ML-DSA):

  • The size of the private key for all Dilithium types is 256 bits larger. This is due to the enlargement of the hash of the public key referenced as tr, which was not taken into account by the draft FIPS 204.
  • The size of the signature for ML-DSA-65 is larger by 128 bits and for ML-DSA-87 by 256 bits. This is due to the increase in the size of c-tilde in FIPS 204, which was not considered in the calculation of the signature size.
  • Algorithm 2 specifies the size of the signature output to be B^(32....), which actually needs to be B^((lambda / 4)...) due to the increase of the c-tilde variable. Note that this change is already applied to the auxiliary algorithm specifications given in chapter 8 of FIPS 204.
  • Algorithm 3 requires the same change for the specification of the input signature as given for Algorithm 2 above.

An implementation that is compliant with the NIST implementation, including all the mentioned fixes, is provided with leancrypto. It allows developers to compile both Kyber and Dilithium in a debug mode where the calculation results of each step of the key generation, Kyber encapsulation and decapsulation, as well as the Dilithium signature generation and verification can be displayed. This allows other developers to compare their implementations against leancrypto. The following steps have to be taken to obtain the debug output after fetching the library from the provided link and making sure the meson build system is available:


  1. Setup of the build directory: meson setup build
  2. Configure Kyber debug mode: meson configure build -Dkyber_debug=enabled
  3. Compile the code: meson compile -C build
  4. Execute the test tool providing the output of Kyber, ML-KEM-1024: build/kem/tests/kyber_kem_tester_c
  5. To obtain the output for ML-KEM-768, enable it: meson configure build -Dkyber_strength=3
  6. Compile the code: meson compile -C build
  7. Execute the test tool providing the output of Kyber, ML-KEM-768: build/kem/tests/kyber_kem_tester_c
  8. To obtain the output for ML-KEM-512, enable it: meson configure build -Dkyber_strength=2
  9. Compile the code: meson compile -C build
  10. Execute the test tool providing the output of Kyber, ML-KEM-512: build/kem/tests/kyber_kem_tester_c


  1. Setup of the build directory (if it was not already set up for the Kyber tests): meson setup build
  2. Configure Dilithium debug mode: meson configure build -Ddilithium_debug=enabled
  3. Compile the code: meson compile -C build
  4. Execute the test tool providing the output of Dilithium, ML-DSA-87: build/signature/tests/dilithium_tester_c
  5. To obtain the output for ML-DSA-65, enable it: meson configure build -Ddilithium_strength=3
  6. Compile the code: meson compile -C build
  7. Execute the test tool providing the output of Dilithium, ML-DSA-65: build/signature/tests/dilithium_tester_c
  8. To obtain the output for ML-DSA-44, enable it: meson configure build -Ddilithium_strength=2
  9. Compile the code: meson compile -C build
  10. Execute the test tool providing the output of Dilithium, ML-DSA-44: build/signature/tests/dilithium_tester_c

The test tool output is segmented into the key generation steps, Dilithium signature generation and verification steps, as well as Kyber encapsulation and decapsulation steps. The output specifies the mathematical operation whose result is shown. When displaying a vector, one line is used. When displaying a matrix, one row of the matrix is displayed per line. This implies that as many lines are printed as there are rows in the matrix.

The debug logging information was used as a basis for the discussion with the NIST development team to verify that both implementations, i.e. the NIST reference implementation and leancrypto, correspond.

Considering that the FIPS 203 draft also specifies a minimum input validation in sections 6.2 and 6.3, those checks are implemented with leancrypto in the function kyber_kem_iv_sk_modulus. The other checks requiring the size verification of the input data are implicit due to the used data types forcing the caller to provide exactly the required amount of data.

leancrypto can be found here:

Wednesday, November 15, 2023

atsec at the PCI Community Meeting 2023

atsec participated in the PCI (Payment Card Industry) Security Standards Council 2023 Asia-Pacific Community Meeting held in Kuala Lumpur, Malaysia, on 15 and 16 November and hosted a booth.

atsec’s principal consultant Di Li provided a presentation on “Our 'Key' Experience in PIN Security / P2PE / FIPS 140-3.”

A short summary of the presentation is as follows:
Regarding key generation, the paper discusses the generation requirements and methods defined in each of the three standards, compares the differences, and provides a rationale for why each standard requires a different approach. The section on key distribution and key establishment explores the different methods of securely transferring a key from one party to another. The paper defines each of these methods and provides common scenarios where they apply. The paper also provides several methods for key destruction, such as physical destruction and logical cryptographic zeroization.


Tuesday, October 31, 2023

atsec at the International Common Criteria Conference 2023

As in previous years, atsec is attending the International Common Criteria Conference, this time in Washington DC from October 31st to November 2nd 2023.

We invite you to come and talk to us at our booth (#10) or attend our colleagues' contributions to the conference:

  • CC:2022 – How it Compares and Differs from CC3.1R5 (L21b) - Trang Huynh
  • Challenges in the Adoption of CC:2022 for Protection Profiles, PP Modules and Functional Packages (A22a) - Alejandro Masino
  • Panel Discussion: Evolution of the Cryptographic Standards Ecosystem (M22b) - Yi Mao et al.
  • MDM Server Certification Without NIAP’s MDM PP (D23b) - Michael Vogel

Happy Halloween!

Friday, October 13, 2023

Cybersecurity Requirements for Medical Devices

On September 26, 2023, The Food and Drug Administration (FDA) released their finalized Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions guidance document. This document provides general principles for device cybersecurity relevant to device manufacturers. It seeks to emphasize the importance of safeguarding medical devices throughout a product's life cycle. The guidelines go beyond security risk management and cybersecurity testing: the guidance recommends that device manufacturers leverage security controls to achieve the outlined security objectives:

  • Authenticity, which includes integrity;
  • Authorization;
  • Availability;
  • Confidentiality; and
  • Secure and timely updatability and patchability.

It is expected that the premarket submissions would include information that describes how the above security objectives are addressed by and integrated into the device design.

The FDA guidelines clearly state that:
“While software development and cybersecurity are closely related disciplines, cybersecurity controls require testing beyond standard software verification and validation activities to demonstrate the effectiveness of the controls in a proper security context to therefore demonstrate that the device has a reasonable assurance of safety and effectiveness.”

It is indeed expected that the security testing documentation should be submitted in the premarket submission and include:

  • Verification of the implemented security requirements
  • Effectiveness and adequacy of each cybersecurity risk control
  • Vulnerability testing, and
  • Penetration testing

Medical device security plays a crucial role in safeguarding people using these products and solutions as well as the healthcare organization. Specification of security requirements and provision of guidance to device manufacturers is welcome and very useful. However, it is important to keep in mind that most medical devices are used in various geographic regions and must comply with multiple national regulations. The FDA regulates medical devices in the U.S., whereas the European Medical Device Regulation (MDR) (EU) 2017/745, which entered into force in 2021, is applicable to any manufacturer seeking to market their medical devices in Europe. Therefore, harmonization of standards and requirements is very much welcomed by device manufacturers. In addition, there are initiatives to establish global standards and global certification schemes to assess the cybersecurity of medical devices.

The Institute of Electrical and Electronics Engineers (IEEE) has established a Medical Cybersecurity Certification Program that has been developed by the IEEE 2621 Conformity Assessment Committee (CAC), composed of stakeholders such as manufacturers, clinicians, FDA, test laboratories, cybersecurity solutions providers, and industry associations from around the world. The IEEE certification program is already applied to diabetes medical devices, and it will be extended to other devices. It provides:

  • Insights and adherence based on global, consensus-based industry standards
  • Knowledge of FDA submission criteria
  • Adherence to best practices
  • Identifying ways to mitigate cyber attacks

atsec is proud to be the first IEEE-recognized testing laboratory, with a primary location in Stockholm, Sweden, and secondary locations in Munich, Germany, and Austin, Texas, U.S. The very first IEEE 2621 assessments of medical devices are ongoing and planned to be finalized in Q4 2023.

“Over the years, atsec has been closely monitoring the development of security requirements in the medical device industry. When approached to become a lab for IEEE 2621, we enthusiastically embraced the opportunity,” said Salvatore La Petra, President and Co-Founder of atsec information security.

If you are interested in an evaluation of your medical device or have any questions regarding our evaluation services, please do not hesitate to contact us. We look forward to working with you.

IEEE corporate advisory group (CAG) members visiting atsec AB in Stockholm, Sweden, earlier in 2023.

Wednesday, September 20, 2023

The 11th International Cryptographic Module Conference

The 11th International Cryptographic Module Conference (ICMC) started today. This year the conference is held from September 20th to 22nd, 2023 at the Shaw Centre in Ottawa, Canada.

The conference itself kicked off with Yi Mao, CEO of atsec US, giving the opening speech. It featured our latest animation, which has become somewhat of a tradition for us:


Marcos Portnoi, Laboratory Director for atsec information security, wrote the welcome letter in this year's program:

Dear Participants,
It is my utmost honor to welcome you to the International Cryptographic Module Conference 2023 in the beautiful city of Ottawa. This edition of ICMC marks the 10th anniversary of the conference, which was launched by atsec in September 2013 with the goal of bringing the community involved in cryptographic modules together.
This year, we have over 90 presentations in nine tracks, including Certification Programs, Post-Quantum Cryptography, Random Bit Generators and Entropy, Crypto Technology, Payment Card Industry, Embedded/IoT, and Open-Source Cryptography. In this city founded on the power of trading and eventually becoming a technology hub, we will trade ideas and knowledge among members from industry, laboratories, government, academia: all of us who love a good talk about cryptography. And of course, since any sort of product this year needs to have a checkbox for Artificial Intelligence (AI)--and I mean any sort of product, from search engines to progressive corrective lenses to mattress softness regulators, and extra credit is granted if one can fit "generative AI" in the description--we also want to talk about it and we are bringing a panel with experts in AI to discuss the (hopefully concrete) applications of AI for our work.
It is always interesting to witness the evolution of the conference over the years. Last year at ICMC, for instance, the Entropy Source Validation (ESV) program was newly launched, and Stateful Hash-Based Signatures (HBS), even though approved, had incipient implementations, still not testable via ACVP. Today, the ESV is at full throttle and one of the HBS algorithms, LMS, is now fully testable via ACVP and was recently awarded a CAVP algorithm certificate.
One topic that particularly excites me is Post-Quantum Cryptography (PQC), and this year's ICMC has many interesting presentations in the track dedicated to PQC. The understanding and use cases of HBS have evolved substantially and the industry is better equipped to propose optimizations in the form through which those alluring yet brittle algorithms may be reviewed and evaluated for compliance with the standards. We see these initiatives in the very active Cryptographic Module User Forum (CMUF) working groups.
Again, my warmest welcome and may we enjoy our time!
Marcos Portnoi, PhD.
Laboratory  Director
atsec information security corporation

We are proud to work with the National Cybersecurity Center of Excellence (NCCoE) on shortening the FIPS queue through automation. Visit our colleagues at booth 200/202 to learn more.

This year, atsec colleagues are part of panel discussions and will be presenting on a variety of topics:

  • EFP/EFT Testing at Security Level 3 and 4 and Remote Testing Advocacy
    Renaudt Nunez
  • Panel: Facing the Future: The Next ISO/IEC 19790
    Yi Mao, et al.
  • Panel: Testing and Assessment for Quantum Safe Cryptography
    Marcos Portnoi, et al.
  • Kyber and Dilithium Real Life Lessons
    Stephan Mueller
  • Equivalence Classes in AES
    David Cornwell
  • Filling the Gaps in FIPS Cryptography
    Joachim Vandermissen
  • Panel: Bringing Crypto Compliance and Validation Testing Objectives Together for FIPS 140-3
    Yi Mao, Stephan Mueller, et al.
  • Attestation and FIPS: Past, Present and Future
    Alessandro Fazio
  • Marcos Portnoi is also hosting the Post-Quantum Crypto track.

As always we are looking forward to interesting presentations, discussions and exchange between the vendors, labs, government entities and end users.

Monday, September 11, 2023

Artificial Intelligence in Evaluation, Validation, Testing and Certification

by Gerald Krummeck, atsec information security GmbH

Everybody seems to jump on the AI bandwagon these days, “enhancing” their products and services with “AI.” It sounds, however, a bit like the IoT hype from the last decade when your coffee machine desperately needed Internet access. This time, though, there’s also some Armageddon undertone, claiming that AI would make our jobs obsolete and completely transform all sorts of businesses, including ours.
So, it comes as no surprise that atsec gets asked by customers, government agencies, and almost everybody communicating with us how we position ourselves on the use of AI in our work and how we deal with AI being used in the IT security environment of our customers and in all sorts of other areas as well.
First answer: Unfortunately, we don’t yet use it for authoring blog entries, so musing about the benefits and drawbacks of AI in our work still can ruin your weekend. 🙁
Second answer: For an excellent overview of how we deal with AI and what we expect from this technology, there is a brilliant interview with Rasma Araby, Managing Director of atsec AB Sweden:

Of course, AI is discussed within atsec frequently, as we are a tech company by nature. We analyze IT technologies for impacts on IT security and are eager to deploy new technologies for ourselves or introduce them to our customers if we believe they will be beneficial.

atsec’s AI policy foundation

Recently, we defined some basic policies on the use of AI within atsec. Those policies have two cornerstones:

First and foremost, we are committed to protecting all sensitive information we deal with, especially any information entrusted to us by our customers. We will not share such information and data with third parties and thus will not supply any such information to publicly available AI tools.
There are several reasons for this: Obviously, we would violate our NDAs with our customers if we send their information to a public server. Also, there is currently no robust way to establish trust in these tools, and nobody could tell you how such information would be dealt with. So, we must assume that we would push that information directly into the public domain. Even if we tried to “sanitize” some of the information, I would be skeptical that an AI engine would not be able to determine which customer and product our chat was about. The only way to find out would be to risk disaster, and we’re not in for that. Furthermore, sanitizing the information would probably require more effort than writing up the information ourselves.

The second cornerstone is not different from our use of any other technology: any technology is only a tool supporting  our work. It won’t take any responsibility for the results.
We are using many tools to help our work, for example, to help us author our evaluation reports and to keep track of our work results, evidence database, etc. Such tools could easily be marketed as AI, but as the saying goes: “A fool with a tool is still a fool.” Our evaluators take responsibility for their work products, and our quality assurance will not accept errors being blamed on a tool. Tools are always treated with a good dose of mistrust. We always have humans verify that our reports are correct and assume responsibility for their contents. This will not be different with an AI tool. At atsec, our evaluators and testers will always be in ultimate control of our work.
With this framework, we are in a good position to embrace AI tools where they make sense and do not violate our policies. We are aware that we cannot completely avoid AI anyway, for example, when it “creeps” into standard software tools like word processors. AI-based tools helping our techies re-phrase their texts for readability and better understanding might sometimes be an improvement cherished by our customers. 😀
We expect AI tools to help, for example, with code reviews and with defining meaningful penetration tests in the foreseeable future. However, we have not yet encountered such tools that could be run in controlled, isolated environments to fulfill our AI policy requirements.

Correctness of AI, trust in AI engines

As already stated, we do not treat current AI engines as trusted tools we can blindly rely upon. This is based on the fact that the “intelligence” displayed in the communication by these engines comes mostly from their vast input, which is absorbed into a massive network with billions, even trillions of nodes. Most of the large language models used in the popular AI engines are fed by the Common Crawl database of Internet contents (refined into Google’s Colossal Clean Crawled Corpus), which increases by about 20 terabytes per month. This implies that input for the training of the engines cannot be fully curated (i.e., fact-checked) by humans, and it leaves lots of loopholes to inject disinformation into the models. I  guess that every troll farm on the planet is busy doing exactly that.
The developers of these AI engines try to fight this, but filtering out documents containing “dirty naughty obscene and otherwise bad words” won’t do the trick. If your favorite AI engine doesn’t have quotes from Leslie Nielsen’s “The Naked Gun” handy, that’s probably why. Checking the AI’s “Ground Truths” against Wikipedia has its shortcomings, too.
Therefore, the AI engine companies use different benchmarks to test the AI engine output, with many of those outputs checked by humans. However, the work conditions of those “clickworkers” are often at a sweatshop level, which does not help to establish our trust in the accuracy and truthfulness of the results.
Therefore, if atsec were to use such engines in its core business of assessing IT products and technology, we would not be able to put a reasonable amount of trust in the output obtained from these engines, and it would require us to fact-check each statement made by the AI. This might easily result in more effort than writing the reports ourselves and trusting our own judgment.
Note that the accuracy of AI answers, which lies between 60 and 80 percent depending on the subject tested in the benchmarks, together with the problems of poisoning the input, establishing the “truthfulness” of the AI, and the ethical and philosophical questions about which information to provide, are topics in the EU and US efforts to regulate and possibly certify AI engines. Unfortunately, while the problems are well known, their solutions are mostly not. AI researchers across the globe are busily working on those subjects, but my guess is that those issues may be intrinsic to today's large language models and cannot be solved in the near future.

Offensive AI

A common Armageddon scenario pushed by AI skeptics is that big AI engines like the ones from OpenAI, Microsoft, Google, Meta, and others will help the evil guys find vulnerabilities and mount attacks against IT infrastructures much more easily than ever. After almost 40 years in IT security, that doesn't scare me anymore. IT security has been an arms race between the good and bad guys from the very beginning, with the bad guys having an advantage as they only need to find one hole in a product, while the good guys have the task of plugging all holes.

As history teaches us, the tools used by the bad guys can and will be used by the good guys too. Tools searching for flaws have been used by hackers and developers alike, although developers were at times more reluctant to adopt them. AI will be no different, and it may help developers write more robust code, for example, by taking on the tedious tasks of thorough input and error checking, which are still among the most prominent causes of software flaws. Will atsec deploy those tools as well for its evaluations and testing? While we will certainly familiarize ourselves with those tools and might add them to our arsenal, it will be much more beneficial for developers to integrate them into their development and test processes, subjecting all of their code to that scrutiny as soon as it is written or modified, rather than having a lab like atsec deploy those tools when the product may already be in use by customers.
We have always advocated, in standards bodies and other organizations creating security criteria, that the search for flaws should be conducted within the developer’s processes, and that the lab should verify that these searches for flaws and vulnerabilities are performed effectively in the development environment. This is also true for AI tools.


The hype about AI tools that started with the public availability of ChatGPT less than a year ago has already reached its “Peak of Inflated Expectations” (according to Gartner’s “hype cycle” model) and is on its way to the “Trough of Disillusionment.” The yet-to-come “Slope of Enlightenment” will lead to the “Plateau of Productivity,” when we finally have robust AI tools at our disposal, hopefully combined with a certification that provides sufficient trust for their efficient deployment. In any case, atsec will monitor the development closely and offer to participate in the standardization and certification efforts. AI will become an integral part of our lives, and atsec is committed to helping make this experience as secure as possible.

Friday, August 18, 2023

Entropy Source Validation (ESV) Certificate Issued for the Intel DRNG

by Marcos Portnoi

Recently, the CMVP granted ESV certificate #E57 to the Intel DRNG entropy source. The testing and submission were done by atsec, and this marks the first ESV certificate granted to the Intel DRNG.

The Intel DRNG (Digital Random Number Generator) is a hardware Random Bit Generator (RBG) integrated into a multitude of Intel processors, and offers both an entropy source and an SP800-90A DRBG to users of the processors. The DRNG is commonly accessed through the well-known RDRAND and RDSEED processor instructions. There is massive use of those instructions, such as in the Linux kernel, and the ESV certificate is a key step in facilitating the use of the entropy source in FIPS 140-3 validated modules.

Intel Corporation commented: "Today's US Government Cyber Security standards are highly complex. With the increasingly critical urgency for better security for cryptographic products comes the need for greater technical expertise along with the ability to navigate government standards. Despite extremely complex designs, atsec collaborated with Intel Corporation to obtain Intel's first Entropy Source Validation certificate which can be viewed on the NIST website."

The design of the Intel DRNG includes compliance with SP800-90A, SP800-90B and the upcoming new version of SP800-90C. 

The ESV certificate covers the components compliant with SP800-90B. The ESV program rolled out in April 2022 and facilitates validation in two key ways: it confers a certificate exclusively for the entropy source, allowing validated entropy sources to be reused by multiple module validations; and it streamlines the validation process by providing an automated process and protocol, similar to the Automated Cryptographic Validation Protocol (ACVP). The CMVP has been reviewing ESV submissions in a relatively quick cycle of about 6 weeks, including submission, review, comments, and certification. The talented technical personnel of the CMVP are engaged in the review process, producing interesting comments, and in the dynamic evolution of the ESV program.

The certificate is available at

Thursday, July 20, 2023

First Post-Quantum Algorithm Certificate issued by CAVP

By Joachim Vandersmissen


On July 14, atsec obtained the first validation certificate for a post-quantum cryptographic algorithm: A4204. We used the Automated Cryptographic Validation Protocol (ACVP) to verify the correctness of the LMS (Leighton-Micali Signature) key pair generation, signature generation, and signature verification implementations in the QASM Hardware Security Module, developed by Crypto4A Technologies. This milestone represents an important step in the ongoing transition from traditional public-key cryptography to quantum-resistant algorithms.

The LMS scheme is a digital signature scheme based on secure hash functions and Merkle trees. It was first published in 1995 by Leighton and Micali and builds on the famous one-time signature scheme proposed by Lamport in 1979. Among quantum-resistant public-key algorithms, these schemes provide a distinct advantage: the post-quantum security of the scheme relies exclusively on the security of the underlying hash function. No other number-theoretic hardness assumptions are required. This is very attractive, as all modern hash functions with an output size of more than 256 bits are believed to be quantum safe. However, there are also drawbacks. In particular, LMS is a stateful signature scheme. In other words, it maintains an internal state that must be protected in hardware. Still, the benefits of hash-based cryptography significantly outweigh the costs, which led to LMS (and XMSS) being standardized by NIST in 2020 (SP 800-208).
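To give a feel for how hash-based signatures work, the following is a minimal toy sketch of a Lamport one-time signature in Python, using SHA-256. This is not LMS itself: real LMS (RFC 8554 / SP 800-208) uses Winternitz-style one-time signatures and builds a Merkle tree over many one-time keys, with the state tracking which leaves have already been used; none of that bookkeeping is shown here.

```python
import hashlib
import secrets

# Toy Lamport one-time signature (illustrative only, not LMS).
def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(nbits: int = 256):
    # Secret key: two random 32-byte preimages per message-digest bit
    # (one revealed if the bit is 0, the other if it is 1).
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
          for _ in range(nbits)]
    # Public key: the hashes of those preimages.
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def _bits(digest: bytes, n: int):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def sign(sk, msg: bytes):
    # Reveal one preimage per digest bit -- which is exactly why the
    # key is strictly one-time and the scheme must track its state.
    return [sk[i][b] for i, b in enumerate(_bits(H(msg), len(sk)))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][b]
               for i, b in enumerate(_bits(H(msg), len(pk))))
```

Hashing each revealed preimage and comparing it against the public key is all verification takes; no number-theoretic assumptions are involved, which is the property the paragraph above highlights.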

In May 2022, SP 800-208 was added to SP 800-140C, which specifies the algorithms approved for usage in FIPS 140-3 cryptographic modules. This also paved the way for LMS and XMSS to be tested by the Cryptographic Algorithm Validation Program (CAVP). The CAVP verifies the correct implementation of cryptographic algorithms and their components. Initial support for the LMS scheme was added in March 2023 and made available for production usage in April. We would like to thank the CAVP for their diligent work and excellent support to make this achievement possible.

Quantum computers represent a significant risk to classical public-key cryptography, a risk that cannot be ignored. Last year, the Commercial National Security Algorithm (CNSA) Suite 2.0 was published, which envisions a complete transition to post-quantum cryptography by 2033. We applaud Crypto4A Technologies for its proactive approach of offering this quantum-resistant signature scheme to its customers well ahead of the deadline.