A Fresh Take on DO-178C Software Reviews

by Olivier Appéré and Josue Bello – Apr 16, 2024 – AdaCore

DO-178C/ED-12C, officially titled “Software Considerations in Airborne Systems and Equipment Certification,” is a certification standard that governs the acceptability of software in commercial aircraft. As stated in Section 1.1,

The purpose of this document is to provide guidance for the production of software for airborne systems and equipment that performs its intended function with a level of confidence in safety that complies with airworthiness requirements.

Guidance has a technical meaning in the standard and consists of:

Objectives associated with software life cycle processes:

  • Planning process
  • Development processes (requirements, design, coding, integration)
  • Integral processes (verification, configuration management, QA, certification liaison)

Recommended activities that satisfy these objectives (DO-178C provides guidelines and best practices to comply with the software life cycle objectives; however, it also encourages users to adapt these methods to their specific use cases)

Software life cycle data: evidence (“artifacts”) verifying that activities have been accomplished successfully and that objectives have been met

A software level establishes which guidance applies, and how rigorously. It is based on the effect of software anomalies on airworthiness/pilot workload and is determined during safety assessment (a system life cycle process). The software levels addressed in DO-178C/ED-12C range from D (anomalous behavior can lead to a minor failure condition) to A (anomalous behavior can lead to a catastrophic failure condition).

A major focus of DO-178C/ED-12C is on the verification process, where verification means the technical assessment of the outputs of a software life cycle process. Verification consists of reviews, analyses, and tests, with the objective of detecting and reporting errors that may have been introduced during software development processes.

As stated in Section 6.3 of DO-178C/ED-12C:

“… reviews provide a qualitative assessment of correctness. A review may consist of an inspection of an output of a process guided by a checklist or similar aid.”

A review is an evaluation of software life cycle data such as software requirements, software architecture, source code, executable object code, and tests produced by software development processes compliant with DO-178C/ED-12C objectives:

  • §5.0 Software Development Processes
  • §6.4 Software Testing

Reviews are an essential part of the evidence required for certification. More specifically, reviews demonstrate compliance with the following sections of DO-178C/ED-12C:

  • §6.3 “Software Reviews and Analyses” and
  • §6.4.5 “Reviews and Analyses of Test Cases, Procedures, and Results”.

The DO-178C/ED-12C sections below define further “Review” activities but are separate from the software verification process. Consequently, they are outside the scope of this blog post.

  • §4.6 “Review of the Software Planning Process”
  • §7.2.5 “Change Review”
  • §8.3 “Software Conformity Review”

This is the first in a series of blog posts in which we explore how to define and leverage a review framework for efficient review in a DO-178C/ED-12C context.

The prerequisite steps required before implementing a review framework with any tool are the following:

  1. First, thoroughly capture all the review-related constraints specified or implied by DO-178C/ED-12C, then
  2. Define a minimum acceptable review framework (checklists, authorization, reviewer roles, etc.) compliant with DO-178C/ED-12C, and finally
  3. Set up a configuration management repository such as GitLab to track development activities and extract life cycle data (e.g., setting up projects, repositories, Continuous Integration automation, etc.). GitLab CI (Continuous Integration) is a DevOps tool integrated into GitLab that automates the build, test, and deployment processes of software development projects.
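To make step 3 concrete, here is a minimal `.gitlab-ci.yml` sketch. It is illustrative only: the stage names, job names, scripts, and project file are hypothetical placeholders, not a prescribed or qualified setup.

```yaml
# Hypothetical pipeline sketch: all names and scripts below are placeholders.
stages:
  - build
  - test
  - review_evidence

build:
  stage: build
  script:
    - gprbuild -P my_project.gpr      # build the project (example command)

unit_tests:
  stage: test
  script:
    - ./run_tests.sh                  # placeholder test driver

collect_review_records:
  stage: review_evidence
  script:
    - ./export_review_records.sh      # placeholder: archive review records for audit
  artifacts:
    paths:
      - review_records/
```

The point of the last job is simply that review evidence can be captured and versioned by the same pipeline that builds and tests the software.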

In this post, we’ll identify the constraints (review records, evidence of independence, identification of the configuration items reviewed, etc.) associated with meeting the review objectives of DO-178C/ED-12C most efficiently. Subsequent posts will address the other two items.

WHY – Review

Why do reviews? Sure, they are required by DO-178C/ED-12C, but their primary purpose is to identify and mitigate potential software defects, errors, and inconsistencies early in the development process. This is especially true when the activity cannot practically be performed by an automated (qualified) tool.

The software review activity in DO-178C/ED-12C encompasses several key aspects:

  1. Use of checklists or similar aids to review technical and other life cycle data, as described in Section 6.3.
  2. Creation of Software Verification Results, which include review records and configuration management information for the software reviewed, as per Section 11.14.
  3. A mechanism for efficient storage and retrieval of reviewed life cycle data, as indicated in Section 11.0(b).
  4. Establishing independent verification, as outlined in Sections 4.6 and 6.2. The review should be performed by individuals other than the developers, and evidence should trace outputs to their activities and inputs.
  5. Reviews of high-level requirements, low-level requirements, architecture, source code, outputs of the integration process, and software testing activities, which are necessary to ensure compliance with objectives outlined in Sections 6.3.1 to 6.4.5.
  6. Tracking of discrepancies found during the review process via problem reporting, as per Section 11.14.
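Several of these aspects (review records, configuration identification, independence, discrepancy tracking) can be pictured as fields of a single review record. The sketch below is a minimal Python illustration; every field name is ours, not a term mandated by the standard.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewRecord:
    """Illustrative review record; field names are hypothetical, not from DO-178C."""
    item_id: str          # configuration item reviewed (e.g., a requirements document)
    item_version: str     # exact version reviewed (e.g., a Git commit SHA)
    checklist_id: str     # checklist used to guide the review
    reviewer: str         # must differ from the author when independence is required
    author: str
    findings: list = field(default_factory=list)   # discrepancies found during review

    @property
    def independent(self) -> bool:
        # Minimal independence check: the reviewer is not the author.
        return self.reviewer != self.author

record = ReviewRecord(
    item_id="SRS-001",
    item_version="a1b2c3d",   # hypothetical commit SHA
    checklist_id="CHK-HLR-01",
    reviewer="alice",
    author="bob",
)
```

Storing such records in a queryable repository addresses point 3 (efficient storage and retrieval), while the `item_version` field ties each record to a precise configuration item, as discussed under Configuration Identification below.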

To undertake a review in DO-178C/ED-12C, the first task is to focus on the review and analysis objectives, which are specific to eight categories of software life cycle data.

The figure below gives an overview of the software life cycle data submitted for review in the software verification process:

Figure: Review of Software Life Cycle Data

A total of 31 review and analysis objectives are explicit in DO-178C/ED-12C Section 6.3, “Software Reviews and Analyses”, and Section 6.4.5, “Reviews and Analyses of Test Cases, Procedures, and Results”.

And so the WHY of the review is to confirm that these objectives are satisfied.

You must also focus on the WHAT by choosing the most appropriate verification method.

WHAT – Verification Methods

Review vs Analysis

To apply the methodology presented in the previous sections, reviews or analyses must confirm that all software life cycle data, namely high-level requirements, low-level requirements, software architecture, source code, executable object code, test cases, test procedures, and test results combined, satisfy their objectives.

Analyses provide repeatable evidence of correctness, and reviews provide a qualitative assessment of correctness (DO-178C/ED-12C section 6.3).

One solution to reduce the effort of review is to consider augmenting or replacing the review with a formal analysis following DO-333/ED-216 “Formal Methods Supplement to DO-178C/ED-12C and DO-278A”.

“Since formal methods can be used to discover errors early in the life cycle, they actually reduce the overall work required for the project.” (DO-333/ED-216 Section FM.B.1.4.4 Reduce Effort)

For instance, the objective “Each high-level requirement is accurate, unambiguous, and sufficiently detailed” could be handled by formal analysis, provided that the requirements are expressed in a formal notation.

“If the high-level requirements are expressed in a formal notation, then they will be precise and unambiguous. The formal model of high-level requirements can be checked for consistency (the absence of conflicts) and may enable accuracy checks to be carried out using formal analysis.” (DO-333/ED-216 Section 6.3.1 b).

In practice, in a classical software development setting where:

  • High-level and low-level requirements are written in natural language,
  • Software architecture is designed with no formal or semi-formal notation,
  • A code checker tool is used to verify coding rules (6.3.4-c Conformance to standard),
  • An advanced static code analyzer tool is used to ensure the correctness and consistency of the source code (6.3.4-e Accuracy and consistency),

no fewer than 29 of the 31 DO-178C/ED-12C objectives are verified by review.

With a more formal notation approach, where:

  • High-level requirements are written in constrained natural language (e.g., EARS),
  • Low-level requirements are written in a formal notation (e.g., SPARK, an Ada-based language that adds contracts and extended annotations to a subset of regular Ada code): “It is more effective to write requirements formally than to write an informal one.” (DO-333/ED-216 Section FM.B.1.4.1 Improve Requirements),
  • The software architecture is designed with a semi-formal notation like UML,
  • A design verifier tool is used to verify the software architecture (e.g., Simulink Design Verifier, GNATprove (to verify the correctness of data flow), etc.),
  • An automated documentation generation tool is used to generate traceability matrices,

we could dramatically reduce the number of review objectives to be verified, or at least reduce the review effort when using semi-formal notations. While it may still follow some conventions and rules, a semi-formal notation allows more flexibility and interpretation. It is often used where absolute precision is not necessary or where a balance between formalism and readability is needed. Here’s an example with the EARS syntax (EARS, Easy Approach to Requirements Syntax, is a mechanism to gently constrain textual requirements; the EARS patterns provide structured guidance that enables authors to write high-quality textual requirements):

When the input A is below 20 [amperes] for more than 10 [milliseconds], the function B shall set the output C to ACTIVE
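As a toy illustration of how such constrained syntax lends itself to automated checking (our sketch, not a qualified tool), the event-driven EARS pattern above can be matched mechanically with a regular expression:

```python
import re

# Event-driven EARS pattern: "When <trigger>, the <system> shall <response>".
# The regular expression is a simplified illustration, not a full EARS grammar.
EARS_EVENT_DRIVEN = re.compile(r"^When\s+.+?,\s+the\s+.+?\s+shall\s+.+$")

def matches_event_driven(requirement: str) -> bool:
    """Return True if the requirement follows the event-driven EARS pattern."""
    return bool(EARS_EVENT_DRIVEN.match(requirement))

req = ("When the input A is below 20 [amperes] for more than 10 [milliseconds], "
       "the function B shall set the output C to ACTIVE")
```

A checker like this cannot assess whether a requirement is correct, but it can flag requirements that do not follow the agreed pattern before a human reviewer ever sees them.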

To summarize, you can streamline the review process by automating evaluation as much as possible (i.e., replacing review with analysis) and/or by using formal or semi-formal notation.

The Review Methods

The DO-178C/ED-12C standard does not prescribe specific review methods to be followed.

It only mentions that “a review may consist of an inspection of an output of a process guided by a checklist or similar aid.” (6.3 Software Reviews and Analyses)

Instead, it specifies objectives and criteria such as correctness, accuracy, completeness, and verifiability, that must be satisfied.

As a result, organizations can choose their review methods as long as they meet the criteria specified in the document.

Two review methods are typically used: the walk-through and the inspection. They share the common goal of identifying issues, but the latter requires more formality.

The walk-through method is a form of peer review where the author of the artifact leads a group of peers through the artifact. The focus is on understanding the logic, structure, and functionality. It is more informal and aims to familiarize the team with the content.

The inspection method, popularized by Michael Fagan in the 1970s, is a more formal process where a designated moderator leads a team of reviewers through the artifact. It involves a thorough examination of the work product against predefined criteria or standards.


That being said, you have to pay attention to the points described below. This will help you avoid common traps that reduce efficiency and create tedious, high-effort, low-value workloads.

The first three are constraints: configuration identification, independence, and availability.

The last three are to be considered as opportunities to reduce effort or ease management.

Configuration Identification

DO-178C/ED-12C requires the identification of configuration items reviewed.

This is crucial to:

  • help establish traceability throughout the development process (traceability must include all levels of requirements, testing, reviews, defects, and the code itself),
  • ensure that the correct version of the configuration item is being reviewed,
  • ensure that developers can replicate the conditions under which an issue occurred and work towards a resolution,
  • ensure that documentation accurately reflects the status of the software at a given point in time, which makes it easier to demonstrate compliance during certification audits.


Independence

DO-178C/ED-12C places significant emphasis on independence in the review activity for software at levels A and B. Independence is a fundamental principle aimed at ensuring objectivity, impartiality, and thorough scrutiny of software development and verification activities.

Independence means that the review is performed by one or more persons other than the developer of the software life cycle data being reviewed. A tool may be used to achieve equivalence to the human verification activity. (DO-178C/ED-12C Section 6.2.e)

For software level A, 16 of the 31 review objectives must be satisfied with independence (7 of 31 for level B, and none for levels C and D).

While independence in reviews is essential for achieving review objectives, it has consequences for the effort required:

  • Additional resources in terms of personnel
  • Scheduling challenges in coordinating independent reviews with people who have both availability and expertise
  • Additional overhead in maintaining documentation and traceability records for independent reviews

Availability of Review Records

Review records are evidence of compliance and may be requested by the certification authority. There is, however, no obligation to export them from the database into a deliverable document, as long as the data and the database itself remain available for audit if needed.

The Form of Documentation

The form of documentation can significantly influence the efficiency of reviews and analyses in compliance with DO-178C/ED-12C.

Well-structured documentation facilitates efficient reviews and ensures that reviewers can adequately assess compliance with requirements.

“If documentation is produced with care, it will be useful for a long time. Conversely, if it is going to be extensively used, it is worth doing right.” (A Rational Design Process: How and Why to Fake It, Parnas and Clements, 1986)

DO-178C/ED-12C focuses mainly on content, not on layout and appearance. That’s why the documentation can take various forms and structures (e.g., a database).

To optimize the review effort, you should maintain an adequate level of detail by applying the “less-is-more” rule: limit details to those that are essential, and make sure that reviewers quickly see the big picture.

Last but not least, to make it possible to automate a software requirements review, you have to define rules for writing requirements so that they are expressed in a format that a Natural Language Processing (NLP) tool can process.
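Even without a full NLP tool, part of such a writing rule can be automated. The sketch below flags weak or ambiguous wording that complicates review; the word list is our illustrative choice, not a list taken from the standard.

```python
import re

# Illustrative list of ambiguous terms often banned by requirements-writing rules.
AMBIGUOUS_TERMS = {"appropriate", "adequate", "as required", "user-friendly",
                   "fast", "easy", "etc", "and/or"}

def flag_ambiguous(requirement: str) -> list[str]:
    """Return the ambiguous terms found in a requirement (case-insensitive)."""
    lowered = requirement.lower()
    found = []
    for term in sorted(AMBIGUOUS_TERMS):
        # Word boundaries avoid false positives inside longer words.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append(term)
    return found
```

Running such a check in the CI pipeline turns a recurring review finding into an automatic gate, leaving human reviewers free to judge meaning rather than wording.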

The Software Component Definition

The particular DO-178C/ED-12C review and analysis objectives related to software architecture and its projection into source code are as follows:

  • §6.3.3 b) This review objective is to verify the relationship, via data flow and control flow, between the components of the software architecture.
  • §6.3.4 b) This objective is to ensure that the Source Code matches the data flow and control flow defined in the software architecture.

The related workload could vary greatly depending on the definition of the component and whether or not a tool is used to define/verify data flow and control flow.

In DO-178C/ED-12C, components can vary in size and complexity, ranging from individual software modules or functions to larger software units such as subsystems or entire applications. Here are some properties based on DO-178C/ED-12C and DO-248C/ED-94C:

  • A software component is assigned a software level (DO-178C/ED-12C section 2.3). This means that a software component may be visible from a system perspective.
  • A software product (software intended for use in airborne applications) can be composed of several components. (DO-178C/ED-12C section 3.2)
  • Interfaces between software components, in the form of data flow and control flow, should be defined to be consistent between the components. (DO-178C/ED-12C section 5.2.2 d.)
  • If the interface is related to a component of a lower software level, it should also be confirmed that the higher software level component has appropriate protection mechanisms in place to protect itself from potential erroneous inputs from the lower software level component. (DO-178C/ED-12C section 6.3.3 b.)
  • A software component can be a subprogram (DO-248C/ED-94C FAQ #67)

Organizations are therefore free to choose the scope of the software component that best suits their software development strategies.

Resource and Time Constraints

Although project management considerations are out of the scope of DO-178C/ED-12C, limited resources, including time and people, can constrain the scope and depth of reviews. Organizations must allocate resources effectively to prioritize critical review activities and ensure that reviews are conducted in a timely manner without compromising quality. These constraints explain why review comments often carry properties such as “Status”, “Category”, or “Severity”.

There is another reason why categorizing review comments is useful: the management of open problem reports as per AC 20-189, “Management of Open Problem Reports (OPRs)”, an advisory circular (AC) developed in coordination with EASA and FAA.

What’s Next?

In this post, we highlighted the constraints arising from the DO-178C/ED-12C review and analysis objectives, and established a baseline for specifying a minimal viable review framework compliant with DO-178C/ED-12C.

In the next post, we will show how to define a minimal viable review framework that both complies with DO-178C/ED-12C and is as efficient as possible.

Posted in #Certification    #DO-178C    #ED-12C   

About Olivier Appéré


Olivier is a Certification Engineer and joined AdaCore in 2022.

He has over two decades of dedicated experience in the field of aviation software certification and has a deep expertise in DO-178 software safety certification standards.

At AdaCore, he works on GNAT run-time libraries suitable for certifiable applications.