Source code to object code traceability is a requirement in the most critical aerospace and space applications. Experience shows that code demonstrated to perform correctly is considerably less likely to fail in the field, and the verification and validation practices championed by functional safety, security, and coding standards (including IEC 61508, ISO 26262, IEC 62304, MISRA, and CWE) reflect that. But the focus of those standards is on the high-level source code, with the implied assumption that the compiler will create executable object code that faithfully reproduces what the developers intended.
Object code is difficult to read, but the generated assembly code is easily legible. Because there must be a one-to-one relationship between the object code and assembly, achieving source to assembler code traceability also ensures object code traceability. Doing so addresses any doubt concerning the interpretation of developer intent.
The C and C++ programming languages are both compiled languages, meaning that programs are implemented by compilers that translate source code into machine-readable code. This process involves four steps: preprocessing, compilation into assembly code, assembly into object code, and linking of the object files into an executable.
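As a minimal illustration (a hypothetical hello.c, not taken from any qualification pack, with GCC-style intermediate file names shown purely for orientation), the comments below note the artefact that each of the four steps typically produces:

```c
/* hello.c - one translation unit passed through the four build steps:    */
/*   1. preprocessing expands #include directives and macros -> hello.i   */
/*   2. compilation translates the C code into assembly      -> hello.s   */
/*   3. assembly turns that assembly into object code        -> hello.o   */
/*   4. linking combines object files and libraries into the executable   */
#include <stdio.h>

int main(void)
{
    printf("Hello, object code\n");
    return 0;
}
```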
It is inevitable that the control and data flow of object code will not be an exact mirror of the source code from which it was derived, and so proving that all source code paths can be exercised reliably does not prove the same thing of the object code.
Worse, and despite their undeniable value, source code unit tests can also be misleading because the object code derived from the compilation of a function wrapped in a unit test harness can differ markedly from that generated in the context of a complete system.
The aim of demonstrating source code to object code traceability is to address the doubt and uncertainty associated with object code that does not correlate exactly with the source code from which it was derived.
When source code is compiled, the flowgraph for the resulting assembler code is likely to be quite different to that for the source. The rules followed by C or C++ compilers permit them to modify the code in any way they like, provided the binary behaves as if it were the same.
From the C++ standard §1.9/1:
“This provision is sometimes called the “as-if” rule, because an implementation is free to disregard any requirement of this document as long as the result is as if the requirement had been obeyed, as far as can be determined from the observable behavior of the program. For instance, an actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no side effects affecting the observable behavior of the program are produced.”
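A trivial, hypothetical illustration of that provision is sketched below: the multiplication has no side effects and its result is never used, so a conforming compiler is free to omit it from the object code altogether.

```c
int increment(int x)
{
    int unused = x * 3;   /* value never used and no observable side      */
                          /* effects: the as-if rule allows the compiler  */
                          /* to discard this calculation from the         */
                          /* generated object code                        */
    (void)unused;         /* suppresses an "unused variable" warning      */
    return x + 1;
}
```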
It is useful to understand how and why the control flow structure of compiler-generated object code differs from that of the application source code from which it was derived. Some very simple C source code from an LDRA tool qualification pack serves to illustrate the point.
This C code can be demonstrated to achieve 100% source code coverage by means of a single call thus:
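The original listing from the qualification pack is not reproduced here; the sketch below is a hypothetical stand-in that behaves in the same way.

```c
/* Hypothetical stand-in for the qualification pack example: a single     */
/* source statement, so any one call achieves 100% statement coverage.    */
unsigned int both_positive(unsigned int a, unsigned int b)
{
    return (a > 0u) && (b > 0u);
}

/* The single call that demonstrates full source-level coverage.          */
void run_source_coverage_test(void)
{
    (void)both_positive(1u, 1u);
}
```

At source level nothing remains to be executed, but the short-circuit evaluation of `&&` compiles into conditional branches: the object code path taken when `a` is zero, so that `b` is never tested, is not exercised by this call.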
When the code is compiled using any widely used commercially available compiler, the flowgraph is likely to look different to that of the source code. In this example, using a TI compiler with optimization disabled, the result is as shown below. The red elements of the flow graph below represent code that has not been exercised by the single function call:
By leveraging the one-to-one relationship between object code and assembler code, this mechanism exposes which parts of the object code are unexercised, prompting the tester to devise additional tests and achieve further assembler (and hence object code) coverage.
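Continuing the hypothetical stand-in above, the unexercised assembler branches point directly to the additional test cases required:

```c
/* Additional test cases devised after inspecting the assembler           */
/* flowgraph produced by the single call above.                           */
void exercise_remaining_paths(void)
{
    (void)both_positive(0u, 1u);   /* a == 0u: takes the short-circuit    */
                                   /* branch that skips the test of b     */
    (void)both_positive(1u, 0u);   /* a > 0u, b == 0u: the false outcome  */
                                   /* of the second comparison            */
}
```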
DO-178C § 6.4.4.2 is entitled “Structural Coverage Analysis”. The standard reasons that structural coverage analysis is required because it is the best way to ensure that requirements-based tests have completely exercised the application:
“The objective of this analysis is to determine which code structure was not exercised by the requirements-based test procedures. The requirements-based test cases may not have completely exercised the code structure, so structural coverage analysis is performed and additional verification produced to provide structural coverage.”
Paragraph 6.4.4.2b describes the additional requirements when the application is Level A:
“The structural coverage analysis may be performed on the Source Code, unless the software level is A and the compiler generates object code that is not directly traceable to Source Code statements. Then, additional verification should be performed on the object code to establish the correctness of such generated code sequences. A compiler-generated array-bound check in the object code is an example of object code that is not directly traceable to the Source Code.”
In other words, if the application is Level A and the compiler is generating object code that is not traceable to the source code, then additional verification of the object code must be performed.
The Certification Authorities Software Team (CAST) was a group of civil aviation regulatory officials from Asia, Europe, and North and South America, including representatives of the FAA and EASA. CAST position papers addressed issues relating to the formally established standards in that sector, covering a range of topics and, in particular, clarifying aspects of the standards that had proven ambiguous.
That was true of source code to object code traceability, and the aim of the (now defunct) 2002 FAA paper CAST-12, “Guidelines for Approving Source Code to Object Code Traceability”, was to clarify it. CAST-12 made it clear that although the use of test tools for this purpose is an option, it is not obligatory. The guidelines outlined not only how a manual review might be achieved, but also how a sample application might be devised using typical constructs to optimize the time spent on such a manual analysis.
That said, the use of automated test tools throughout the development lifecycle of critical applications aids thoroughness, repeatability, and efficiency. That is equally true of source code to object code traceability.
ECSS-E-ST-40C §5.8.3.5e states that for software of criticality category A, “In case the traceability between source code and object code cannot be verified… the supplier shall perform additional code coverage analysis on object code level”.
There is no obligation in DO-178C to automate source code to object code traceability, or indeed to achieve object code coverage. In fact, neither DO-178C nor the functional safety standards governing almost all other sectors oblige developers to automate anything, a point that CAST-12 made very clear.
However, throughout the DO-178C and ECSS development processes, automation helps by making the validation and verification processes more thorough, more repeatable, and more efficient. Source code to object code traceability is no exception.
Object code verification (OCV) provides a comprehensive approach to the demonstration of source code to object code traceability. The problem with taking a less thorough approach lies with the specifications that compiler developers are obliged to fulfil, coupled with the flexibility afforded to them by the “as-if” rule.
Many of the practices adopted by developers working in safety-critical sectors are alien to compiler developers. Defensive coding and the handling of external data, for example, are not part of the world that the compiler recognizes. Neither C nor C++ makes any allowance for memory corruption, so defensive code that can only ever execute if memory has been corrupted is, as far as the compiler is concerned, unreachable. Defensive code must be syntactically and semantically reachable if it is not to be “optimized away”. A manual review of the code may well miss such issues; OCV will not.
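A minimal sketch of the kind of construct involved is shown below (hypothetical; `enter_safe_state()` stands in for whatever fault handler the application provides):

```c
#include <stdint.h>

extern void enter_safe_state(void);   /* assumed application fault handler */

/* Guard word intended to detect memory corruption. Nothing in the         */
/* program ever writes to it after initialisation.                         */
static uint32_t guard_word = 0xA5A5A5A5u;

void check_integrity(void)
{
    /* In the C abstract machine guard_word can never hold another value,  */
    /* so an optimizing compiler is entitled to treat this test as always  */
    /* false and omit both the branch and the call from the object code.   */
    if (guard_word != 0xA5A5A5A5u)
    {
        enter_safe_state();
    }
}
```

Declaring `guard_word` as `volatile` obliges the compiler to re-read it and retain the check; either way, source code to object code traceability makes it immediately apparent whether the defensive code has survived into the executable.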
Issues resulting from the mismatch between the remit of compiler developers and the demands of functional safety might well go undetected without the use of OCV. Compilers used for highly critical software applications can generally be assumed to fulfil their design criteria in all but very unusual circumstances, but those criteria do not include functional safety considerations. Object code verification currently represents the most assured approach to bringing those considerations within that circle of trust.
The TBobjectbox component of the LDRA tool suite provides a complete Object Code Verification (OCV) solution, including source code to object code traceability and object code coverage analysis.
Video: Object Code Verification using Wind River Diab and Lauterbach TRACE32
Video: Object Code Verification using Texas Instruments Code Composer Studio and a TMS320
Video: Object Code Verification using Texas Instruments Code Composer Studio and a TMS570
LDRA Aerospace Resource Centre
LDRA Defence Resource Centre