Master the unending software development lifecycle of connected systems – or it will master you

If you’re developing safety-critical software, you understand the importance of bi-directional requirements traceability in ensuring that the design reflects the requirements, that the software implementation reflects the design and that the test processes confirm the correct implementation of that software. You also know how painful requirements changes can be, because of the need to identify the affected code and the tests that must be repeated.

Until now, that cycle has concluded with product release. Sure, there might be tweaks in response to field conditions, but the business of development was essentially over. Then came the connected car, the Industrial Internet of Things (IIoT) and the remote monitoring of medical devices. For these and other connected systems, requirements don’t just change in an orderly manner during development. They change without warning, whenever a bad actor finds a new vulnerability or develops a new hack. And requirements keep on changing, not just while the product is being developed but for as long as it is in the field.

Whenever changes become necessary, revised code needs to be reanalyzed statically and all impacted unit and integration tests need to be re-run (regression tested). In an isolated application, the need to support such changes lasts little longer than the development period itself. But connectivity demands the ability to respond to vulnerabilities identified in the field. Each newly discovered vulnerability implies a changed or new requirement, and one demanding an immediate response, even though the system itself may not have been touched by development engineers for quite some time. In such circumstances, being able to isolate and automatically test only the impacted functions becomes much more significant.

This changes the significance and emphasis of product maintenance and adds new importance to automated requirements traceability tools and techniques. By linking requirements, code, static and dynamic analysis results and unit- and system-level tests, the entire software development cycle becomes traceable, making it easy for teams to identify problems and implement solutions faster and more cost effectively—even after product release. This offers developers a vital competitive advantage when the dreaded message “we’ve been hacked” arrives.

Process Objectives and Phases

We’ll use the ISO 26262 automotive functional safety standard as an example, but the same principles apply to other safety-critical industries and standards, such as DO-178C, IEC 61508 and IEC 62304. Although terminology varies, a consistent element is the practice of allocating technical safety requirements in the system design specification and developing that design further to derive an item integration and testing plan. That plan applies to all aspects of the system, with the explicit subdivision into hardware and software development practices addressed as the lifecycle progresses. The relationship between the standard and the software-specific sub-phases can be represented in a V-model (see Figure 1).

Figure 1 – Software-development V-model with cross-references to ISO 26262 and standard development tools. 

System Design

The products of the system-wide design phase can include CAD drawings, spreadsheets, textual documents and many other artifacts, produced using a range of tools. This phase also sees the technical safety requirements refined and allocated to hardware and software. Maintaining traceability between these requirements and the products of subsequent phases generally causes a project management headache.

Requirements management tools range from a simple spreadsheet or Microsoft Word document to purpose-designed products such as IBM Rational DOORS Next Generation or Siemens Polarion REQUIREMENTS. Selecting tools appropriate to the project helps in maintaining bi-directional traceability between the phases of development.

Specification of Software Safety Requirements 

This sub-phase focuses on the specification of software safety requirements to support the subsequent design phases, bearing in mind any constraints imposed by the hardware. It provides the interface between product-wide system design and software-specific requirements, and it details how lower-level, software-related requirements evolve. It will likely continue to leverage the requirements management tools used in the system design phase.
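As a purely illustrative example (the identifiers and wording here are hypothetical, not drawn from any real project), a technical safety requirement allocated to software might be refined as follows: TSR-042, "The item shall detect the loss of wheel-speed data within 50 ms," could give rise to SSR-042.1, "The software shall flag a wheel-speed message as stale if no update is received within 50 ms," and SSR-042.2, "On detection of stale data, the software shall invoke the defined degraded-mode strategy." Each software safety requirement remains traceable to its parent.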

Software Architectural Design

There are many tools available to generate the software architectural design, including MathWorks Simulink, IBM Rational Rhapsody and ANSYS SCADE. Static analysis tools help verify the design by means of control and data flow analysis of the code derived from it, providing graphical representations of the relationship between code components for comparison with the intended design (see Figure 2).

Figure 2 – Graphical representation of control and data flow as depicted in the LDRA tool suite. 
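To illustrate the kind of issue such analysis surfaces, consider this hypothetical fragment (the function and its names are ours, purely for illustration):

#include <stdint.h>

/* Hypothetical example of a data-flow anomaly that static analysis detects. */
int32_t scale_reading(int32_t raw)
{
    int32_t scaled;        /* declared but never initialized             */

    if (raw > 0)
    {
        scaled = raw * 4;  /* defined on this path only                  */
    }

    return scaled;         /* on the raw <= 0 path, an undefined value   */
                           /* is referenced: a classic data-flow anomaly */
}

Control-flow analysis of the same function would show both paths through the if statement, information that also underpins the structural coverage metrics discussed later.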

Software Unit Design and Implementation

The illustration in Figure 3 is a typical example of a table from ISO 26262-6:2011. It shows the coding and modelling guidelines to be enforced during implementation, superimposed with an indication of where compliance can be confirmed using automated tools.

Figure 3 – Mapping the capabilities of the LDRA tool suite to “Table 6: Methods for the verification of the software architectural design” specified by ISO 26262-6.

These guidelines combine to make the resulting code more reliable, less prone to error, easier to test and easier to maintain. Peer reviews represent a traditional approach to enforcing adherence to guidelines, and although they still have an important part to play, automating these tedious checks is far more efficient, repeatable and demonstrable, and is less error-prone. There are many sets of coding guidelines available, such as MISRA, in-house sets, or adaptations to a standard set to make it more appropriate for a particular application (Figure 4).

Figure 4 – Highlighting violated coding guidelines in the LDRA tool suite. 
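As a flavor of what such guidelines demand (our paraphrase of typical MISRA-style rules, not a verbatim citation), consider a switch statement written to comply:

#include <stdint.h>

/* Hypothetical fragment written in the spirit of MISRA-style guidelines:  */
/* fixed-width types, unsigned suffixes on unsigned literals, and a        */
/* mandatory default clause so unexpected values are handled deliberately. */
uint8_t select_gain(uint8_t mode)
{
    uint8_t gain;

    switch (mode)
    {
        case 0U:
            gain = 1U;
            break;
        case 1U:
            gain = 4U;
            break;
        default:       /* required even if believed unreachable */
            gain = 0U;
            break;
    }

    return gain;       /* single point of exit */
}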

Establishing appropriate project guidelines for coding, architectural design and unit implementation involves three discrete tasks, but software developers responsible for implementing the design need to be mindful of them all concurrently. The guidelines for architectural design and unit implementation share the same underlying aim as the coding guidelines: more reliable, less error-prone, easier-to-test and easier-to-maintain code.

Figure 5 – Output from control and data coupling analysis as represented in the LDRA tool suite. 

Static analysis tools can provide metrics to demonstrate compliance with the standard: complexity metrics as a product of interface analysis, cohesion metrics evaluated through data object analysis, and coupling metrics via data and control coupling analysis (Figure 5). Static analysis can also confirm that the practices required by the standard are adhered to, whether they are coding rules or architectural design principles. In practice, the role of such a tool often evolves from a mechanism for highlighting violations into a means of confirming that there are none.
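To make one such metric concrete: a common formulation of cyclomatic complexity counts the decision points in a function and adds one. For the hypothetical function below, the for loop, the if and the else if each contribute a decision, giving a complexity of four:

#include <stdint.h>

/* Hypothetical function with three decision points, so V(G) = 3 + 1 = 4. */
int32_t count_out_of_range(const int32_t *buf, int32_t len)
{
    int32_t count = 0;

    for (int32_t i = 0; i < len; i++)   /* decision 1 */
    {
        if (buf[i] > 100)               /* decision 2 */
        {
            count++;
        }
        else if (buf[i] < -100)         /* decision 3 */
        {
            count++;
        }
    }

    return count;
}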

Software Unit Testing and Software Integration and Testing

Just as static analysis techniques (involving an automated “inspection” of the source code) are applicable across the sub-phases of coding, architectural design and unit implementation, dynamic analysis techniques (involving the execution of some or all of the code) are applicable to unit, integration and system testing. Unit testing focuses on particular software procedures or functions in isolation, whereas integration testing ensures that safety and functional requirements are met when units are working together in accordance with the software architectural design.

ISO 26262-6:2011 tables list techniques and metrics for performing unit and integration tests on target hardware to ensure that the safety and functional requirements are met and software interfaces are verified at the unit and integration levels. Fault injection and resource tests further prove robustness and resilience and, where applicable, back-to-back testing of model and code helps to prove the correct interpretation of the design.

Artifacts associated with these techniques provide both a reference for their management and evidence of their completion. They include the software unit design specification, test procedures, verification plan and verification specification. On completion of each test procedure, pass/fail results are reported and compliance with the associated requirements is verified.

The example in Figure 6 shows how the software interface is exposed at the function scope, allowing the user to enter inputs and expected outputs to form the basis of a test harness. The harness is then compiled and executed on the target hardware, and the actual and expected outputs are compared.

Figure 6 – Performing requirement-based unit testing using the LDRA tool suite. 
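Tools generate such harnesses automatically, but purely to make the mechanism concrete, a hand-written equivalent might reduce to the sketch below (the unit under test and its values are hypothetical):

#include <stdio.h>
#include <stdint.h>

/* Unit under test (hypothetical). */
int32_t saturate(int32_t value, int32_t limit)
{
    if (value > limit)  { return limit; }
    if (value < -limit) { return -limit; }
    return value;
}

/* Minimal harness: drive inputs, compare actual against expected outputs. */
int main(void)
{
    struct { int32_t in, limit, expected; } cases[] = {
        {   50, 100,   50 },   /* within range: passed through   */
        {  150, 100,  100 },   /* above limit: clipped to +limit */
        { -150, 100, -100 },   /* below -limit: clipped to -limit */
    };
    int failures = 0;

    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++)
    {
        int32_t actual = saturate(cases[i].in, cases[i].limit);
        if (actual != cases[i].expected)
        {
            printf("case %zu: expected %d, got %d\n",
                   i, (int)cases[i].expected, (int)actual);
            failures++;
        }
    }
    printf("%s\n", (failures == 0) ? "PASS" : "FAIL");
    return failures;
}

On a real project the harness would be cross-compiled and executed on the target hardware, with the results harvested back to the host for comparison and reporting.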

Unit tests become integration tests as units are introduced as part of a call tree rather than being stubbed; exactly the same test data can be used to validate the code in both cases. Boundary values can be analyzed by automatically generating a series of unit test cases, complete with associated input data. An extension to that facility provides for the definition of equivalence boundary values: the minimum value of the type, the value just below the lower partition boundary, the lower partition boundary itself, the upper partition boundary and the value just above it.
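For a hypothetical input specified as valid between 0 and 200, for instance, that facility would yield test values of INT32_MIN, -1, 0, 200 and 201 (assuming a 32-bit signed representation).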

Should changes become necessary—as a result of a failed test or in response to a requirement change from a customer—then all impacted unit and integration tests would need to be re-run (regression tested). Automatically re-applying those tests through the tool ensures that the changes do not compromise any established functionality.

In addition to showing that the software functions correctly, dynamic analysis also generates structural coverage metrics. These metrics provide the necessary data to evaluate the completeness of test cases and to demonstrate that there is no unintended functionality (Figure 7).

Figure 7 – Examples of representations of structural coverage within the LDRA tool suite. 

Metrics may include functional, call, statement, branch and MC/DC coverage. Unit and system test facilities can operate in tandem, so that coverage data can be generated for most of the source code through a dynamic system test and then be complemented using unit tests. This will exercise, for example, any defensive constructs that are inaccessible during normal system operation.
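To make the MC/DC criterion concrete, consider a hypothetical two-condition decision:

#include <stdint.h>

extern void apply_torque(void);   /* hypothetical actuator call */

void command_torque(int32_t speed_valid, int32_t torque_request)
{
    /* MC/DC requires each condition to be shown, across some pair of */
    /* tests, to independently affect the outcome of the decision.    */
    if ((speed_valid != 0) && (torque_request > 0))
    {
        apply_torque();
    }
}

Here, the input vectors (true, true), (false, true) and (true, false) achieve MC/DC: the first and second differ only in the first condition yet change the outcome, and the first and third do the same for the second condition. In general, N+1 test cases suffice for a decision with N conditions, rather than the 2^N needed for exhaustive multiple-condition coverage.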

Bi-Directional Traceability

Bi-directional traceability requires each development phase to accurately reflect the one before it. In theory, if the exact sequence of the V-model is adhered to, then the requirements will never change and tests will never throw up a problem. But life’s not like that.

What happens if there is a code change in response to a failed integration test, perhaps because the requirements are inconsistent or there is a coding error? What other software units were dependent on the modified code? Such scenarios can quickly lead to situations where the traceability between the products of software development falls down.

Software unit design can take many forms, from a natural-language detailed design document to a model-based representation. Either way, these design elements need to be bi-directionally traceable both to the software safety requirements and to the software architecture. The software units must then be implemented as specified and remain traceable to their design specifications.
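One lightweight way to make those links explicit, independent of any particular tool, is a tagging convention carried through the code (the identifiers below are hypothetical and echo the earlier requirement example):

#include <stdbool.h>
#include <stdint.h>

/* Implements:  SSR-042.1 (stale wheel-speed detection)     */
/* Designed in: detailed design document SWU-DD-017, s.3.2  */
/* Verified by: test cases TC-0421a to TC-0421d             */
bool wheel_speed_is_stale(uint32_t age_ms)
{
    return (age_ms > 50U);   /* 50 ms threshold from SSR-042.1 */
}

Requirements traceability tools maintain such relationships in a database rather than in comments, but the associations they manage are of exactly this kind.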

Automated requirements traceability tools establish associations between requirements and test cases of different scopes, which allows test coverage to be assessed (Figure 8). The impact of failed test cases can be assessed and addressed, as can the impact of requirements changes and any gaps in requirements coverage. And artifacts such as traceability matrices can be generated automatically to present evidence of standards compliance.

Figure 8 – Establishing associations between requirements and test cases, shown in the “Uniview” of the TBmanager component of the LDRA tool suite.

Initial structural coverage is usually accrued as part of this process from the execution of functional tests on instrumented code, leaving unexecuted portions of code that require further analysis. That analysis results in the addition or modification of test cases, changes to requirements and the removal of dead code. Typically, an iterative sequence of review, correction and analysis ensures that design specifications are satisfied.

Automate Functional Safety During the Development Lifecycle and Beyond

Although functional safety standards make significant contributions to both safety and security, there is no doubt that they also bring overhead. The application of automated tools throughout the development lifecycle can help considerably to minimize that overhead, while removing much of the potential for human error from the process.

Never has that been more significant than today. Connectivity changes the notion of a development process that ends when a product is launched. Whenever a new vulnerability is discovered in the field, there is a resulting requirement change to cater for it. Responding to those changes places new emphasis on the need for an automated solution, both during the development lifecycle and beyond.