While connected systems have resulted in new opportunities for easier monitoring, upgrading, and enhancement, they have also presented more vulnerable attack surfaces. Unfortunately, no single defense of a connected system can guarantee impenetrability. Fortunately, there are multiple levels of security to ensure that if one level fails, others stand guard.
These defense-in-depth approaches can include secure boot, which ensures the correct image loads; domain separation; multiple independent levels of security (MILS) design principles, such as least privilege; attack surface reduction; security-focused testing, such as static and dynamic analysis; and secure coding techniques.
While secure application code does little to defend a connected embedded system if the underlying architecture is insecure, it does have a key part to play in a system designed with security in mind.
Defense-in-depth and the V-model
Traditionally, the practice for secure code verification has been largely reactive. Code is developed by following somewhat loose guidelines and then subjected to performance, penetration, load, and functional testing to find vulnerabilities, which are fixed later.
A better, more proactive approach ensures code is secure by design—a “shift left” along the timeline. That implies a systematic development process, where the code is written in accordance with secure coding standards, is traceable to security requirements, and is tested to demonstrate compliance with those requirements as development progresses.
This proactive approach integrates security-related best practices into the V-model software development life cycle that is familiar to developers in the functional safety domain. The resulting Secure Software Development Life Cycle (SSDLC) represents a shift left for security-focused application developers and provides a practical approach to ensuring that vulnerabilities are designed out of the system or addressed in a timely and thorough manner.
The same principles can be applied to the DevOps lifecycle, resulting in what has become known as DevSecOps. Although the context differs between DevSecOps and the SSDLC, shift left implies the same thing for both—that is, an early and ongoing consideration of security.
Test early and often
All the security-related tools, tests, and techniques described here have a place in each life cycle model. In the V model, they are largely analogous and complementary to the processes usually associated with functional safety application development (Figure 1).
Figure 1: Use of security test tools and techniques in the V-model based secure software development life cycle (SSDLC)
In the DevSecOps model, the DevOps life cycle is superimposed with security-related activities throughout the continuous development process (Figure 2).
Figure 2: Use of security test tools and techniques in the DevSecOps process model
Requirements traceability is maintained throughout the development process in the case of the V-model, and for each development iteration in the case of the DevSecOps model (shown in orange in each figure).
Some SAST (static) tools are used to confirm adherence to coding standards, ensure that complexity is kept to a minimum, and check that code is maintainable. Others are used to check for security vulnerabilities but only to the extent that such checks are possible on source code without the context of an execution environment.
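To illustrate the kind of rule such tools automate, the following is a minimal sketch of a source-level pattern check, flagging calls to C library functions that secure coding standards commonly ban. A real SAST tool parses the code properly rather than matching text, and the sample source is invented for illustration.

```python
import re

# Functions commonly prohibited by secure C coding standards
# (for example, CERT C discourages gets, strcpy, and sprintf).
BANNED = {"gets", "strcpy", "sprintf"}

def scan(source: str):
    """Return (line_number, function) for each banned call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for fn in BANNED:
            if re.search(rf"\b{fn}\s*\(", line):
                findings.append((lineno, fn))
    return findings

sample = """\
#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);   /* unbounded copy: flagged */
}
"""
print(scan(sample))  # -> [(3, 'strcpy')]
```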
White-box DAST (dynamic) enables compiled and executed code to be tested in the development environment or, better still, on the target hardware. Code coverage facilitates confirmation that all security and other requirements are fulfilled by the code, and that all code fulfills one or more requirements. These checks can even go to the level of object code if the criticality of the system requires it.
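The principle behind coverage measurement can be sketched as follows: the function under test records which branches execute, and the test set is judged adequate only when every branch has been taken. The function and branch names here are invented for illustration; real tools instrument the code automatically.

```python
# Minimal branch-coverage sketch: each branch records its execution,
# and the test set is complete only when every branch has been hit.
hits = set()

def classify_temperature(celsius):
    if celsius < 0:
        hits.add("below_zero")
        return "freezing"
    elif celsius > 100:
        hits.add("above_boiling")
        return "boiling"
    hits.add("normal")
    return "normal"

# A requirements-based test set exercising every branch.
for value in (-5, 150, 20):
    classify_temperature(value)

all_branches = {"below_zero", "above_boiling", "normal"}
print(hits == all_branches)  # -> True
```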
Robustness testing can be used within the unit test environment to help demonstrate that specific functions are resilient, whether in isolation or in the context of their call tree.
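A robustness test of this kind exercises a function at and beyond its boundaries to confirm that it fails safely rather than misbehaving. The following sketch uses a hypothetical duty-cycle clamping function, not one from the article, to show nominal, boundary, and out-of-range cases.

```python
def clamp_duty_cycle(percent):
    """Clamp a PWM duty-cycle request to the valid 0-100 range."""
    if not isinstance(percent, (int, float)):
        raise TypeError("duty cycle must be numeric")
    return max(0, min(100, percent))

# Nominal, boundary, and out-of-range cases.
assert clamp_duty_cycle(50) == 50
assert clamp_duty_cycle(0) == 0      # lower boundary
assert clamp_duty_cycle(100) == 100  # upper boundary
assert clamp_duty_cycle(-1) == 0     # below range: clamped
assert clamp_duty_cycle(101) == 100  # above range: clamped
print("robustness checks passed")
```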
Fuzz and penetration black-box testing techniques traditionally associated with software security remain of considerable value, but in this context are used to confirm and demonstrate the robustness of a system designed and developed with a foundation of security.
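A fuzzing harness can be sketched very simply: feed pseudo-random inputs to an interface and confirm that nothing unexpected happens. The header format and `parse_header` function below are invented for illustration; production fuzzers such as AFL or libFuzzer add coverage guidance and run far longer.

```python
import random

def parse_header(data: bytes):
    """Parse a 4-byte header: 2-byte magic 0xCAFE, then a 2-byte length."""
    if len(data) < 4:
        raise ValueError("short header")
    if data[0] != 0xCA or data[1] != 0xFE:
        raise ValueError("bad magic")
    return (data[2] << 8) | data[3]

random.seed(0)  # deterministic, so the run is repeatable
crashes = 0
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_header(blob)
    except ValueError:
        pass          # rejecting malformed input is the desired behavior
    except Exception:
        crashes += 1  # any other exception is a robustness defect
print(crashes)  # -> 0
```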
Provide bidirectional traceability
The IEEE Standard Glossary of Software Engineering Terminology defines traceability as “the degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another.” Bidirectional traceability means that traceability paths are maintained both forward and backward (Figure 3).
Automation makes it much easier to maintain traceability in a changing project environment.
Figure 3: Bidirectional traceability
Forward traceability demonstrates that all requirements are reflected at each stage of the development process, including implementation and test. Impact analysis can assess the effect of any change to requirements, or of any failed test case, so that it can be addressed. The resulting implementation can then be retested to provide evidence of continued adherence to the principles of bidirectional traceability.
Equally important is backward traceability, which highlights code that fulfills none of the specified requirements. Such code may result from oversight, faulty logic, feature creep, or the insertion of malicious backdoor methods, any of which can introduce security vulnerabilities or errors.
It is essential to remember that the life cycle of a secure embedded artifact continues until the last unit in the field is no longer in use. Any compromise of such an artifact demands an immediate response in the form of a changed or new requirement, often affecting source code that development engineers have not touched for a long time. In such circumstances, automated traceability can isolate the affected functions and enable automatic retesting of only those functions.
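The mechanics of such automated traceability can be sketched with a simple mapping between requirements, code, and tests. The requirement IDs and function names below are invented for illustration; real tools derive these links from the project's requirements management and test data.

```python
# Sketch of bidirectional traceability: forward from requirements to code
# and tests, backward from code to requirements.
trace = [
    # (requirement, implementing function, verifying test)
    ("SEC-001", "validate_login", "test_validate_login"),
    ("SEC-002", "encrypt_session", "test_encrypt_session"),
]
all_functions = {"validate_login", "encrypt_session", "legacy_debug_hook"}

# Forward: every requirement maps to an implementation and a test.
forward = {req: (fn, test) for req, fn, test in trace}

# Backward: code with no requirement is a finding
# (an oversight, dead code, or possibly a backdoor).
traced_functions = {fn for _, fn, _ in trace}
untraced = all_functions - traced_functions
print(sorted(untraced))  # -> ['legacy_debug_hook']

# Impact analysis: a change to one function selects only its tests to rerun.
changed = "validate_login"
affected_tests = sorted(t for _, fn, t in trace if fn == changed)
print(affected_tests)  # -> ['test_validate_login']
```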
Shift left in practice
The concepts embraced by the shift-left principle are familiar to individuals and teams developing safety-critical applications. For many years, functional safety standards have demanded a similar approach. Consequently, many best practices proven in the functional safety domain apply to the security-critical applications previously discussed, including establishing functional and security requirements at the outset (V-model) or before each iteration (DevSecOps), testing early and often, and tracing requirements bidirectionally through all stages of development.
November 15, 202
Mark Pitchford has more than 30 years of experience in software development for engineering applications. He has worked on many significant industrial and commercial projects in development and management, both in the UK and internationally. Since 2001, he has worked with development teams looking to achieve compliant software development in safety and security-critical environments, working with standards such as DO-178, IEC 61508, ISO 26262, IIRA and RAMI 4.0. Mark earned his bachelor of science degree at Trent University, Nottingham, and has been a chartered engineer for more than 20 years. He now works as technical specialist with LDRA.