Digital transformation promises to deliver cleaner, greener, and more efficient technology solutions by bringing data-driven operations to the real world. Smart buildings that adapt to required usage, smart vehicles that optimize transit, and smart machines that sense their environment and adapt to optimize output are all within reach.
Digital twins are an essential part of this transition. Still, they need to operate securely and safely. To be adopted at scale, they need an understandable and interoperable model for maintaining security and safety assurance, one that satisfies all stakeholders: technical, business, and regulatory. Best practices for trustworthiness characteristics such as safety and security should be followed, but more is needed.
In a complex software- and data-driven environment, things can change rapidly. As a result of new vulnerabilities or a lack of maintenance, something that was once fit for purpose may no longer be. This raises a need for continuous assurance of meeting security and safety requirements while considering changes, even dynamic changes, in system composition or operating parameters.
Because cyber-physical systems have real-world consequences, all the trustworthiness characteristics (safety, security, privacy, resilience, and reliability) must be considered holistically. Many security initiatives and standards exist, but none considers all of the characteristics whose compromise can result in losses. Similarly, safety standards for mechanical systems are mature and respected but cannot necessarily address all the concerns raised by dynamic, complex software-based systems. To use digital twins successfully, operators must have visibility and control over all five trustworthiness characteristics.
The trust vector is a model for making these critical decisions.
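One way to picture the idea is to sketch a trust vector as a record holding one score per trustworthiness characteristic, which an operator can compare against a required minimum. This is an illustrative sketch only: the field names, the 0.0–1.0 scoring scale, and the `meets` check are assumptions for the example, not a standardized definition.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class TrustVector:
    """Illustrative trust vector: one score (0.0-1.0) per
    trustworthiness characteristic. Fields and scale are assumptions."""
    safety: float
    security: float
    privacy: float
    resilience: float
    reliability: float

    def meets(self, required: "TrustVector") -> bool:
        """True only if every characteristic meets its required minimum."""
        return all(
            getattr(self, f.name) >= getattr(required, f.name)
            for f in fields(self)
        )

# A hypothetical operator policy and a system's currently assessed state:
required = TrustVector(safety=0.9, security=0.8, privacy=0.7,
                       resilience=0.6, reliability=0.8)
assessed = TrustVector(safety=0.95, security=0.75, privacy=0.9,
                       resilience=0.7, reliability=0.85)

print(assessed.meets(required))  # security 0.75 < 0.80, so False
```

Representing the five characteristics as a single comparable value is what lets the check be re-run continuously as scores change, rather than decided once at design time.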
Today’s safety and security landscape is mainly static and avoidance-based. Typically, it relies on a list of known things not to do and on control measures that support safety and security based on a concrete understanding of exactly how the system is composed, how it has operated in the past, and the static environment for which it is intended. This static approach is safe but inflexible, too inflexible to deal with the realities of today’s software-based and highly connected systems. The price paid for relative certainty at the design stage is the inability to move to new operating models or adapt to new environmental conditions during the much longer operational stage. To communicate, devices have needed explicit, design-time integration and special code adaptations to speak each other’s protocols, making combinations static and favoring pre-existing relationships.
The advent of technologies that enable new, even ad hoc connections between systems provides more flexibility in system design and operation, and gives access to more sensor data that can be invaluable in making good safety decisions. This flexibility requires an approach to certifying dynamically composed systems that differs from the static ‘proven-in-use’ approach.