
Saturday, 20 June 2020

Reliable Models

The governed world is hard to model. It's not a formal system: nothing is constant; nothing is precise; no event is truly atomic; every assertion is contingent; every causal link is vulnerable; everything is connected to everything else. This is the right-hand-side problem: no formal model can be perfectly faithful to the reality. This matters because the designed system behaviour relies on model fidelity: deviation brings system failure. But in practice all is not lost. A single unified model is unattainable; but specific models can be adequately faithful, at some time, for some period, in some context, for some purpose. The question, then, is how to take best advantage of these local, transient and conditional episodes of adequate fidelity?

The question answers itself. The desired system behaviour is an assemblage of concurrent constituent behaviours, providing different functions in different circumstances and contexts, and imposing different demands on the governed world. This complex assemblage responds to the same variability of context, need and purpose that invalidates any single unified model. The answer to the question is clear: the model must be structured to match the behaviour structure.

This correspondence of model and behaviour structures motivates the idea of a triplet. For each simple behaviour, a triplet brings together (1) a machine program; (2) a governed world model of the specific properties on which the program depends; and (3) the governed world behaviour resulting from the interaction of (1) and (2). Concurrent constituent behaviours rely on the concurrent validity of their respective models. If two behaviours cannot overlap in time, their models need not be consistent. Each constituent model and its corresponding behaviour are designed together. Reliability of a constituent model is judged relative to the context and demands of its corresponding constituent behaviour.
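
As a minimal sketch, the triplet idea can be put into code (all names here are hypothetical, for illustration only): a triplet pairs a machine program with the governed world model it depends on, and names the behaviour their interaction yields.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class WorldModel:
    domains: frozenset[str]         # governed world domains the model covers
    assumptions: tuple[str, ...]    # causal properties the program relies on

@dataclass(frozen=True)
class Triplet:
    program: Callable[[], None]     # (1) the machine program
    model: WorldModel               # (2) the model the program depends on
    behaviour: str                  # (3) the resulting governed world behaviour

def cruise_program() -> None:
    ...  # stands in for the real control loop

cruise = Triplet(
    program=cruise_program,
    model=WorldModel(
        domains=frozenset({"engine", "drive train", "wheels"}),
        assumptions=(
            "throttle opening raises engine torque",
            "torque at the wheels raises road speed",
        ),
    ),
    behaviour="maintain the driver-set road speed",
)
```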

This co-design of model and behaviour structure realises modelling-in-the-large. By developing each constituent model along with its associated machine program, it also focuses and simplifies the task of modelling-in-the-small. Each constituent model is small because it is limited to just those domains and causal links that participate in the corresponding behaviour. It is simple because its behaviour satisfies the criteria of triplet simplicity. For example, a governed world domain may play only one participant role in the behaviour; modelled causal properties are not themselves mutable; the operational principle of the behaviour must be simply structured. The simplicity and small size of the model also enable and guide a deeper and more careful—and carefully documented—investigation of the residual risks to reliability.

Later, when constituent behaviours are to be combined in a bottom-up assembly of system behaviour, their respective models are ready to hand, their vulnerabilities to interference already examined. The combination task is easier. Only those models are to be reconciled and combined whose associated behaviours can be concurrently enacted; difficulty in reconciling two models may rule out concurrent enactment. For example, cruise control and self-parking cannot be enacted concurrently: their functional combination makes no behavioural sense, and—not coincidentally—their governed world models are irreconcilable.
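
A sketch of the combination rule, with hypothetical behaviour names: only those pairs whose behaviours can be concurrently enacted need their models reconciled, and an irreconcilable pair is simply excluded from concurrent enactment.

```python
# Pairs of constituent behaviours whose governed world models are
# irreconcilable; such pairs must never be enacted concurrently.
IRRECONCILABLE: set[frozenset[str]] = {
    frozenset({"cruise-control", "self-parking"}),
}

def may_run_concurrently(a: str, b: str) -> bool:
    """True iff behaviours a and b are permitted to overlap in time."""
    return frozenset({a, b}) not in IRRECONCILABLE

assert not may_run_concurrently("cruise-control", "self-parking")
assert may_run_concurrently("cruise-control", "lane-departure-warning")
```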

Links to other posts:
 ↑ The Right-Hand Side: Why the model-reality relationship is problematic
 ↑ System Behaviour Complexity: The component structure of system behaviour
 ← Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements

Sunday, 2 February 2020

Behaviour Is the Core and Focus

David Harel and Amir Pnueli [1] confidently asserted that "A natural, comprehensive, and understandable description of the behavioural aspects of a system is a must in all stages of the system’s development cycle, and, for that matter, after it is completed too." Even this confident assertion is an understatement. System behaviour is more than a must in CPS development: it is the very heart and focus of the system, and the essential product of its development.

Developing dependable system behaviour is difficult. The essence of the task is programming the bipartite system—the quasi-formal machine, and the decidedly non-formal physical world. Making a unified program for this heterogeneous combination is hard. It cannot be achieved by separating the two parts at a formal specification interface, because neither part can be understood without the other.

System behaviour is where the software engineers meet the stakeholders. What is the only thing the machine in a system can do? It can govern behaviour in the world. What can ensure satisfaction of the most important stakeholder requirements? The governed behaviour.

Success in software engineering for a system is success in behaviour development. Desired and dependable governed behaviour is the crucial criterion of success. An occurrence of undesired and unexpected behaviour is an instance of system failure.

Where is the chief complexity in a cyber-physical system? In the physical world interactions of the constituent behaviours that make up the whole system behaviour.

Why must software engineers develop formal models of the non-formal physical world? Because those models must guide and justify the design of the machine's software that will evoke the system behaviour.

Developing system behaviour is a prime application of the principle of incremental complication. The elementary components from which the system behaviour is constructed are triplets. A triplet is a simple machine with its governed world, and the behaviour they evoke. Triplets that can execute concurrently must be reconciled to eliminate mutual conflict, sometimes being modified from their initial simple form. In a further design stage the control of concurrency is introduced to manage contingencies such as the need for pre-emptive termination of a behaviour.
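
The further design stage might look like this in miniature (hypothetical names and pre-emption rules): a concurrency controller terminates one behaviour pre-emptively when a conflicting behaviour must start.

```python
class ConcurrencyController:
    """Minimal sketch of concurrency control with pre-emptive termination."""

    def __init__(self) -> None:
        self.active: set[str] = set()
        # Hypothetical rule: anti-skid braking pre-empts cruise control.
        self.preempts: dict[str, set[str]] = {
            "anti-skid-braking": {"cruise-control"},
        }

    def start(self, behaviour: str) -> None:
        for victim in self.preempts.get(behaviour, set()) & self.active:
            self.stop(victim)               # pre-emptive termination
        self.active.add(behaviour)

    def stop(self, behaviour: str) -> None:
        self.active.discard(behaviour)

c = ConcurrencyController()
c.start("cruise-control")
c.start("anti-skid-braking")                # cruise control is terminated
assert c.active == {"anti-skid-braking"}
```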

[1] David Harel and Amir Pnueli; On the development of reactive systems; in K R Apt ed, Logics and models of concurrent systems, pages 477-498; Springer-Verlag New York, 1985.

Links to other posts:
 ↑ Cyber-Physical Systems:  The essential character of a cyber-physical system
 ↓ System Behaviour Complexity:  The component structure of system behaviour
 → Triplets:  Triplets are system behaviour elements: (Machine+World=Behaviour)

Wednesday, 8 January 2020

System Behaviour Complexity

Your car is a cyber-physical system. You’re driving on the highway. The engine is powering the wheels through the drive train and the car is responding to the controls: that’s the core automotive behaviour. Cruise control is active; air conditioning is cooling the cabin; the lane departure warning feature is monitoring the car’s position in the lane; anti-skid braking is ready to avoid wheel lock in an emergency stop; active suspension is smoothing out bumps and maintaining safe attitude in acceleration and cornering. These are concurrent constituent behaviours of the complex system behaviour.

Other constituent behaviours are not active right now: the car can park itself automatically; a speed limiting feature can override reckless driving; stop-start can save fuel, turning off the engine when the car halts and restarting automatically when the driver wants to move away; each regular driver’s chosen settings of seat, wheel, and mirror positions can be saved, and restored automatically when the driver’s personal ignition key is used on another occasion. These too are constituent behaviours, available to contribute to the complex behaviour of the whole system.
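
The highway scenario can be summarised in a sketch (hypothetical names): at any moment some constituent behaviours are enacted and others are merely available, and the whole system behaviour is the concurrent enactment of the active ones.

```python
# Constituent behaviours currently enacted on the highway ...
ACTIVE = {
    "core-automotive", "cruise-control", "air-conditioning",
    "lane-departure-warning", "anti-skid-braking", "active-suspension",
}
# ... and those merely available to contribute on other occasions.
AVAILABLE = {
    "self-parking", "speed-limiting", "stop-start", "settings-restore",
}

assert ACTIVE.isdisjoint(AVAILABLE)
```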

Your car is just one vivid illustration: complex system behaviour structured from simpler behaviour constituents is characteristic of realistic cyber-physical systems—aircraft, process plants, railway interlocking, even medical infusion pumps. In these complex behaviours dependability is vital—and demands structure.

For any structure two concerns are basic. First: how should we choose and construct the parts? Second: how should we design the connections between them? In a famous 1976 paper [1], Frank DeRemer and Hans Kron argued that for a large program of many modules programming-in-the-small and programming-in-the-large are distinct intellectual activities. For developing complex system behaviour the same argument is compelling. Behaviour-design-in-the-small identifies and develops individual constituent behaviours. Behaviour-design-in-the-large structures their relationships and interactions. These are distinct tasks.

Design-in-the-small should precede design-in-the-large both in exposition and practice. It is by stipulation simpler, allowing earlier discovery of defects of many kinds. Design-in-the-large is about combining parts, and it is folly to address combination before achieving understanding of the parts to be combined (this is why top-down design so often fails). In a constituent behaviour intrinsic and combinational sources of complexity can be carefully separated, giving a substantial benefit both to its developers and to participants in its enactments.

[1] Frank DeRemer and Hans Kron; Programming-in-the-large versus Programming-in-the-small; IEEE Transactions on Software Engineering Volume 2 Number 2, pages 80-87, June 1976.

Links to other posts:
 ↓ Triplets:  Triplets are system behaviour elements: (Machine+World=Behaviour)

Wednesday, 25 December 2019

Software Engineering

Software engineering has two distinct facets.

The first facet is engineering OF software. Here the system is the computer; the product is the symbolic computation and its results.
The second facet is engineering BY software. Here the system is the physical world and the computer together; the product is the physical behaviour in the physical world evoked by their interaction.

The facets differ in two major ways. In engineering OF software the only physical part of the whole system is the computing equipment itself. This equipment has been developed over many decades to provide reliable physical implementation of formal operations on mathematical and other abstract objects. As Fetzer observed [1], correctness of an abstract algorithm cannot guarantee its correct execution on a physical computer; but, pace Fetzer, computers are reliable enough for many purposes, and in practice his observation can often be ignored. Further, on a reliable computer, operating systems and compilers can provide a homogeneous programming environment in which complex problems can be solved by expressing useful abstractions in a single programming language.

In contrast, engineering BY software confronts the engineer with the essentially non-formal nature of the world at the scales relevant to a cyber-physical system. No mathematical or formal model of this world can be perfectly faithful, and every assertion about its properties and happenings is inescapably contingent on a bottomless recursion of side conditions. Structures of abstractions cannot work perfectly: abstractions are always vulnerable to contradiction at a concrete level. For a critical system, much engineering effort must be expended in several directions to overcome or mitigate these problems. Further, the physical world for most systems is heterogeneous: its relevant properties resist capture in any one language—and certainly in any programming language.

This blog is about both facets of software engineering. It's about cyber-physical systems, in which the software governs behaviour in the given physical world: that's definitely engineering BY software. (The word given is important here: the role of software engineering per se is not to modify the governed world's domains, but only their collective behaviour.) It's about engineering OF software, too: for systems of realistic size, transforming and restructuring the software for deployment and efficient execution is a vital engineering task.

[1] James H Fetzer; Program Verification: The Very Idea; CACM Volume 31 Number 9, pages 1048-1063, September 1988.

Sunday, 8 December 2019

Avoiding Failure

Programmers need to address failure concerns—to avoid the design errors that enable common failures. Failure examples include numerical overflow, floating-point underflow, null pointer dereference, memory leakage, and using uninitialised variable values: all these are well-known. Compilers for some languages—such as SPARK Ada—diagnose many of these design errors by static analysis.
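
A sketch of the checklist idea (the entries are the failure examples above; the review function is hypothetical): the concerns are explicit data, and a review reports which of them have not yet been checked off.

```python
FAILURE_CONCERNS = [
    "numerical overflow",
    "floating-point underflow",
    "null pointer dereference",
    "memory leakage",
    "use of an uninitialised variable value",
]

def review(findings: dict[str, bool]) -> list[str]:
    """Return the catalogued concerns not yet checked off for a design."""
    return [c for c in FAILURE_CONCERNS if not findings.get(c, False)]

# The design has so far been checked against one concern only.
print(review({"numerical overflow": True}))
```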

In software engineering for cyber-physical systems the same need is critical. In each development task a software engineer should be explicitly aware of a checklist [1]—a catalogue of failure concerns against which the work must be checked. The whole catalogue is open-ended. Responsible engineering disciplines are characterised by steadily increasing quality and scope of the concern catalogue: when a major failure occurs in system operation, rigorous investigation and analysis refine or enlarge existing concerns or add new ones. In response to several fatal crashes of the de Havilland Comet passenger aircraft in 1953-1954, the whole recoverable debris from one crash was raised from the sea bed and reassembled. It was eventually discovered that metal fatigue cracks had developed and grown in the corners of the square windows of the fuselage. Metal fatigue was explicitly recognised as a major failure concern in aircraft flying at high speed and high altitude: since then, rounded passenger windows in aircraft have been universal.

In cyber-physical systems, sources of failure are unbounded. Failure is never impossible, and developments in applications and technologies continually add new possibilities. If a system is more than a toy, and the consequences of failure cannot be neglected, an established catalogue of failure concerns, and the available means of addressing them, is an essential tool. System failures usually have many contributory causes, and failure concerns overlap and interact. But we may still focus our attention on catalogues of basic failure concerns for three broad areas of development. The first is triplet concerns, in the initial isolated development of individual triplets, including incremental complication for fault tolerance. The second is combination concerns, in combining constituent behaviours and eventually assembling the complete system. The third is model concerns, addressing the ever-present difficulty of developing formal models adequately faithful to non-formal physical realities.

The development, use, and continuing improvement of explicit catalogues of concerns may seem unnecessarily bureaucratic. It is not. In the process of development, and in reviewing its products, many approaches focus—understandably—on what we might call the positive thrust of development. The developers ask themselves “What are the requirements?”, and the reviewers ask whether the resulting products satisfy those requirements. But this emphasis on the positive has a strong self-limiting tendency: it works to an agenda whose items are achievements of product success. A complementary agenda of finding product failures is no less—perhaps far more—important. Looking for potential failures is hard. And it’s even harder if you don’t know what you are looking for.

[1] Atul Gawande; The Checklist Manifesto: How to Get Things Right; Henry Holt, 2009.

Thursday, 28 November 2019

Physical Bipartite System

The essential structure and behaviour of a physical bipartite system are shown in this diagram:

[Diagram: solid rectangles for the system S and its parts, the machine M and the governed world W; a solid line A for the physical interface between them; named dashed ellipses for the behaviours B, P and G.]

The solid rectangles represent the system S and its parts—the machine M and the governed world W. The stripes on the M rectangle indicate that M is the machine, executing software designed to achieve the system's purposes. The solid line A represents the physical interface at which M and W interact. Behaviours of the machine, the governed world, and the system comprising them are represented by the named dashed ellipses.

Focusing attention on certain aspects of the physical system while purposefully ignoring others, this view helps to structure the work of development, understanding, and analysis. Specifically, it shows that we regard the machine and the governed world—the two parts of the bipartite system—as physically disjoint. The interface A is not itself a distinct physical object. It is a collection of individual ports, channels, variables, sensors and actuators, each belonging physically either to M or to W but not to both. Physical interactions taking place between M and W are mediated by these parts of A.
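
This view of A can be sketched in code (hypothetical names, anticipating the lift system discussed below): the interface is a collection of individual ports, each belonging physically to exactly one of M and W.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class Port:
    name: str
    kind: Literal["sensor", "actuator", "channel", "variable"]
    belongs_to: Literal["M", "W"]   # one part or the other, never both

A = [
    Port("floor-sensor", "sensor", "W"),
    Port("hoist-motor-switch", "actuator", "W"),
    Port("motor-command-register", "variable", "M"),
]
```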

Internally, the machine and the governed world are structures of interacting entities or domains. The machine may be distributed over all or part of several physical computers, each having physically distinct parts—for example, an ALU (arithmetic and logical unit), registers, and a store. The governed world of a lift system has lift shafts, floors, doors, hoist motors and cables, and sensors and switches—but not only these. To understand a behaviour we must consider all its participant domains: so the lift passengers, engineers and inspectors are also governed world domains.

The domains of the machine and of the governed world provide the infrastructure necessary for enacting their respective behaviours. Behaviours are temporal structures of phenomena—events and state values and changes—occurring within and between domains. For example: a passenger presses a button; a lift car stops rising and halts at a floor; a register value is copied into a store location; a program instruction is fetched for execution by the CPU; and so on. The domains with their occurring phenomena, and relationships among them, are the basic subject matter of system development, and therefore of the models that software engineers must make.

The behaviours represented by the ellipses are enacted by the system in operation. The behaviour B is the whole behaviour of the bipartite system. P and G are projections of B: restrictions, respectively, to the phenomena of the machine and of the governed world. While insisting on the unity of the bipartite system we recognise the significance of these projections. In general, stakeholder requirements do not concern the machine: they concern the governed world and some of its wider effects and consequences. The complementary supposition—that the proper focus of software engineering concerns is the machine alone—is a crippling mistake.
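
A minimal sketch of the projections (phenomena borrowed from the examples above): if B is recorded as a temporal sequence of phenomena, then P and G are its restrictions to machine phenomena and governed world phenomena respectively.

```python
MACHINE_PHENOMENA = {"instruction-fetched", "register-copied-to-store"}
WORLD_PHENOMENA = {"button-pressed", "lift-halts-at-floor"}

# B: the whole behaviour, as one temporal sequence of phenomena.
B = ["button-pressed", "instruction-fetched",
     "register-copied-to-store", "lift-halts-at-floor"]

P = [e for e in B if e in MACHINE_PHENOMENA]   # projection onto the machine
G = [e for e in B if e in WORLD_PHENOMENA]     # projection onto the world

assert P == ["instruction-fetched", "register-copied-to-store"]
assert G == ["button-pressed", "lift-halts-at-floor"]
```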

Links to other posts:
 → Cyber-Physical Systems:  The essential character of a cyber-physical system
 → System Behaviour Complexity:  The component structure of system behaviour
 → Triplets:  Triplets (Machine+World=Behaviour) are system behaviour elements