Thursday, 9 July 2020

The ABC of Modelling

The machine program and the governed world model are the two parts of a simple constituent behaviour. Neither makes sense alone: they must be co-designed. The model describes the properties on which the program relies, defines the behaviour in the world, and shows that the behaviour results in satisfaction of the requirements. These are the Axiomatic, Behavioural, and Consequential aspects—the A, B, C of modelling-in-the-small. Although these aspects follow an obvious logical sequence, they too must be co-designed.

The axiomatic model describes the infrastructure of governed world domains—including the human participants—and the external and internal causal links they effectuate. A link is a causative temporal relation among domain states and events; the property of effectuating the link is imputed to a specific domain. Causal links connect the phenomena of the interface sensors and actuators to other phenomena of the governed world. They are axioms—unquestioned assumptions for designing the constituent behaviour, justified by the specific context and environmental conditions for which the behaviour is designed and in which alone it can be enacted. Causal links are complex, usually having multiple effects and multiple causes; in general, effectuation of a link by a domain depends on domain and subdomain states. These and other complexities must be systematically examined and documented to fulfil the axiomatic promise of unquestioned reliability.
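
To make this concrete: a causal link might be documented as a small structured entry in the axiomatic model. The sketch below is purely illustrative (the field names and the sluice gate example are invented here, not a prescribed notation); it records a link's causes, effects, effectuating domain, enabling conditions, and justification.

```python
# A sketch only: field names and the sluice gate example are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalLink:
    """A causative temporal relation among domain states and events,
    imputed to the domain that effectuates it."""
    causes: tuple            # triggering events or state conditions
    effects: tuple           # resulting events or state changes
    domain: str              # the domain to which effectuation is imputed
    conditions: tuple = ()   # domain and subdomain states the link depends on
    justification: str = ""  # why the axiom is trusted in this context

# One documented axiom for a notional sluice gate behaviour:
gate_link = CausalLink(
    causes=("gate_motor_powered",),
    effects=("gate_rises",),
    domain="SluiceGate",
    conditions=("gate_not_jammed", "water_pressure_within_rating"),
    justification="Assumed only for the designed operating context.",
)
print(gate_link.domain, "effectuates:", gate_link.causes, "->", gate_link.effects)
```

Recording the enabling conditions explicitly is what makes the later examination of complexities systematic rather than ad hoc.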

The behavioural model is a state machine, formed by combining the axiomatic model with the machine program's behaviour at the interface. The model should be comprehensible, both to its developers and to any human participant in the enacted behaviour—for example, a plant operator or a car driver. To take proper advantage of the reliability afforded by the axiomatic model, states and events of the model state machine should be justified explicitly by reference to causal links and their effectuating domains. The behavioural model defines the trace set of the behaviour: the set of exactly those traces over events and state values that can occur in an enactment of the behaviour. It must be possible to determine whether a putative trace—for example, a trace that contravenes a relevant requirement—belongs to the trace set.
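
A minimal sketch of that determination, assuming the behavioural model is presented as a finite state machine (the toy machine, its states and its events are all invented for illustration): a putative trace belongs to the trace set exactly when the machine admits it.

```python
# Illustrative only: the toy machine, its states and events are invented.
class BehaviouralModel:
    def __init__(self, initial, transitions):
        self.initial = initial
        self.transitions = transitions  # maps (state, event) -> next state

    def admits(self, trace):
        """True iff the event trace can occur in an enactment."""
        state = self.initial
        for event in trace:
            if (state, event) not in self.transitions:
                return False
            state = self.transitions[(state, event)]
        return True

# Toy model: the door must be closed before the motor may start.
model = BehaviouralModel(
    initial="idle",
    transitions={
        ("idle", "door_closed"): "ready",
        ("ready", "motor_start"): "running",
        ("running", "motor_stop"): "idle",
    },
)
assert model.admits(["door_closed", "motor_start"])
assert not model.admits(["motor_start"])  # contravenes the door requirement
```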

The consequential model, based on the behavioural model, aims to show that all relevant requirements will be satisfied during the behaviour enactment. Satisfaction may be an obvious consequence of the behaviour, possibly together with additional causal links and phenomena. It may need informal judgment or operational experiment—for example, of the emotional and psychological reactions of human participants in the governed behaviour. It may need investigation and discourse outside the scope of software engineering—for example, of profitability or market acceptance. There are rich fields of activity here, but the primary focus of this blog is on the axiomatic and behavioural models. They are emphasised because they must reflect the fundamental competence of a software engineer—to predict, with high dependability, the governed world behaviour that will result from executing the developed software.

Links to other posts:
 ↑ Physical Bipartite System: The nature of a bipartite system
 ↑ System Behaviour Complexity: The component structure of system behaviour
 ↑ Reliable Models: Reliability in a world of unreliable physical domains
 ↑ Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements
 ↑ Requirements and Behaviours: Requirements are consequences of behaviours

Saturday, 20 June 2020

Reliable Models

The governed world is hard to model. It's not a formal system: nothing is constant; nothing is precise; no event is truly atomic; every assertion is contingent; every causal link is vulnerable; everything is connected to everything else. This is the right-hand-side problem: no formal model can be perfectly faithful to the reality. This matters because the designed system behaviour relies on model fidelity: deviation brings system failure. But in practice all is not lost. A single unified model is unattainable; but specific models can be adequately faithful, at some time, for some period, in some context, for some purpose. The question, then, is how to take best advantage of these local, transient and conditional episodes of adequate fidelity?

The question answers itself. The desired system behaviour is an assemblage of concurrent constituent behaviours, providing different functions in different circumstances and contexts, and imposing different demands on the governed world. This complex assemblage responds to the same variability of context, need and purpose that invalidates any single unified model. The answer to the question is clear: the model must be structured to match the behaviour structure.

This correspondence of model and behaviour structures motivates the idea of a triplet. For each simple behaviour, a triplet brings together (1) a machine program; (2) a governed world model of the specific properties on which the program depends; and (3) the governed world behaviour resulting from the interaction of (1) and (2). Concurrent constituent behaviours rely on the concurrent validity of their respective models. If two behaviours cannot overlap in time, their models need not be consistent. Each constituent model and its corresponding behaviour are designed together. Reliability of a constituent model is judged relative to the context and demands of its corresponding constituent behaviour.
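
As an illustration only (the names and the deliberately naive conflict test are invented), a triplet and the reconciliation rule might be sketched like this:

```python
# Illustrative only: names and the naive conflict test are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triplet:
    program: str         # the machine program of this behaviour
    model: frozenset     # assumed world properties, as named propositions
    behaviour: str       # the governed world behaviour that results

def models_conflict(a, b):
    """Naive test: one model assumes a proposition whose negation
    ('not_<p>') the other assumes."""
    return (any("not_" + p in b.model for p in a.model)
            or any("not_" + p in a.model for p in b.model))

cruise = Triplet("cruise_ctl", frozenset({"car_moving_at_speed"}), "cruise")
parking = Triplet("park_ctl", frozenset({"not_car_moving_at_speed"}), "self_park")

# Irreconcilable models: the behaviours must not be enacted concurrently.
# Since they never overlap in time, no reconciliation is needed.
assert models_conflict(cruise, parking)
```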

This co-design of model and behaviour structure realises modelling-in-the-large. By developing each constituent model along with its associated machine program, it also focuses and simplifies the task of modelling-in-the-small. Each constituent model is small because it is limited to just those domains and causal links that participate in the corresponding behaviour. It is simple because its behaviour satisfies the criteria of triplet simplicity. For example, a governed world domain may play only one participant role in the behaviour; modelled causal properties are not themselves mutable; the operational principle of the behaviour must be simply structured. The simplicity and small size of the model also enable and guide a deeper and more careful—and carefully documented—investigation of the residual risks to reliability.

Later, when constituent behaviours are to be combined in a bottom-up assembly of system behaviour, their respective models are ready to hand, their vulnerabilities to interference already examined. The combination task is easier. Only those models are to be reconciled and combined whose associated behaviours can be concurrently enacted; difficulty in reconciling two models may rule out concurrent enactment. For example, cruise control and self-parking cannot be enacted concurrently: their functional combination makes no behavioural sense, and—not coincidentally—their governed world models are irreconcilable.

Links to other posts:
 ↑ The Right-Hand Side: Why the model-reality relationship is problematic
 ↑ System Behaviour Complexity: The component structure of system behaviour
 ← Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements

Thursday, 28 May 2020

Ten More Aphorisms

Alan Perlis was the first winner of the Turing Award in 1966. In 1982 he published [1] a set of 130 epigrams on programming. His aim, he explained, was to capture—in metaphors—something of the relationship between classical human endeavours and software development work. "Epigrams," he wrote, "are interfaces across which appreciation and insight flow." This post offers a few aphorisms. My dictionary tells me that an epigram is 'a pointed or antithetical saying', while an aphorism is 'a short pithy maxim'. Whatever they may be called, I hope that these remarks will offer some appreciation and insight.

11. In a cyber-physical system, logic and physics can show the presence of errors, but not their absence.
12. To master complexity you need a clear idea of the simplicity you are aiming for.
13. No system can check its own assumptions: if it checks, they aren't assumptions.
14. In a cyber-physical system, the die is cast for success or failure in the pre-formal development work.
15. Software engineering for a cyber-physical system is programming the physical world.
16. Traceability should trace the graph of detailed development steps, not just their products.
17. A deficient development method cannot be redeemed by skilful execution.
18. A declarative specification is like the Sphinx's riddle: "Here are some properties of something—but of what?"
19. Cyber-physical systems can exhibit no referential transparency: everything depends on context, including—recursively—the context itself.
20. Natural science aims to be universal, but engineering is always specific to the current project.

[1] A J Perlis; Epigrams on Programming; ACM SIGPLAN Notices 17(9), September 1982.

Links to other posts:
 ← Ten Aphorisms: Ten short remarks

Wednesday, 27 May 2020

Physical Domains

The governed world of a cyber-physical system is populated by many things, drawn from the limitless variety of the physical world. It may include: mechatronic things—an airplane, an industrial press, a tower crane, or a vending machine; the local natural environment—a body of water, or a part of the moon’s surface or of the earth's atmosphere; the built and engineered environment—a canal or road, a house, a bridge, or a segment of railway track; people and other living creatures, participating in the system behaviour in various roles. These things and their relationships are the infrastructure of the governed world, supporting the behaviour playing out over time in occurrences of events and state changes; we call these things domains.

A domain carries physical states and participates in physical events. States and events may be shared with other domains: this sharing is the medium in which domains interact to form the governed world's behaviour. A domain exhibits characteristic local properties, relating its state and event phenomena. An electromagnetic switching relay, for example, may be identified as a domain in the governed world. It has the property that when power is applied to its coil circuit the relay contacts close and the switched circuit is completed to power a motor. We may think of this relationship as a causal link imputed to the domain: "powering the coil causes completion of the switched circuit". These links, and the sharing of state and event phenomena among domains, enable the machine—through its behaviour at the machine-world interface of sensors and actuators—to govern behaviour in the governed world.
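
Read as a toy propagation rule (state names invented, and effectuation assumed for the moment to be perfectly reliable), the relay's causal property looks like this:

```python
# A toy reading of the relay's causal property; state names are invented
# and effectuation is here assumed never to fail.
def relay_step(world):
    """Propagate the imputed link: powering the coil closes the contacts,
    completing the switched circuit that powers the motor."""
    world = dict(world)                           # leave the input unchanged
    if world.get("coil_powered"):
        world["contacts_closed"] = True           # imputed to the relay
    if world.get("contacts_closed") and world.get("supply_live"):
        world["motor_circuit_complete"] = True    # shared with the motor domain
    return world

after = relay_step({"coil_powered": True, "supply_live": True})
assert after["motor_circuit_complete"]
```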

At the scales relevant to cyber-physical systems, effectuation of a causal link is never perfectly reliable, but always contingent. For example, a causal link may fail: the switched circuit to power a motor has been completed, but the motor shaft does not turn. Perhaps the shaft is jammed in its bearings; or the imposed load is too great; or the motor windings are burnt out; or the power relay or the power supply has failed; or another device is using too much current. The set of possible explanations is not clearly bounded. The whole-and-part structure of many domains increases vulnerability in more than one way. Each part—that is, each subdomain—not only contributes its own budget of specific potential failures: it also offers additional interfaces at which other domains may contribute to failure.
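
A companion sketch of contingent effectuation (condition names invented; the defeater list is deliberately left open, since the set of possible explanations is not clearly bounded):

```python
# Illustrative condition names; the defeater list is deliberately open-ended.
def link_effectuated(world_state):
    """Does 'completed motor circuit causes the shaft to turn' hold now?"""
    defeaters = [
        world_state.get("shaft_jammed", False),
        world_state.get("load_exceeds_rating", False),
        world_state.get("windings_burnt_out", False),
        world_state.get("relay_or_supply_failed", False),
        world_state.get("supply_overloaded_elsewhere", False),
        # ... the set of possible explanations is not clearly bounded
    ]
    return not any(defeaters)

assert link_effectuated({})                          # all is well: link holds
assert not link_effectuated({"shaft_jammed": True})  # the link fails
```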

Of course, all this is just a rewording of what we all know from our everyday experience and from our interactions with the systems we encounter. But governing behaviour in the physical world is the core purpose of software engineering for cyber-physical systems, and some of its challenges are often neglected—mistakenly treated as less important than the formal and mathematical aspects of the work. This blog's purpose is to identify and think seriously about these challenges, and to offer some ways of addressing them. This matters: neglecting these challenges can play a large part in potentially catastrophic system failures.

Links to other posts:
 ↑ Physical Bipartite System: The nature of a bipartite system
 ↓ Axiomatic Models: Capturing basic assumptions for a behaviour
 → Causality: Causality provides the explanation of how a system works
 → Reliable Models: Reliability in a world of unreliable physical domains
 ← The Right-Hand Side: Why the model-reality relationship is problematic
 ← Not Just Physics: Software Engineering's unique view of the physical world

Tuesday, 12 May 2020

Yet More Aphorisms

Alan Perlis was the first winner of the Turing Award in 1966. In 1982 he published [1] a set of 130 epigrams on programming. His aim, he explained, was to capture—in metaphors—something of the relationship between classical human endeavours and software development work. "Epigrams," he wrote, "are interfaces across which appreciation and insight flow." This post offers a few aphorisms. My dictionary tells me that an epigram is 'a pointed or antithetical saying', while an aphorism is 'a short pithy maxim'. Whatever they may be called, I hope that these remarks will evoke some appreciation and insight.

21. Do your calculation and proof in the formal sphere, and your hard thinking in the informal sphere.
22. Specifiers and implementers are not adversaries: they must agree mutually benevolent assumptions.
23. Reliance on formalisation frightens away the motivating physical problem, leaving only the grin of the Cheshire Cat.
24. Use normal design wherever you can, but radical design only where you must.
25. A good method reveals the real difficulties in your problem: this is its most valuable service to you.
26. In a formal text, a name purporting to denote something in the real world is lying.
27. A single unified model of the world for a cyber-physical system is like a one-page encyclopedia.
28. Read the newest books and papers on technology; on structure and method, read old ones.
29. Formalism, like all technologies, is fun: accordingly, it's a good servant but a bad master.
30. Refusal to separate doesn't avoid the difficulties of combining: it just obscures them.

[1] A J Perlis; Epigrams on Programming; ACM SIGPLAN Notices 17(9), September 1982.

Links to other posts:
 ← Ten Aphorisms: Ten short remarks
 ← Ten More Aphorisms: Ten more short remarks

Sunday, 10 May 2020

More than Physics

Between the machine and the governed world, the sensors and actuators provide an Application Programming Interface—an API. Where is the API manual? You have to write your own, piecing it together from information about the infrastructure of the world—the physical domains and their properties and interactions. This manual will be an axiomatic model. It's axiomatic because it records the assumptions that you have decided not to question. It doesn't describe the machine or the governed behaviour: that is the role of a behavioural model that will rely on your axiomatic model exactly as a program relies on the programming language manual. The axiomatic model describes the physical domains and—crucially—their causal links in the governed world by which the desired behaviour can be woven together and connected to the API.
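
A fragment of such a self-written manual might look like the following sketch, in which every entry is hypothetical (a notional water heater): each interface phenomenon is paired with its assumed effect or meaning and with the unquestioned conditions it relies on.

```python
# A hypothetical manual fragment for a notional water heater interface.
API_MANUAL = {
    "set_heater(on)": (
        "effect: water temperature rises at no less than 0.5 C per minute",
        "assumes: tank full; heater element intact; mains power present",
    ),
    "read_temp()": (
        "meaning: water temperature, within 1 C, sampled within 2 seconds",
        "assumes: sensor immersed; sensor not furred up with scale",
    ),
}

for call, (effect_or_meaning, assumptions) in API_MANUAL.items():
    print(call)
    print(" ", effect_or_meaning)
    print(" ", assumptions)
```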

What can be assumed? What can be taken as axiomatic? The laws of physics, certainly. But they are far from sufficient. Physical laws say nothing of causality. For most reasoning about governed world behaviour their scale is either too small—particle physics—or too broad—conservation of energy. At the relevant intermediate scales physical domains are irregularly shaped and structured; they are mutable and richly connected, and each is host to the interacting effects of many natural causes. Abstractions of the domains and their causal properties are inescapably informal and imperfect: everything is approximate, and nothing is unconditionally true.

In short, the task in hand is engineering, not science: the scientist's scope is the universe, but the engineer's is a specific system, narrowly bounded in space and time. This narrow bounding gives a vital clue to addressing the difficulty of cyber-physical behaviour design: there must be many axiomatic models, and their scopes must be narrow. First, system behaviour is a structure of constituent behaviours. An approach based on triplets associates a distinct axiomatic model with each distinct behaviour: the assumptions are chosen only to support the associated behaviour, and are assumed to hold only while it is enacted. Second, each causal link is explicitly attributed to a specific effectuating domain of the governed world, reflecting the operational principle [1,2] of the behaviour design. This attribution focuses the search for threats to validity of the assumed link, and sharpens and enlivens analysis of potential and actual failures. Third, in the development task of combining concurrent constituent behaviours, their axiomatic models highlight the assumptions of each that may cause interference or conflict in combination. Fourth, a critical system must maintain operation in the face of increasingly adverse conditions. For each new adversity a different behaviour, often with degraded function, may be enacted: such degraded behaviours depend only on their newly reduced axiomatic assumptions. Unlike the universal laws of physics, axioms for engineering must be local.
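
The fourth point can be sketched concretely (all names invented): each behaviour carries its own narrow set of axioms, and the machine enacts the least degraded behaviour whose axioms still hold.

```python
# Invented names: each behaviour carries its own narrow axioms, and the
# machine enacts the least degraded behaviour whose axioms still hold.
BEHAVIOURS = [  # ordered from full function to most degraded
    ("full_operation", {"sensor_a_ok", "sensor_b_ok", "actuator_ok"}),
    ("reduced_speed",  {"sensor_a_ok", "actuator_ok"}),
    ("safe_shutdown",  {"actuator_ok"}),
]

def select_behaviour(holding_axioms):
    for name, axioms in BEHAVIOURS:
        if axioms <= holding_axioms:   # all of this behaviour's axioms hold
            return name
    return "emergency_stop"           # last resort: assume nothing

assert select_behaviour({"sensor_a_ok", "sensor_b_ok", "actuator_ok"}) == "full_operation"
assert select_behaviour({"actuator_ok"}) == "safe_shutdown"
assert select_behaviour(set()) == "emergency_stop"
```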

The fundamental expression of an operational principle—how a designed behaviour works—is in its assumed causal links. Clarity about the assumed domain properties and links depends directly on a clear separation of axiomatic and behavioural models.

[1] Michael Polanyi; The Tacit Dimension, pp39-40, U Chicago Press, 2009.
[2] Michael Polanyi; Personal Knowledge, pp328-329, U Chicago Press, 1974.

Links to other posts:
 ↑ Not Just Physics: Software Engineering's unique view of the physical world
 ↑ Physical Bipartite System: The nature of a bipartite system
 ← Causality: Causality is the explanation of how a system works
 ← Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements