Thursday, 17 September 2020

Two Use Cases

Use cases [1] are often used to document system requirements. We can identify an important aspect of the concept and its application by comparing two simplified examples: using an ATM and using a lift. The traditional depictions are a stick figure labelled ATM_User with an arrow pointing to an ellipse labelled Use_ATM, and the same graphic with the labels Lift_User and Use_Lift. Each use case describes a class of interaction episodes in which the user can obtain some desired result: the ATM_User can withdraw cash and perform other banking actions, and the Lift_User can travel from one floor to another. At first sight the two are closely parallel. Each episode takes place between the system—essentially what we call the machine—and the user, regarded as an external agent. Each use case is typically considered to be a requirement: the system must provide the desired results.

But Use_ATM and Use_Lift differ radically. Use_ATM describes a constituent behaviour of the system. The user is a participant, playing a role not unlike that of a pilot or a car driver. The user and machine engage in a close dialogue, the machine responding to each user request, sometimes by asking for further information—such as a PIN or the amount of cash desired. In Use_Lift there is no such dialogue. The user can press buttons, and enter or leave the lift at a floor, but the machine makes no response except when a button is newly pressed: it then illuminates the button and adds the corresponding floor to its set of outstanding requests. Individual users, and their comings and goings, are invisible to the system: nothing in the system behaviour corresponds to an instance of Use_Lift.
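
A minimal sketch (in Python, with invented names) makes the asymmetry concrete: the lift machine's only reaction to a user action is to record a newly pressed button, and individual users appear nowhere in its state.

    class LiftController:
        """Hypothetical sketch of the lift machine's visible reaction to users."""

        def __init__(self, floors):
            self.floors = set(floors)
            self.outstanding = set()     # floors whose button is illuminated
            # Note: individual users are recorded nowhere in the state.

        def press(self, floor):
            """A button is pressed at a floor lobby or in the car."""
            if floor in self.floors and floor not in self.outstanding:
                self.outstanding.add(floor)   # illuminate button, remember request
            # Pressing an already-lit button produces no response at all.

        def arrive_at(self, floor):
            """The car visits a floor: clear the request, extinguish the button."""
            self.outstanding.discard(floor)

    controller = LiftController(floors=range(6))
    controller.press(3)       # a user presses a button: it lights up
    controller.press(3)       # another user presses the same button: no effect
    controller.arrive_at(3)   # the floor is served; who travelled is unknown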

We do not consider Use_ATM to be a requirement. Rather, it is itself a behaviour design, detailing the interaction between a governed world domain—the ATM_User, singled out as uniquely significant—and the rest of the system. By contrast, Use_Lift can be understood as a requirement on a behaviour NLS (Normal_Lift_Service) of the lift system. It describes action sequences available to a user at a floor lobby who wants to travel to the lobby of another floor. The requirement is that users performing certain of these action sequences are enabled to achieve their desired result. There will be further requirements on the NLS behaviour—for example, concerning dwelling time at a visited floor, lift car rest position when no requests are outstanding, maximum wait times, and more. The Use_Lift requirement is simply that the system's NLS behaviour must afford travel opportunities between floors, much as the requirement on a train service is to afford travel opportunities between stations.

[1] Ivar Jacobson, Ian Spence, Kurt Bittner; Use-Case 2.0: The Definitive Guide; 2011.

Links to other posts:
 ↑ System Behaviour Complexity: The component structure of system behaviour

Tuesday, 21 July 2020

The Value of Formalism

The value of formalism in software engineering—not least for cyber-physical systems—is palpable. As Robert Boyle wrote in 1666 [1]: "In pure mathematicks, he that can demonstrate well may be sure of the truth of a conclusion without consulting experience about it." A formal model allows reliable reasoning: mathematical proof of invariants and other formally stated requirements; confident evaluation and simplification of expressions; exhaustive checking of preconditions; simulation of a modelled executable process. These are not small advantages, and they become hugely more powerful where good software tools are available. While avoiding the risk of logical fallacies, formal modelling also avoids the pitfalls and obstacles—fuzzy phenomena, contingent physical properties, unexpected interactions, surprising physical effects, and hidden interferences—that characterise the governed world of a cyber-physical system.

Unsurprisingly, there is a price to be paid. The pitfalls and obstacles of the governed world cannot be exorcised by a formal utterance: perversely, they persist, ensuring that a formal model of the physical governed world can never be correct. The software engineering goal must be adequacy, not correctness. A model's fidelity to its subject does not embody timeless global truth, but adequacy to the conditions and purposes of its use. Achieving this adequacy is the work of a necessary practical discipline, not the QED of a mathematical proof. In the most demanding context—in a critical function of a critical system—three conceptual phases of model development can be identified: pre-formal, formal, and post-formal.

The pre-formal phase is an exploration of the governed world, captured in natural language. Natural language allows unfettered discovery because the evolving description can be continually modified and extended during the exploration. The set of primitive types can be enlarged; concepts such as causality and temporal sequencing can be conveniently introduced; higher-order statements can be added to a first-order text. Only when substantial understanding has been achieved through such exploration can a language for formalisation be confidently chosen.

In the formal phase the axiomatic aspect of the governed world model is developed in the chosen formalism, describing the domain properties on which the machine will rely to ensure the desired behaviour in the world. Then the behavioural aspect is addressed, combining the axiomatic model with the software machine behaviour at the machine-world interface. Separating these two aspects avoids a potential confirmation bias. The axiomatic properties are first modelled without accompanying references to the specific intended behaviour. Then, they are instantiated in the specific setting of interaction with the machine.
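
A deliberately tiny sketch may make the ordering concrete (Python stands in here for whatever formalism has been chosen; the heating domain, names and numbers are all invented): the axiomatic aspect states a domain law with no reference to any intended behaviour, and the behavioural aspect then combines that law with the machine's actions at the interface.

    # Axiomatic aspect: a domain law stated with no reference to any behaviour.
    def temperature_law(temp, heater_on, dt=1.0, heating=0.5, cooling=0.2):
        """Invented axiom: temperature drifts up while the heater is on,
        and down while it is off (rates are assumed, not measured)."""
        return temp + (heating if heater_on else -cooling) * dt

    # Behavioural aspect: the machine's rule at the interface ...
    def machine_rule(temp, setpoint=60.0):
        """Invented machine behaviour: a simple thermostat."""
        return temp < setpoint

    # ... combined with the domain law to give the behavioural model.
    def behavioural_model(steps=100, temp=20.0):
        trace = []
        for _ in range(steps):
            heater_on = machine_rule(temp)
            temp = temperature_law(temp, heater_on)
            trace.append((round(temp, 2), heater_on))
        return trace

    print(behavioural_model(steps=5))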

In the post-formal phase the axiomatic and behavioural model aspects are critically audited for previously overlooked vulnerabilities in the physical governed world. For example, in some possible traces a necessary causal link may be disabled by a state value set in an earlier event. In another example, it may be belatedly recognised that a human participant—such as the pilot of an aircraft—may repeat an action often enough for the eventual result to cause a lethal failure.
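
Part of such an audit can be mechanised in outline. The sketch below (invented names, events and threshold) scans a putative trace for the repeated-action pattern of the second example: an action that the formal model admits any number of times, but that the physical world cannot tolerate beyond some limit.

    from collections import Counter

    def audit_repeated_action(trace, action, physical_limit):
        """Hypothetical post-formal audit: the behavioural model admits any
        number of repetitions of the action; the physical world does not."""
        return Counter(trace)[action] > physical_limit

    # Invented trace: repeated nose-up trim commands, then levelling off.
    trace = ["trim_up"] * 12 + ["level_off"]
    print(audit_repeated_action(trace, "trim_up", physical_limit=10))   # True: flag for review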

Echoing Boyle, Hermann Weyl [2] characterised the "decisive step of mathematical abstraction: we forget about what the symbols stand for." This amnesia is indeed the essence of formalism. But software engineers must remember.

[1] Robert Boyle; Preface to Hydrostatical Paradoxes made out by New Experiments, For the most part Physical and Easy, 1666.
[2] Hermann Weyl; The Mathematical Way of Thinking; Bicentennial Conference, University of Pennsylvania, 1940.

Links to other posts:
 ← Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements
 ← Triplet Models: What models are needed of the machine and governed world?
 ← The ABC of Modelling: Axiomatic, behavioural, and consequential aspects

Thursday, 9 July 2020

The ABC of Modelling

The machine program and the governed world model are the two parts of a simple constituent behaviour. Neither makes sense alone: they must be co-designed. The model describes the properties on which the program relies, defines the behaviour in the world, and shows that the behaviour results in satisfaction of the requirements. These are the Axiomatic, Behavioural, and Consequential aspects—the A, B, C of modelling-in-the-small. Although these aspects follow an obvious logical sequence, they too must be co-designed.

The axiomatic model describes the infrastructure of governed world domains—including the human participants—and the external and internal causal links they effectuate. A link is a causative temporal relation among domain states and events; the property of effectuating the link is imputed to a specific domain. Causal links connect the phenomena of the interface sensors and actuators to other phenomena of the governed world. They are axioms—unquestioned assumptions for designing the constituent behaviour—justified by the specific context and environment conditions for which the behaviour is designed, and in which alone it can be enacted. Causal links are complex, usually having multiple effects and multiple causes; in general, effectuation of a link by a domain depends on domain and subdomain states. These and other complexities must be systematically examined and documented to fulfil the axiomatic promise of unquestioned reliability.
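
To fix ideas, one causal link might be recorded in something like the following form (a sketch in Python; all field names and the example are invented): the link names its causes and effects, the domain to which its effectuation is imputed, and the domain and subdomain states on which effectuation depends.

    from dataclasses import dataclass, field

    @dataclass
    class CausalLink:
        """Hypothetical record of one axiom of the governed world model."""
        causes: tuple          # triggering events or state conditions
        effects: tuple         # resulting events or state changes
        effectuated_by: str    # domain to which effectuation is imputed
        depends_on: dict = field(default_factory=dict)   # enabling (sub)domain states

    # Invented example: the hoist motor raises the car, but only if power is on
    # and the car is not already at the top of the shaft.
    raise_car = CausalLink(
        causes=("motor_polarity_up", "motor_energised"),
        effects=("car_rises",),
        effectuated_by="HoistMotor",
        depends_on={"power": "on", "car_at_top": False},
    )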

The behavioural model is a state machine, formed by combining the axiomatic model with the machine program's behaviour at the interface. The model should be comprehensible, both to its developers and to any human participant in the enacted behaviour—for example, a plant operator or a car driver. To take proper advantage of the reliability afforded by the axiomatic model, states and events of the model state machine should be justified explicitly by reference to causal links and their effectuating domains. The behavioural model defines the trace set of the behaviour: the set of exactly those traces over events and state values that can occur in an enactment of the behaviour. It must be possible to determine whether a putative trace—for example, a trace that contravenes a relevant requirement—belongs to the trace set.
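
In the same spirit, the behavioural model can be pictured as an explicit state machine whose transitions cite the causal links that justify them; deciding whether a putative trace belongs to the trace set then becomes a mechanical check. The toy transition table below is invented, not a real lift design.

    # Invented toy table: (state, event) -> (next state, justifying causal link)
    TRANSITIONS = {
        ("DoorsOpen",   "close_doors"):   ("DoorsClosed", "door_actuator_link"),
        ("DoorsClosed", "start_motor"):   ("Moving",      "raise_car"),
        ("Moving",      "stop_at_floor"): ("DoorsClosed", "brake_link"),
        ("DoorsClosed", "open_doors"):    ("DoorsOpen",   "door_actuator_link"),
    }

    def in_trace_set(trace, state="DoorsOpen"):
        """Can this putative trace occur in an enactment of the behaviour?"""
        for event in trace:
            if (state, event) not in TRANSITIONS:
                return False
            state, _justifying_link = TRANSITIONS[(state, event)]
        return True

    # A trace that contravenes a requirement can be tested for membership:
    print(in_trace_set(["close_doors", "start_motor", "open_doors"]))   # False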

The consequential model, based on the behavioural model, aims to show that all relevant requirements will be satisfied during the behaviour enactment. Satisfaction may be an obvious consequence of the behaviour, possibly with additional causal links and phenomena. It may need informal judgment or operational experiment—for example, to assess the emotional and psychological reactions of human participants in the governed behaviour. It may need investigation and discourse outside the scope of software engineering—for example, into profitability or market acceptance. There are rich fields of activity here, but the primary focus of this blog is on the axiomatic and behavioural models. They are emphasised because they must reflect the fundamental competence of a software engineer—to predict, with high dependability, the governed world behaviour that will result from executing the developed software.

Links to other posts:
 ↑ Physical Bipartite System: The nature of a bipartite system
 ↑ System Behaviour Complexity: The component structure of system behaviour
 ↑ Reliable Models: Reliability in a world of unreliable physical domains
 ↑ Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements
 ↑ Requirements and Behaviours: Requirements are consequences of behaviours

Saturday, 20 June 2020

Reliable Models

The governed world is hard to model. It's not a formal system: nothing is constant; nothing is precise; no event is truly atomic; every assertion is contingent; every causal link is vulnerable; everything is connected to everything else. This is the right-hand-side problem: no formal model can be perfectly faithful to the reality. This matters because the designed system behaviour relies on model fidelity: deviation brings system failure. But in practice all is not lost. A single unified model is unattainable; but specific models can be adequately faithful, at some time, for some period, in some context, for some purpose. The question, then, is how to take best advantage of these local, transient, and conditional episodes of adequate fidelity.

The question answers itself. The desired system behaviour is an assemblage of concurrent constituent behaviours, providing different functions in different circumstances and contexts, and imposing different demands on the governed world. This complex assemblage responds to the same variability of context, need and purpose that invalidates any single unified model. The answer to the question is clear: the model must be structured to match the behaviour structure.

This correspondence of model and behaviour structures motivates the idea of a triplet. For each simple behaviour, a triplet brings together (1) a machine program; (2) a governed world model of the specific properties on which the program depends; and (3) the governed world behaviour resulting from the interaction of (1) and (2). Concurrent constituent behaviours rely on the concurrent validity of their respective models. If two behaviours cannot overlap in time, their models need not be consistent. Each constituent model and its corresponding behaviour are designed together. Reliability of a constituent model is judged relative to the context and demands of its corresponding constituent behaviour.
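
A triplet can be pictured as a simple bundle (a sketch with invented names): the machine program, the model of exactly those world properties on which it relies, and the behaviour they jointly define. Consistency is then demanded only between the models of behaviours that can be enacted concurrently.

    from dataclasses import dataclass

    @dataclass
    class Triplet:
        """Hypothetical bundle: Machine + World = Behaviour."""
        machine: str        # the machine program of one constituent behaviour
        world_model: dict   # only the properties this behaviour relies on
        behaviour: str      # the governed world behaviour they jointly define

    def must_reconcile(t1, t2, can_overlap_in_time):
        """Models need reconciling only if their behaviours can be
        enacted concurrently."""
        return can_overlap_in_time(t1.behaviour, t2.behaviour)

    cruise  = Triplet("cruise_control_prog", {"road": "open highway"}, "CruiseControl")
    parking = Triplet("self_parking_prog",  {"road": "parking space"}, "SelfParking")
    print(must_reconcile(cruise, parking, lambda a, b: False))   # False: never concurrent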

This co-design of model and behaviour structure realises modelling-in-the-large. By developing each constituent model along with its associated machine program, it also focuses and simplifies the task of modelling-in-the-small. Each constituent model is small because it is limited to just those domains and causal links that participate in the corresponding behaviour. It is simple because its behaviour satisfies the criteria of triplet simplicity. For example, a governed world domain may play only one participant role in the behaviour; modelled causal properties are not themselves mutable; the operational principle of the behaviour must be simply structured. The simplicity and small size of the model also enable and guide a deeper and more careful—and carefully documented—investigation of the residual risks to reliability.

Later, when constituent behaviours are to be combined in a bottom-up assembly of system behaviour, their respective models are ready to hand, their vulnerabilities to interference already examined. The combination task is easier. Only those models are to be reconciled and combined whose associated behaviours can be concurrently enacted; difficulty in reconciling two models may rule out concurrent enactment. For example, cruise control and self-parking cannot be enacted concurrently: their functional combination makes no behavioural sense, and—not coincidentally—their governed world models are irreconcilable.

Links to other posts:
 ↑ The Right-Hand Side: Why the model-reality relationship is problematic
 ↑ System Behaviour Complexity: The component structure of system behaviour
 ← Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements

Thursday, 28 May 2020

Again More Aphorisms

Alan Perlis was the first winner of the Turing Award in 1966. In 1982 he published [1] a set of 130 epigrams on programming. His aim, he explained, was to capture—in metaphors—something of the relationship between classical human endeavours and software development work. "Epigrams," he wrote, "are interfaces across which appreciation and insight flow." This post offers a few aphorisms. My dictionary tells me that an epigram is 'a pointed or antithetical saying', while an aphorism is 'a short pithy maxim'. Whatever they may be called, I hope that these remarks will evoke some appreciation and insight.

31. Premature design commitment sows the seeds of crippling technical debt.
32. Lord Kelvin said that knowledge without numbers is meagre and unsatisfactory: without clearly named concepts knowledge is fugitive.
33. The axe of Murphy's law is poised, ready to sever any causal link of a cyber-physical system.
34. At software engineering scales, the only certain property of the physical world is that nothing is certain.
35. Sensors and actuators give the machine its API for governed world behaviour; the axiomatic model of causal links gives the API its semantics.
36. The map is not the terrain, but many administrative systems try to make it so.
37. Every use case must explore reasons why a human user might behave differently and with what consequences.
38. Always welcome criticism. The boy who said the Emperor was naked was not a qualified member of the tailors' guild—but he was right.
39. Counting correctly is a valuable skill; but not everything is reducible to numbers.
40. You can't outsource understanding of the governed world to the physical engineers: your understanding must be your own.


[1] A J Perlis; Epigrams on Programming; ACM SIGPLAN Notices 17(9), September 1982.

Links to other posts:
 ↑  Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements
 ←  Ten Aphorisms: Ten short remarks
 ←  Ten More Aphorisms: Ten more short remarks
 ←  Yet More Aphorisms: Ten more short remarks

Ten More Aphorisms

Alan Perlis was the first winner of the Turing Award in 1966. In 1982 he published [1] a set of 130 epigrams on programming. His aim, he explained, was to capture—in metaphors—something of the relationship between classical human endeavours and software development work. "Epigrams," he wrote, "are interfaces across which appreciation and insight flow." This post offers a few aphorisms. My dictionary tells me that an epigram is 'a pointed or antithetical saying', while an aphorism is 'a short pithy maxim'. Whatever they may be called, I hope that these remarks will evoke some appreciation and insight.

11. In a cyber-physical system, logic and physics can show the presence of errors, but not their absence.
12. To master complexity you need a clear idea of the simplicity you are aiming for.
13. No system can check its own assumptions: if it checks, they aren't assumptions.
14. In a cyber-physical system, the die is cast for success or failure in the pre-formal development work.
15. Software engineering for a cyber-physical system is programming the physical world.
16. Traceability should trace the graph of detailed development steps, not just their products.
17. A deficient development method cannot be redeemed by skilful execution.
18. A declarative specification is like the Sphinx's riddle: "Here are some properties of something—but of what?"
19. Cyber-physical systems can exhibit no referential transparency: everything depends on context, including—recursively—the context itself.
20. Natural science aims to be universal, but engineering is always specific to the current project.

[1] A J Perlis; Epigrams on Programming; ACM SIGPLAN Notices 17(9), September 1982.

Links to other posts:
 ←  Ten Aphorisms: Ten short remarks