Wednesday, 12 February 2020

The Text Pointer

Dijkstra in his anathema [1] on the GO TO statement explained that, for human intelligibility, progress through a program text must map easily to progress through the executed process. GO TO statements frustrate this desire. Abolish the GO TO statement, and we can substitute the clarity of structured programs for the obscurity of chaotic flowcharts. The benefit is undeniable: structured programming defines a coordinate system, elaborated to handle procedure calls and loops, for the program and process alike. Values of the text pointer (my term, not Dijkstra's) are points in this coordinate system.

But wait. Surely the program variable values already define the state of the computation, and hence a sufficient coordinate system, don't they? No, they don't. Dijkstra gives a tiny example. The meaning of a variable n, used in a program that counts how many people have entered a room, depends on whether or not the program execution has yet updated n to reflect the most recent entry. A better coordinate system for program text and program execution is a necessity. One way of thinking about this stipulation is to recognise that the text pointer itself is an undeclared program variable, implicit in the text semantics.
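
Dijkstra's letter contains no code, but a minimal sketch in Python, with invented names, may make the ambiguity concrete:

    # A minimal sketch of Dijkstra's counting example; the code and the names
    # are invented for illustration, not taken from the letter.

    n = 0  # how many people the program has counted so far

    def person_entered():
        global n
        # Point A: a person has entered the room, but n is not yet updated.
        n = n + 1
        # Point B: n now reflects the most recent entry.

    # Freeze execution and observe n == 5. The value alone is ambiguous: at
    # point A it means "five counted, one entry still uncounted"; at point B
    # it means "exactly five have entered". Only the text pointer, that is,
    # knowing whether execution stands at A or at B, resolves the meaning of n.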

The famous GO TO letter combines two related—but quite distinct—themes. First, the human need to map a program's text easily to its execution: arbitrary GO TO statements make it impossible to satisfy this need. Second, the inadequacy of variable values for characterising execution progress. The letter might well have been entitled "GO TO Statement Considered Harmful and Variable Values Considered Inadequate."

Both of these themes are important in software engineering for cyber-physical systems. First, a triplet satisfying its simplicity criteria must execute a regular process—that is, its machine must be a structured program. Second, the governed behaviour in the physical world cannot be adequately described without reference to the program executed in the machine—specifically to its local variables, including—of course—the program's text pointer.

The second theme was compellingly evidenced in a project to specify an access control system in an event-based formalism. After much soul-searching, the specifiers decided that the formalism's lack of an explicit sequencing mechanism was intolerable; so they introduced an ad hoc feature for sequencing constraints on specified actions. Of course, the text pointer for a sequence was in truth the text pointer of the machine needed to govern the specified behaviour. A text pointer always denotes the progress state of a purposeful behaviour—either of the machine or of a purposeful participant in the governed world.

[1] E W Dijkstra; Go To Statement Considered Harmful; a letter to CACM, Volume 11, Number 3, March 1968.

Links to other posts:
 ↑ Triplets:  Triplets are system behaviour elements: (Machine+World=Behaviour)

Thursday, 6 February 2020

Ten Aphorisms

Alan Perlis was the first winner of the Turing Award in 1966. In 1982 he published [1] a set of 130 epigrams on programming. His aim, he explained, was to capture—in metaphors—something of the relationship between classical human endeavours and software development work. "Epigrams," he wrote, "are interfaces across which appreciation and insight flow." This post offers a few aphorisms. My dictionary tells me that an epigram is 'a pointed or antithetical saying', while an aphorism is 'a short pithy maxim'. Whatever they may be called, I hope that these remarks will evoke some appreciation and insight.

1.  The products of normal engineering have requirements, but the products of radical engineering have only hopes and aspirations.
2.  Premature formalisation cripples understanding of the physical problem world.
3.  Successful search for design errors demands a method that tells you where to look.
4.  Large structures can emerge only from bottom-up assembly; small structures may sometimes emerge from top-down decomposition.
5.  Declarative properties can be deduced from causal models, but not vice versa.
6.  Administrative systems can sometimes define problem world states in terms of the computer state; cyber-physical systems must be more honest.
7.  You can't determine what went wrong unless you understand exactly what going right would have been.
8.  To ask whether the software for a cyber-physical system is 'correct' is like asking whether your house is 'correct'.
9.  A system is not a device: devices have users, but systems have participants.
10. We lack suitable formalisations for causality, but the physical world doesn't know that.

[1] A J Perlis; Epigrams on Programming; ACM SIGPLAN Notices, Volume 17, Number 9, September 1982.

Links to other posts:
 → Top-Down: Why top-down development is so often a mistake

Sunday, 2 February 2020

Behaviour Is the Core and Focus

David Harel and Amir Pnueli [1] confidently asserted that "A natural, comprehensive, and understandable description of the behavioural aspects of a system is a must in all stages of the system’s development cycle, and, for that matter, after it is completed too." Even this confident assertion is an understatement. System behaviour is more than a must in CPS development: it is the very heart and focus of the system, and the essential product of its development.

Developing dependable system behaviour is difficult. The essence of the task is programming the bipartite system—the quasi-formal machine, and the decidedly non-formal physical world. Making a unified program for this heterogeneous combination is hard. It cannot be achieved by separating the two parts at a formal specification interface, because neither part can be understood without the other.

System behaviour is where the software engineers meet the stakeholders. What is the only thing the machine in a system can do? It can govern behaviour in the world. What can ensure satisfaction of the most important stakeholder requirements? The governed behaviour.

Success in software engineering for a system is success in behaviour development. Desired and dependable governed behaviour is the crucial criterion of success. An occurrence of undesired and unexpected behaviour is an instance of system failure.

Where is the chief complexity in a cyber-physical system? In the physical-world interactions of the constituent behaviours that make up the whole system behaviour.

Why must software engineers develop formal models of the non-formal physical world? Because those models must guide and justify the design of the machine's software that will evoke the system behaviour.

Developing system behaviour is a prime application of the principle of incremental complication. The elementary components from which the system behaviour is constructed are triplets. A triplet is a simple machine with its governed world, and the behaviour they evoke. Triplets that can execute concurrently must be reconciled to eliminate mutual conflict, sometimes being modified from their initial simple form. In a further design stage the control of concurrency is introduced to manage contingencies such as the need for pre-emptive termination of a behaviour.
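
As a rough illustration only, a triplet can be pictured in code as a simple machine paired with the world state it governs; the Python names below (Triplet, WorldState, cruise_control, speed_limiter) are invented placeholders, not an established notation, and the reconciliation shown is the crudest possible one:

    # An invented sketch of triplets: (machine + governed world = behaviour).
    # All names are hypothetical placeholders, not part of any established notation.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class WorldState:
        speed: float = 80.0      # as sensed at the machine's interface
        throttle: float = 0.0

    @dataclass
    class Triplet:
        name: str
        machine: Callable[[WorldState], None]   # the governing software

    def cruise_control(world: WorldState) -> None:
        # A very simple machine: nudge the throttle towards a set speed.
        world.throttle += 0.01 * (100.0 - world.speed)

    def speed_limiter(world: WorldState) -> None:
        # Another simple machine: never allow the throttle to exceed a ceiling.
        world.throttle = min(world.throttle, 0.5)

    cruise = Triplet("cruise control", cruise_control)
    limiter = Triplet("speed limiter", speed_limiter)

    # One enactment step of the combined behaviour. Both triplets write
    # world.throttle, so their concurrent execution must be reconciled: here,
    # crudely, by fixing an order in which the limiter has the last word.
    world = WorldState()
    for t in (cruise, limiter):
        t.machine(world)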

[1] David Harel and Amir Pnueli; On the development of reactive systems; in K R Apt ed, Logics and models of concurrent systems, pages 477-498; Springer-Verlag New York, 1985.

Links to other posts:
 ↑ Cyber-Physical Systems:  The essential character of a cyber-physical system
 ↓ System Behaviour Complexity:  The component structure of system behaviour
 → Triplets:  Triplets are system behaviour elements: (Machine+World=Behaviour)

Friday, 31 January 2020

Cyber-Physical Systems

Vending machines and aircraft flight-control systems, radiotherapy machines and passenger lifts, medical infusion pumps and industrial robots, railway interlocking systems and radio-controlled toys, computer-controlled machine tools and car-park control systems, chemical plants and heart pacemakers: all of these—and countless other examples—are cyber-physical systems. Some are large, some small; some complex, some simple; some safety-critical, some inconsequential. All are purposeful human constructs: computing machinery has been introduced into the world to interact directly with the physical world outside the computer and to govern its behaviour.

A cyber-physical system is inherently bipartite. The machine executes specially designed software; the governed world comprises those parts and components and occupants of the physical world whose behaviour is governed by interaction with the machine. The system is not the machine alone: it is the two-part combination of the machine with the governed world, and it is their mutual interaction that evokes the system behaviour. Neither can make sense without the other.

The metaphor govern is important here. The governed world may include: parts of the natural world—for example, the earth’s local atmosphere—that may change unpredictably; engineered devices and infrastructure elements—for example, railway track or a road bridge—that may exhibit unexpected faults; and human participants—active or passive—in the system’s behaviour who have their own purposes and may act erratically, capriciously, or even malevolently. The prefix cyber is derived from a Greek word for the steering of a ship. The properties and forces of the vessel, the sea and the weather, must be both respected and exploited to steer the ship in the desired direction. The machine does not control the world unilaterally: rather it adapts its own behaviour—and hence also the world’s behaviour—to achieve the system’s purposes. The word govern, with all of its socio-political connotations, reflects the cooperative nature of this adaptation.

Not every system interacting with the physical world is cyber-physical. Some interact with the world only to monitor or predict its behaviour, leaving it unaffected. A meteorological system is not cyber-physical: it doesn’t change the weather. A car’s GPS navigation system is not cyber-physical: it monitors information about the local road system and the car’s position and trajectory, but it doesn’t drive the car or control the traffic lights. Some systems interact with the world but affect it only indirectly, by producing outputs for interpretation and response by human actors: these, too, fall short of the cyber-physical criterion. The GPS system does not become a cyber-physical system by virtue of advising the driver to change lane or to turn left. In a cyber-physical system, the machine causes physical effects directly, unmediated by any possibility of human intervention.

A cyber-physical system is unlikely to be strictly cyber-physical in all its parts and aspects. Many systems have a core cyber-physical functionality surrounded by more mundane ancillary functions. The core function of a railway interlocking system is governing train movement. One ancillary function captures the current layout of the track in a machine-readable form; another manages a track maintenance schedule and calculates its impact on train services; another displays train arrivals and departures in a station. These ancillary functions are not intrinsically cyber-physical; but in system operation they interact closely with the interlocking function itself, which controls train movements, and this interaction takes place without human intervention. The core interlocking function, with which they are interleaved, imbues the whole system with its cyber-physical character and its critical safety concerns.

Wednesday, 8 January 2020

System Behaviour Complexity

Your car is a cyber-physical system. You’re driving on the highway. The engine is powering the wheels through the drive train and the car is responding to the controls: that’s the core automotive behaviour. Cruise control is active; air conditioning is cooling the cabin; the lane departure warning feature is monitoring the car’s position in the lane; anti-skid braking is ready to avoid wheel lock in an emergency stop; active suspension is smoothing out bumps and maintaining safe attitude in acceleration and cornering. These are concurrent constituent behaviours of the complex system behaviour.

Other constituent behaviours are not active right now: the car can park itself automatically; a speed limiting feature can override reckless driving; stop-start can save fuel, turning off the engine when the car halts and restarting automatically when the driver wants to move away; each regular driver’s chosen settings of seat, wheel, and mirror positions can be saved, and restored automatically when the driver’s personal ignition key is used on another occasion. These too are constituent behaviours, available to contribute to the complex behaviour of the whole system.
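
A rough sketch may suggest how such constituent behaviours coexist in one running system, some active and some dormant; the Python names below are invented for illustration, and real automotive software is of course not structured this literally:

    # An invented sketch of constituent behaviours as concurrently schedulable
    # components, some currently active and others dormant but available.

    import threading
    import time

    class ConstituentBehaviour:
        def __init__(self, name: str):
            self.name = name
            self.active = threading.Event()

        def run(self, stop: threading.Event) -> None:
            while not stop.is_set():
                if self.active.is_set():
                    pass  # one step of this behaviour's interaction with the car
                time.sleep(0.01)

    stop = threading.Event()
    behaviours = [ConstituentBehaviour(name) for name in (
        "core driving", "cruise control", "lane departure warning",
        "self parking", "stop-start")]

    threads = [threading.Thread(target=b.run, args=(stop,)) for b in behaviours]
    for t in threads:
        t.start()

    # On the highway: the first three behaviours are active, the others dormant.
    for b in behaviours[:3]:
        b.active.set()

    time.sleep(0.05)
    stop.set()
    for t in threads:
        t.join()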

Your car is just one vivid illustration: complex system behaviour structured from simpler behaviour constituents is characteristic of realistic cyber-physical systems—aircraft, process plants, railway interlocking, even medical infusion pumps. In these complex behaviours dependability is vital—and demands structure.

For any structure two concerns are basic. First: how should we choose and construct the parts? Second: how should we design the connections between them? In a famous paper [1], Frank DeRemer and Hans Kron argued that for a large program of many modules programming-in-the-small and programming-in-the-large are distinct intellectual activities. For developing complex system behaviour the same argument is compelling. Behaviour-design-in-the-small identifies and develops individual constituent behaviours. Behaviour-design-in-the-large structures their relationships and interactions. These are distinct tasks.

Design-in-the-small should precede design-in-the-large both in exposition and practice. It is by stipulation simpler, allowing earlier discovery of defects of many kinds. Design-in-the-large is about combining parts, and it is folly to address combination before achieving understanding of the parts to be combined (this is why top-down design so often fails). In a constituent behaviour intrinsic and combinational sources of complexity can be carefully separated, giving a substantial benefit both to its developers and to participants in its enactments.

[1] Frank DeRemer and Hans Kron; Programming-in-the-large versus Programming-in-the-small; IEEE Transactions on Software Engineering, Volume 2, Number 2, pages 80-87, June 1976.

Links to other posts:
 ↓ Triplets:  Triplets are system behaviour elements: (Machine+World=Behaviour)

NATO and Vincenti

Programmable electronic computers emerged in the late 1940s. Just twenty years later, amid ubiquitous project failures, a 'software crisis' was identified. In 1968 the NATO Science Committee sponsored an international conference. The conference title itself—'Software Engineering'—carried an admonition: software development lacked essential theoretical foundations and practical disciplines of the kinds that had brought success in established branches of engineering.

The conference reports [1,2] show that among much—illuminating and sometimes brilliant—discussion of software development itself, the admonition of the conference title was largely unheard. Doug McIlroy pleaded for an industry of software components to emulate the availability of physical components. Andy Kinslow begged for fewer innovations and more progress towards normal designs: "I would like, just once," he said, "to try an evolutionary step instead of a confounded revolutionary one." And similar hints came from a few other participants. But no-one had been invited from the established engineering branches that the conference was intended to emulate. A sceptic might have asked: "If software engineers need to emulate structural engineers, why don't they invite them to a joint conference?"

A narrow focus on software development can be broadened by thoughtful reading. A wonderful book—a recommendation I owe to Tom Maibaum—is What Engineers Know and How They Know It [3], by Walter Vincenti. Vincenti draws on specific case studies from 1915 to 1945, to illustrate deep insights into the foundations, concepts and practices of aeronautical engineering in those years. He discusses the need for a community of cooperating practitioners to build up established normal design practices and products—such as Andy Kinslow begged for. He discusses the resolution of conceptual issues in the function and operation of an airplane: is the pilot an airman—almost like a bird—or a chauffeur—like the driver of a car? He illustrates a theoretical foundation for practice in the idea of a control volume—a spatial volume enclosed by an imaginary control surface that gives bounded form to a potentially unbounded problem. He illuminates the problem, and eventual solution, of satisfying what today we might call non-functional requirements by the twenty-five year effort to understand what was meant by flying qualities and how they could be satisfied. He discusses specialisations ranging from the overall design of an airplane to the long communal effort to engineer an effective design for the riveting that attaches the metal skin to an airplane's metal frame.

Thoughtful consideration of Vincenti's writings (and, not least, his copious footnotes) reveals their relevance to software engineering, and offers many lessons and insights that software engineers would be foolish to ignore.

[1] Peter Naur and Brian Randell eds; Software Engineering: Report on a conference sponsored by the NATO SCIENCE COMMITTEE, Garmisch, Germany, 7th to 11th October 1968; NATO, January 1969.
[2] J N Buxton and B Randell eds; Software Engineering Techniques: Report on a conference sponsored by the NATO SCIENCE COMMITTEE, Rome, Italy, 27th to 31st October 1969; NATO, April 1970.
[3] Walter G Vincenti; What Engineers Know and How They Know It: Analytical Studies from Aeronautical History; The Johns Hopkins University Press, Baltimore, paperback edition, 1993.