
Thursday, 17 September 2020

Two Use Cases

System requirements are often documented by use cases [1]. We can identify an important aspect of the use case concept and its application by comparing two simplified examples: using an ATM and using a lift. The traditional depictions are a stick figure labelled ATM_User with an arrow pointing to an ellipse labelled Use_ATM, and the same graphic with labels Lift_User and Use_Lift. Each use case describes a class of interaction episodes in which the user can obtain some desired result: the ATM_User can withdraw cash and perform other banking actions, and the Lift_User can travel from one floor to another. At first sight the two are closely parallel. Each episode takes place between the system—essentially what we call the machine—and the user, regarded as an external agent. Each use case is typically considered to be a requirement on the system, which must provide the desired results.

But Use_ATM and Use_Lift differ radically. Use_ATM describes a system constituent behaviour. The user is a participant, playing a role not unlike a pilot or a car driver. The user and machine engage in a close dialogue, the machine responding to each user request, sometimes by asking for further information—such as a PIN or the amount of cash desired. In Use_Lift there is no such dialogue. The user can press buttons, and enter or leave the lift at a floor, but the machine makes no response except when a button is newly pressed, when it illuminates the button and adds the corresponding floor to its set of outstanding requests. Individual users, and their comings and goings, are invisible to the system: nothing in the system behaviour corresponds to an instance of Use_Lift.
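
To make the contrast concrete, here is a minimal sketch in Python (illustrative only; the name LiftController and its methods are assumptions, not from the original post) of the machine's entire visible response to user actions:

class LiftController:
    """Sketch of the lift machine's response to user actions.

    Individual users never appear: the machine sees only button-press
    events and maintains a set of outstanding floor requests.
    """

    def __init__(self):
        self.outstanding = set()   # floors with pending requests
        self.illuminated = set()   # buttons currently lit

    def on_button_pressed(self, floor):
        # Respond only to a newly pressed button: illuminate it and
        # record the request. Repeated presses change nothing.
        if floor not in self.outstanding:
            self.outstanding.add(floor)
            self.illuminated.add(floor)

    def on_floor_served(self, floor):
        # Visiting a floor clears the request and the light.
        self.outstanding.discard(floor)
        self.illuminated.discard(floor)

Nothing in this state records the comings and goings of individual users: no variable or object corresponds to an instance of Use_Lift.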

We do not consider Use_ATM to be a requirement. Rather, it is itself a behaviour design detailing the interaction between a governed world domain—the ATM_User, singled out as uniquely significant—and the rest of the system. By contrast, Use_Lift can be understood as a requirement on a constituent behaviour NLS (Normal_Lift_Service) of the lift system. It describes action sequences available to a user at a floor lobby wanting to travel to the lobby of another floor. The requirement is that users performing certain of these action sequences are enabled to achieve their desired result. There will be further requirements on the NLS behaviour—for example, concerning dwelling time at a visited floor, lift car rest position when no requests are outstanding, maximum wait times, and many more. The Use_Lift requirement is simply that the system's NLS behaviour must afford travel opportunities between floors, much as the requirement on a train service is to afford travel opportunities between stations.

[1] Ivar Jacobson, Ian Spence, Kurt Bittner; Use-Case 2.0: The Definitive Guide; 2011.

Links to other posts:
 ↑ System Behaviour Complexity: The component structure of system behaviour

Thursday, 9 July 2020

The ABC of Modelling

The machine program and the governed world model are the two parts of a simple constituent behaviour. Neither makes sense alone: they must be co-designed. The model describes the properties on which the program relies, defines the behaviour in the world, and shows that the behaviour results in satisfaction of the requirements. These are the Axiomatic, Behavioural, and Consequential aspects—the A, B, C of modelling-in-the-small. Although these aspects follow an obvious logical sequence, they too must be co-designed.

The axiomatic model describes the infrastructure of governed world domains—including the human participants—and the external and internal causal links they effectuate. A link is a causative temporal relation among domain states and events; the property of effectuating the link is imputed to a specific domain. Causal links connect the phenomena of the interface sensors and actuators to other phenomena of the governed world. They are axioms—unquestioned assumptions for designing the constituent behaviour: justified by the specific context and environment conditions for which the behaviour is designed and in which alone it can be enacted. Causal links are complex, usually having multiple effects and multiple causes; in general, effectuation of a link by a domain depends on domain and subdomain states. These and other complexities must be systematically examined and documented to fulfil the axiomatic promise of unquestioned reliability.
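
As an illustration only (the record structure and its field names are assumptions, not part of the post), a documented causal link might be captured like this:

from dataclasses import dataclass, field

@dataclass
class CausalLink:
    """One documented axiom of the governed world."""
    effectuating_domain: str    # domain to which the link is imputed
    causes: list                # triggering events and states (often several)
    effects: list               # resulting events and states (often several)
    enabling_conditions: list = field(default_factory=list)
    # domain and subdomain states on which effectuation depends
    justification: str = ""     # why the axiom holds in this context and niche

button_link = CausalLink(
    effectuating_domain="LobbyButton",
    causes=["button pressed by a user"],
    effects=["request signal at the sensor interface"],
    enabling_conditions=["button circuit powered"],
    justification="button hardware specification; maintenance regime",
)

Recording each link with its effectuating domain, enabling conditions and justification is one way of making the systematic examination explicit.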

The behavioural model is a state machine, formed by combining the axiomatic model with the machine program's behaviour at the interface. The model should be comprehensible, both to its developers and to any human participant in the enacted behaviour—for example, a plant operator or a car driver. To take proper advantage of the reliability afforded by the axiomatic model, states and events of the model state machine should be justified explicitly by reference to causal links and their effectuating domains. The behavioural model defines the trace set of the behaviour: the set of exactly those traces over events and state values that can occur in an enactment of the behaviour. It must be possible to determine whether a putative trace—for example, a trace that contravenes a relevant requirement—belongs to the trace set.
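
The trace-set idea can be sketched in code (a simplified illustration, tracking events only and omitting state values; the toy lift model is hypothetical):

def in_trace_set(transitions, initial, trace):
    """True iff the event sequence can occur in an enactment."""
    state = initial
    for event in trace:
        state = transitions.get((state, event))
        if state is None:
            return False   # no such step: the putative trace is outside the set
    return True

# A toy behavioural model: (state, event) -> next state
lift_model = {("Idle", "press"): "Moving", ("Moving", "arrive"): "Idle"}

assert in_trace_set(lift_model, "Idle", ["press", "arrive"])   # possible
assert not in_trace_set(lift_model, "Idle", ["arrive"])        # impossible

A putative trace that contravenes a requirement can then be tested for membership: if it belongs to the trace set, the design is faulty.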

The consequential model, based on the behavioural model, aims to show that all relevant requirements will be satisfied during the behaviour enactment. Satisfaction may be an obvious consequence of the behaviour, possibly with additional causal links and phenomena. It may need informal judgment or operational experiment—for example, of emotional and psychological reactions of human participants in the governed behaviour. It may need investigation and discourse outside the scope of software engineering—for example, of profitability or market acceptance. There are rich fields of activity here, but the primary focus of this blog is on the axiomatic and behavioural models. They are emphasised because they must reflect the fundamental competence of a software engineer—to predict, with high dependability, the governed world behaviour that will result from executing the developed software.

Links to other posts:
 ↑ Physical Bipartite System: The nature of a bipartite system
 ↑ System Behaviour Complexity: The component structure of system behaviour
 ↑ Reliable Models: Reliability in a world of unreliable physical domains
 ↑ Triplets: Triplets (Machine+World=Behaviour) are system behaviour elements
 ↑ Requirements and Behaviours: Requirements are consequences of behaviours

Thursday, 9 January 2020

Requirements and Behaviours

Requirements are what’s wanted. Who wants them? The stakeholders—people and organisations with a stake in the system. These are the system’s owners, the funders of the development, regulatory agencies, people participating in the system behaviour in various roles, recipients of the system’s services, occupants of neighbouring facilities and amenities—and anyone else who may be affected for good or ill by the system's creation and its consequences. Each has a claim to contribute their demands, needs, purposes and desires to the bill of requirements to be satisfied.

For this blog the requirements of interest are behaviour-based: their satisfaction flows from—and depends on—the system behaviour in operation. This criterion excludes project requirements—choice of development techniques, team composition, and budget and schedule; but system functions, throughput and response times, usability, dependability, security, privacy and safety are all included. Also, because the same requirements can be satisfied by different behaviours, they should stipulate effects and consequences of the behaviour, not the behaviour itself. Putting the same point in a simple programming context: requirements should not describe the computation itself, but its desired result.
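
In code the distinction is familiar (a sketch; the names are illustrative): the requirement is a predicate on the result, and any of several computations may satisfy it:

def satisfies_requirement(input_list, output_list):
    # The requirement: the output is in ascending order and is a
    # permutation of the input. Nothing is said about the computation.
    in_order = all(a <= b for a, b in zip(output_list, output_list[1:]))
    same_items = sorted(input_list) == sorted(output_list)
    return in_order and same_items

# Quicksort, mergesort and insertion sort are different behaviours,
# all satisfying the same requirement:
assert satisfies_requirement([3, 1, 2], [1, 2, 3])
assert not satisfies_requirement([3, 1, 2], [3, 1, 2])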

The IEEE Standard for SRS (Software Requirements Specifications) [1] seems broadly to agree: "... avoid placing either design or project requirements in the SRS". But it continues: "An SRS should be Correct; Unambiguous; Complete; Consistent; Ranked for Importance; Verifiable; Modifiable; and Traceable". The ambition is clear: requirements must state full necessary and sufficient conditions for acceptability of the system described. Scott Adams offers a wiser perspective. Dilbert asks his customer: "First, tell me your market requirements." The customer responds: "No, you tell me everything you can design, then I'll tell you which one I like." Correct and complete requirements can make sense only when the desired product is already known and available: "I want an iPhone 11 Pro, Midnight Green, 64GB." In the absence of advance knowledge of the product, requirements and design must proceed together [2].

Confronted by the difficulties requirements pose, we may be tempted to abandon the idea of requirements altogether. In place of requirements, stakeholders are sometimes offered fragmented informal descriptions of what is hoped or believed to be a desirable system behaviour. For example: “When the driver presses the Resume button the car accelerates to the Target Speed,” or “If Hi-Power mode is not selected then Hi-Power mode shall be selected when the operator presses the Hi-Power button unless the battery charge indicator shows Low or Very Low.” This practice is misguided. A collection of stimulus-response fragments is a poor way to describe any behaviour: purposeful behaviour needs a text pointer, but a collection of fragments drawn from multiple unidentified constituent behaviours can have no text pointer.
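
The point can be seen in code (an illustrative sketch; the atm operations are hypothetical). A purposeful behaviour written as a sequence carries its progress in the text pointer:

def withdraw_cash(atm):
    # Progress through the episode is held in the text pointer:
    # each step is read in the context of what has already happened.
    card = atm.await_card()
    pin = atm.request_pin()
    if not atm.verify(card, pin):
        atm.eject_card()
        return
    amount = atm.request_amount()
    atm.dispense(amount)
    atm.eject_card()

# The same behaviour dispersed into stimulus-response fragments
# ("when PIN entered, verify"; "when amount entered, dispense"; ...)
# keeps no record of progress: the reader must reconstruct the
# intended sequence, and nothing guarantees that one exists.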

Another approach relies on use cases. But that's another story.

[1] IEEE Std 830-1998: IEEE Recommended Practice for Software Requirements Specifications; IEEE, 1998 (later superseded).
[2] Bashar Nuseibeh; Weaving Together Requirements and Architectures; IEEE Computer 34, 3; March 2001.

Links to other posts:
 → System Behaviour Complexity: The component structure of system behaviour
 → The Text Pointer: Why the program text pointer matters
 → Use Cases: What are Use Cases for?

Monday, 2 December 2019

Landscape At a Distance

The landscape is what we choose to see when we look out at the physical universe from our particular point of view. We structure and interpret our view of the landscape according to our needs and purposes. For software engineers working on a cyber-physical system the landscape can be described as follows:

The bipartite system is the central landscape feature: its two parts are the machine and the governed world, interacting at the interface labelled a.

The machine is the computing equipment we introduce into the world. It may be all or part of several computers, but in earlier development stages we see it as one computing machine, in which we structure the software to reflect the structure of the system behaviour. Software deployment on physical computers must wait until a later stage.

The governed world contains parts of the physical world whose behaviour is governed by the machine executing the software we develop. In a critical system, the governed behaviour is the critical dependable behaviour for which the software engineers take full professional responsibility.

The requirements world is those parts of the world in which stakeholders' requirements stipulate effects and consequences that the desired system behaviour must produce. It does not include the machine, which is instrumental in satisfying the requirements but is not itself directly subject to stakeholder requirements. It includes the governed world, but it extends also to parts of the world subject to requirements that resist formal treatment—for example, ecological, social and economic concerns—and whose satisfaction therefore cannot be guaranteed.

The environment is the universe beyond the machine and the requirements world, extended both in space and time. Inescapably, any system is designed only for an environment niche, where enough of its design models and assumptions are valid to allow continued operation. For example, an automotive system cannot operate underwater or in an earthquake. A critical system may demand the largest possible niche, to allow survival—albeit with reduced functionality—even in extremely adverse conditions. The niche for the Fukushima nuclear plant allowed for both earthquake and tsunami, but in 2011 both occurred together, leading to a nuclear disaster.

The very terse summary in this post gives only a distant view of the landscape, raising many questions but answering none of them. Some answers are given in another post that looks a little closer and a little deeper.

Links to other posts:
 → Landscape Close-Up: A closer view of the universe as seen by a software engineer