February 28, 2006
In resilience engineering, failure is the flip-side of the adaptations necessary to cope with the complexity of the real world, rather than a breakdown or malfunctioning as such. The performance of individuals and organizations must at all times adjust to current conditions and, because resources and time are finite, such adjustments are always approximate. Success has been ascribed to the ability of organizations, groups and individuals to anticipate the changing shape of risk before failures and harm occur. Failure is simply the absence, temporary or permanent, of that ability.
Resilience engineering also views safety as a system property that emerges from an aggregate of components, subsystems, software, organizations, human behaviors, and their interactions. Safety is consequently something a system does, rather than something it has. It is not sufficient that systems are reliable and that the failure probability is below a certain threshold; they must also be resilient, with the ability to recover from the regular variations, disruptions and degradation of expected working conditions. Resilience engineering addresses the need to develop systems that actively prevent control from being lost.
The aim of this book is to provide an introduction to resilience engineering of systems, covering both the theoretical and practical aspects. It is written for people who, as part of their work, are responsible for system safety on managerial or operational levels alike. Resilience Engineering will be directly relevant to professionals such as safety managers and engineers (line and maintenance), security experts, risk and safety consultants, human factors professionals and accident investigators.
Erik Hollnagel became an Industrial Safety Chair at École des Mines de Paris in 2006, after having been Professor of Human-Machine Interaction at Linköping University, Sweden, since 1999. He has previously worked with numerous industries, research institutes and universities in several countries, including the OECD Halden Reactor Project (N), Human Reliability Associates (UK), Computer Resources International (DK), University of Copenhagen (DK), and Risø National Laboratory (DK). He is an internationally recognized specialist in the fields of industrial safety, human reliability analysis, cognitive systems engineering, and complex human-machine systems, and the author of more than 350 publications, including 12 books.
David D. Woods (Purdue ’79) is Professor at Ohio State University, Institute for Ergonomics, and Past President of the Human Factors and Ergonomics Society. From his initial work following the TMI accident in nuclear power, to studies of coordination breakdowns between people and automation in aviation accidents, to his role in today’s debates about patient safety, he has studied how human and team cognition contributes to success and failure in complex, high-risk systems. He currently serves on the National Academy of Engineering/Institute of Medicine study panel applying engineering to improve health care systems, and on a National Research Council panel on research to define the future of the national air transportation system.
Nancy Leveson is Professor of Aeronautics and Astronautics at MIT. She works in the areas of system safety, human-computer interaction and software engineering, in a variety of industries including nuclear power, space systems, aviation, medical devices, and transportation. Dr. Leveson is a member of the National Academy of Engineering and consults widely.