

The Field Guide to Understanding Human Error

When faced with a human error problem, you may be tempted to ask 'Why didn't they watch out better? How could they not have noticed?'. You think you can solve your human error problem by telling people to be more careful, by reprimanding the miscreants, by issuing a new rule or procedure. These are all expressions of 'The Bad Apple Theory', where you believe your system is basically safe if it were not for those few unreliable people in it. This old view of human error is increasingly outdated and will lead you nowhere.

The new view, in contrast, understands that a human error problem is actually an organizational problem. Finding a 'human error' by any other name, or by any other human, is only the beginning of your journey, not a convenient conclusion. The new view recognizes that systems inherently involve trade-offs between safety and other pressures (production, for example). People need to create safety through practice, at all levels of an organization.

Breaking new ground beyond its successful predecessor, The Field Guide to Understanding Human Error guides you through the traps and misconceptions of the old view. It explains how to avoid hindsight bias, zoom out from the people closest in time and place to the mishap, and resist the temptation of counterfactual reasoning and judgmental language. But it also helps you look forward. It suggests how to apply the new view in building your safety department, handling questions about accountability, and constructing meaningful countermeasures. It even helps you get your organization to adopt the new view and improve its learning from failure. So if you are faced with a human error problem, abandon the fallacy of a quick fix. Read this book.

252 pages, Paperback

First published January 1, 2002

220 people are currently reading
2,615 people want to read

About the author

Sidney Dekker

41 books, 55 followers
Sidney W. A. Dekker (born 1969 near Amsterdam) is a Professor at Griffith University in Brisbane, Australia, where he founded the Safety Science Innovation Lab. He is also Honorary Professor of Psychology at the University of Queensland.

Previously, Dekker was Professor of human factors and system safety at Lund University in Sweden, where he founded the Leonardo da Vinci Laboratory for Complexity and Systems Thinking, and flew as First Officer on Boeing 737s for Sterling and later Cimber Airlines out of Copenhagen. Dekker is a high-profile scholar and is known for his work in the fields of human factors and safety.

Ratings & Reviews


Community Reviews

5 stars: 273 (45%)
4 stars: 212 (35%)
3 stars: 95 (15%)
2 stars: 17 (2%)
1 star: 3 (<1%)
Displaying 1 - 30 of 67 reviews
Morgan Blackledge
785 reviews, 2,561 followers
September 2, 2021
Sidney Dekker writes about risk management with clarity, precision, intensity and commitment.

Dekker argues that human error is an attribution assigned in hindsight, based on a point of view that traces time backwards, beginning with an accident.

In other words: people start with the critical incident, assume human error was the cause, work backwards in time, identify the “bad behavior”, and assign blame accordingly.

The problem, obvious to anyone familiar with experimental methods, is that starting with an assumption and working backwards is not a reliable way to establish causation.

Not even close.

Hindsight bias in practice means that you can take two otherwise identical situations, and run situation (A) with a critical incident, and situation (B) without, and draw completely different conclusions regarding the “chain of events” that leads to each outcome, based on your initial assumptions.

The reality is causation is extremely hard to establish.

Even in tightly controlled experimental environments.

But in the real world, with its millions of intersecting factors and uncontrollable variables.

It’s close to impossible.

And when it comes to preventing big accidents, like plane crashes and nuclear power plant failures, correctly identifying the actual problem is the lion’s share of prevention.

Dekker observes that employee complacency is the ubiquitous knee-jerk explanation for accidents. But he further quips that most of the time, the things that explain everything usually explain very little.

Dekker assumes that people (by and large) want to do a good job, and desperately want to avoid making awful mistakes that could get them fired or injured.

Dekker asserts that human error is not a reason for an unfortunate event. It’s almost always better conceived of as the consequence of flaws in design or something deeper within the organization.

Rather than chalking organizational and design issues up to human error, management should take responsibility, learn from it, and modify the system to reduce the likelihood of the problem recurring.

In other words.

If we blame employees or, worse, clients.

We lose the opportunity to learn and become a better organization.
Matt
9 reviews
September 28, 2019
Must read for everyone in a leading position.
Great read for everyone else cause it probably will make you less of a dick at the workplace. ;)
Not super exciting read, but an important one.
Ali Sattari
125 reviews, 34 followers
January 9, 2020
A concise book with to the point argument and examples and with a pretty good narrative. Most if not all of arguments can be made for incident management and firefighting in IT and Software industry as well.
Mike Pearce
20 reviews, 2 followers
March 15, 2017
This book is an absolute must read for anyone "in charge of" or responsible for people. Superb view on "human error".
Matthew Horvat
20 reviews, 9 followers
August 30, 2009
Have a safety incident that you are pissed about? Ever kick someone off the job for not wearing their safety glasses? I happened to be following the safety officer when he asked the drywall guy to send his plasterer home for the day. This was the second confirmed occurrence of the violation. Later I happened to be in a planning meeting when the plastering foreman informed the group that they were late delivering what was needed because of the safety officer.

Nobody ever asked the guy why he removed his glasses. It made sense for him to do it even after he had been formally requested to use them again. I wonder what is going to happen tomorrow? He'll either be back fighting the same problem or someone else will be in his way.

Sidney Dekker outlines circumstantial evidence of what occurs during accidents. He discusses learning about the mindset of the participants when incidents occur, and he shows how getting angry or indignant at the accident prevents learning. In this field guide he continues to give practical recommendations about how to move forward with investigating human error, and explains why it is so frequently done with no learning results.

Accidents can no longer be viewed as isolated individual failures; they should be viewed as systemic problems. Everyone is responsible, vulnerable and able to change the system. We can't just stop at searching for the Bad Apple; we must broadly embrace the New View - the system view. "Problems are at least as complicated as the organizations that created them."

While reading, the possibility of a New Safety Department opened up. One where learning is at the forefront; people are interested in exposing real sources of operational vulnerability, followed by building a better system… Nobody is retrained, reprimanded, proceduralized or tech'ed when an accident occurs. PDSA takes on a whole new meaning, along with the possibility of a construction site that is not imminently dangerous.
Mark McGranaghan
25 reviews, 20 followers
February 19, 2014
This book examines "human error" as the natural outcome of systems where "people were doing reasonable things given the complexities, dilemmas, trade-offs and uncertainty that surrounded them."

It argues that systems "are not basically safe" but that "safety needs to be created through practice, by people", especially when there is a lot of technology and automation involved.

It argues for confronting the trade-off between production and safety, and for examining how organizational and management messages about the relative importance of conflicting goals cascade to front-line decisions.

Overall a great resource for engineering/operations managers looking to improve safety outcomes in their organizations.

Leszek Godlewski
70 reviews, 3 followers
July 30, 2023
What an absolute slog to read.

It was recommended to me as "life-changingly insightful"; that was a massive overstatement. There have been a few interesting points (quite obvious in hindsight, however, which is ironic, considering the contents of the book), but *THE REPETITION*… Holy crap.

Perhaps if one consults the book like a manual while actually working on safety, then it's bearable. However, I took it as a continuous read, thinking the "field guide" part of the title is figurative. It is not. I ended up skipping over entire paragraphs, because they were restating the same points over and over AND OVER AND OVER again, not even aiming for variety in language.

Brace yourself before reading.
Engin
16 reviews, 8 followers
July 19, 2019
This book should be required reading for anybody responsible for anything. It is written as a field guide for aviation accident investigators and contains a lot of very specific tips, tricks and details. While these might be less relevant for people from other industries, the basic idea of how to approach and understand human error remains 100% valid.
Nathan Powell
24 reviews
November 4, 2012
Excellent book in my opinion. If you want to think about error and mistakes and how to prevent them systemically, this is the book for you. It is written from the perspective of accidents and safety; however, all professionals who want to prevent errors will get something from this book.
Brad
9 reviews, 9 followers
January 14, 2014
Definitive book on an important subject, but it felt repetitive, like the entire book could have been one quarter the length. Still, a must read for anyone who deals with complex systems and, well, humans.
João Paiva
44 reviews, 6 followers
November 17, 2021
This is a book about safety, with particular focus on the role of humans in systems. I already agreed with its main premise before reading it. Using "human error" as the main cause of an incident ignores the real problems and creates a missed opportunity for further learning. The book goes into detail on the consequences of using human error as a classification. These include poor follow-up actions and creating a culture of fear (one prone to further error), among others.

Despite being aware of its main premise, I took valuable learnings from the book. I loved the notion of counterfactuals (i.e. what *didn't* happen) and how they're not particularly useful in incident reviews. Another notion that resonated with me was the distinction between the blunt and sharp ends of work. They roughly represent non-operational vs operational work, and both influence safety. Finally, I also found it very interesting that when we add technology to improve safety, the result may not be fewer incidents; we often use that margin to improve performance instead.

A few highlights:
On reprimanding people who made an error. "Getting rid of Bad Apples tends to send a signal to other people to be more careful with what they do, say, report or disclose. It does not make 'human errors' go away, but does tend to make the evidence of them go away."

On the effect of new safety technology. "Although first introduced as greater protection against failure (more precise approaches to the runway with a Head-Up-Display, for example), the new technology allows a system to be driven closer to its margins, eroding the safety margin that was gained."

On drifting into failure. "Murphy's law is wrong. What can go wrong usually goes right, and then we draw the wrong conclusion: that it will go right again and again, even if we borrow a little more from our safety margins."
Gijs Limonard
1,113 reviews, 26 followers
August 12, 2022
Excellent, practical and actionable insights in this easy-to-read ‘field guide’. The author sums it up in a single paragraph:

Human error is not the cause of failure, but the effect. So ‘human error,’ under whatever label (“loss of situation awareness,” complacency, inadequate CRM) can never be the conclusion of your investigation. It is the starting point. Explaining one error (for example, operator error) by pointing to another (inadequate supervision, deficient management, bad design) does not explain anything. It only judges other people for not doing what you, in hindsight, think they should have done.
Grant
11 reviews, 4 followers
March 22, 2025
Dekker's written the textbook for Human/Organizational Performance that everything in the HP/HOP community expands from. If you are on the HP/HOP journey and this book is missing from your list, stop what you are doing and read this book!
Chris Weatherburn
Author, 1 book, 1 follower
February 13, 2021
Here is my summary. This is a book about human error, so it unsurprisingly focuses on the airline and medical industries. Is safety about making sure that a few things don't go wrong, or that as many things as possible go right (not only the severe ones)? There is a balance to be struck. The old view of safety sees people as a problem to control, whereas the new view of safety sees people as a resource to harness.

He explains hindsight bias: finding out about an outcome increases the estimate we make of its likelihood. In other words, as a retrospective reviewer who knows the outcome of an event, you exaggerate your own ability to have predicted and prevented the outcome. Of note, most people are not aware of this bias when analysing adverse events.

The outcome bias. Once you know the outcome, it changes your evaluation of decisions that led up to it. If the outcome is bad, then you are not only more willing to judge the decisions, but also more likely to judge them more harshly.

Divide an operational system into a sharp end and a blunt end:
At the sharp end (for example the train cab, the cockpit, the surgical operating table), people are in direct contact with the safety-critical process.
At the blunt end is the organization or set of organizations that both supports and constrains activities at the sharp end (for example, the airline or hospital; equipment vendors and regulators).

Consider starting an investigation at the blunt end rather than the sharp end.

Make an effort to understand ‘human error,’ and avoid micro-matching or cherry-picking. You have to put yourself in their shoes at the time and imagine that you don't know the outcome. Try to reconstruct which cues came when, and which indications may have contradicted them. Try to envisage what the unfolding trickle or flow of cues and indications could have meant to people, given their likely understanding of the situation at the time.

Try to understand that their understanding of the situation was not static or complete, as yours perhaps is in the review situation. Theirs was incomplete, unfolding and uncertain.

There are a few names for human error.
Ineffective crew resource management (CRM) as the explanation for why a plane crashed: the failure to invest in common ground, to coordinate operationally significant data among crew members.

“Loss of situation awareness”: the failure to notice things that in hindsight turned out to be critical.
Complacency is also a name for ‘human error’: the failure to recognize the gravity of a situation or to follow procedures or standards of good practice. Complacency is an incorrect strategy that leads to sub-optimal monitoring. Important signals may be missed because of operator complacency, because operators have too great a trust in their systems doing the right thing. It is essential in the battle against complacency to help people retain situation awareness; otherwise they keep missing those warning signals.

“Non-compliance with procedures is the single largest cause of ‘human error’ and failure.” This book clearly points out that labelling things isn't really helpful. Commonly it is perceived that there is a need to establish the root cause; however, there is often not a single root cause, and in fact many factors interplay.

There is a concept known as plan continuation in which early and strong cues suggest that sticking with the original plan is a good, and safe, idea. Only later, and weaker, cues suggest that abandoning the plan would be better. In hindsight, it is easy to forget to see the cues from the point of view of people at the time, and when and how strongly they appeared.
You must appreciate that the event itself may take only moments, whereas afterwards a large amount of time can be spent studying the adverse outcome, when time is no longer as crucial a factor.

Dynamic fault management is typical of event-driven domains: when a situation is unfolding, people have to commit cognitive resources to solving problems while maintaining process integrity. In other words, other things don't stop; people need to keep the aircraft flying (or the patient breathing) while figuring out what is going wrong.
Not trouble-shooting or correcting may challenge the integrity of the entire process.
“Investigation” suggests that the ultimate aim is to find out where things went wrong, to be able to offer the one official or definitive account of what happened. Dekker suggests that this misses the point, as there often isn't one thing or person that is the cause. The ultimate aim is to learn and improve.

Checklist and procedure assumptions
Assumption 1—The environment is linear.
Assumption 2—The environment is predictable in which tasks and events can all be exactly anticipated, both in nature and timing.
Assumption 3—The environment is controllable.

It is worth acknowledging that complacency may arise when an automated process is perceived as highly reliable: operators may not merely trust it, but trust it too much, so that they fail to monitor the variables often enough.

There are different models to evaluate errors:
Hazard Triangle
Swiss cheese
Chain of events
Barrier model

All have different advantages, but you need to think about which factors are present when you decide which model to use and what to show in it. In addition, there may not be a clear timeline.
Often trade-offs occur when one aspect of safety conflicts with another part of the business process. These little trade-offs are negotiated and resolved in the form of thousands of smaller and larger daily decisions. In time these are no longer decisions and trade-offs made deliberately by the organization, but by individual operators or crews.

What then is accepted as risky or normal will shift over time:
as a result of pressures and expectations put on them by the organization;
as a result of continued success, even under those pressures. This is known as drift into failure. Drift happens insidiously.

Murphy’s law is wrong. What can go wrong usually goes right, and then we draw the wrong conclusion: that it will go right again and again. It is with this that we borrow a little more from our safety margins.

A safety culture is a culture that allows the boss to hear bad news.

What presents difficulty on a daily basis, the often-encountered workarounds and frustrations? Such things might indeed be better predictors of system safety and risk than your formally reported incidents. To apply this principle in your next safe work observations, do not walk around telling people how they are supposed to work. Try to understand why they work the way they do, and why it is, or seems, normal to them at the time.

If you are running a safety department, try to be the concerned outsider who understands the inside, independent of how you get your safety intelligence. Aim to establish constructive involvement in management activities and decisions that affect trade-offs between safety and efficiency. In terms of qualifications, just being a practitioner (or having once been one) does not in itself qualify a person to be a key member of a safety department. Safety staff members should want to be educated in safety management.

Safety has increasingly morphed from an operational value into bureaucratic accountability. Those concerned with safety are more and more removed organizationally, culturally and psychologically from those who do safety. Workers who perform critical work at the sharp end can come to see the safety processes developed or enforced bureaucratically by those at a distance from the operation as "fantasy documents". Fantasy documents bear no relation to actual work or actual operational expertise.

Avoid disputes based on one group vs another group, such as one set of job roles versus another.
So can ‘human error’ go away? The answer isn’t as simple as the question. A ‘human error’ problem, after all, is an organizational problem. It is at least as complex as the organization that has helped create it. To create safety, you don’t need to rid your system of ‘human errors’. Instead, you need to realize how people at all levels in the organization contribute to the creation of safety and risk through goal trade-offs that are legitimate and desirable in their setting.

Rather than trying to reduce “violations,” aim to find out more about the gap between work-as-imagined and work-as-done—why it exists, what keeps it in place and how it relates to priorities among organizational goals (both stated and unstated).

Aim to learn about authority-responsibility mismatches, in which you expect responsibility of your people but the situation is not giving them the authority to live up to that responsibility.

You know your organization is improving when it tries to learn about safety, including by calibrating whether its strategies for managing safety and risk are up to date.
Every organization has room to improve its safety. What separates a strong safety culture from a weak one is not how large this room is. The most important thing is that the organization is willing to explore this space, to find leverage points to learn and improve.

If you liked this feel free to check out my website for more:
VLOG summary:
Tõnu Vahtra
593 reviews, 93 followers
September 26, 2018
“Underneath every simple, obvious story about ‘human error,’ there is a deeper, more complex story about the organization.” I expected more substance from this book, but since it is relatively short it cannot really go into much detail. Also, this was the first version of the book (written in 2002), so the examples were from the aerospace and mining industries; it's about time to cover this topic in more detail in the context of an IT organization. I would not consider the "old approach to safety culture" an approach at all, but simply ignorance, where an organization merely aims to hide its incidents and risks and prove to regulators that its minimal due diligence is done. A significant amount of the book focuses on hindsight bias and the risks of tackling safety issues through bureaucratic controls. Some out-of-the-box thinking also comes in handy, for example the recommendation during WWII to reinforce those parts of fighter planes that are not full of bullet holes (because such planes made it back; the ones that were shot elsewhere did not). It is very important to observe how the work is done in the real world versus the established procedures, which might not be fit for purpose.

“Safety improvements come from organizations monitoring and understanding the gap between procedures and practice.”

“People do not come to work to do a bad job. Safety in complex systems is not a result of getting rid of people, of reducing their degrees of freedom. Safety in complex systems is created by people through practice—at all levels of an organization.”
CA Junior
14 reviews
April 13, 2025
Understanding human error not as the cause of accidents, but as a reflection of systemic, technological and organizational failures and problems: that is the proposal of this book. It is a relevant debate for professionals in any field, especially those who struggle to understand what went wrong in order to prevent future accidents. Pointing the finger at someone, finding culprits, saying it was human error, is not enough. The error is the beginning of the investigation, not its end.

If, on the one hand, this book is valuable for professionals from very different fields, on the other the author focuses on examples from aviation, and that can be a problem. Not that aviation is not rich in human factors, accident investigation and safety, quite the contrary, but because it can be difficult to explore in depth the real examples presented throughout the text. The accident with the AA aircraft in Colombia, for example, is extremely rich and deserved a much longer description and discussion than the one Dekker presents. For anyone who has never read about this event, I fear the brief description in the book is not enough for real reflection.

The book is divided into two parts. In the first, the author argues that human error is really just a reaction shaped by the context existing at that moment: the pressures in play, the information received, the focus given, the organizational and technological failures, among many other factors. The action made sense in that person's context; that is the main point. In other words, the error did not occur because of a senseless action taken by an unintelligent human. It is up to the investigator to understand this decision context. The second part of the book focuses more on how an investigator can work in this direction. While the first part is rich in real examples, the second part is drier and more direct, with the author more concerned with creating a list of steps the investigator can adopt. Unfortunately, it lacks examples of real investigations where this technique was applied, excerpts from reports where one could see how it played out.

In any case, the idea defended by the author is no longer new or unprecedented these days, which does not make the book any less important - quite the contrary. What is perhaps missing from this book is a discussion of the cases in which blame does exist, where whoever committed the error is culpable and, within a just culture, should be punished. Other books explore the application of culpability diagrams and the building of a just culture in organizations. Saying that everything is the fault of human error does not help safety, but saying that blame never exists is to ignore countless real cases.

Maria
4,418 reviews, 111 followers
July 2, 2019
Dekker works in the field of disaster recovery and study, trying to learn from plane crashes what happened and what factors led to the accident. He gives several examples of reports and points out that in order to learn from these reports and be safer in the future, companies can't just point to "Bad Apples" and add another safety check. Learning how the system works and its pressures helps insiders understand its weaknesses and know where outside or inside safety procedures could make things better.

Why I started this book: I'm slowly working my way thru my Professional Reading titles...

Why I finished it: This was a hard book for me... I checked out the first edition and it took a while to get into the groove, so much so that it was due back at the library. The hold list was long enough that I started looking for alternative sources and found an audio copy of a much later edition. Big improvement. Dekker points out that there is a huge difference between pointing out who is to blame and learning from an accident so that it doesn't happen again. People want to be safe at work, and their actions at the time make sense to them in the context of the demands of the job and the information available. More bureaucracy isn't always the solution, and unless safety procedures are reviewed regularly they become burdensome and people stop complying.
Joel Bastos
Author, 1 book, 25 followers
September 2, 2019
An excellent crash course on gaining perspective when trying to understand "Human Error". In a nutshell, you should never blame human error when trying to understand the real causes of a failure, as doing so renders the analysis ineffective, preventing the visibility required to understand the causes.

In highly complex systems, it's much easier to blame "Human Error" than to unravel the web of complexity, the missing or bad processes in place, the never-ending interdependencies and the perspective of the ones involved, all of which in hindsight makes it enticing to blame humans.

A few of my favourite quotes:

"Asking what is the cause, is just as bizarre as asking what is the cause of not having an accident. Accidents have their basis in the real complexity of the system, not their apparent simplicity."



"Saying what people failed to do has no role in understanding 'human error'."



"There is almost no human action or decision that cannot be made to look flawed and less sensible in the misleading light of hindsight. It is essential that the critic should keep himself constantly aware of that fact."



"Underneath every simple, obvious story about 'human error', there is a deeper, more complex story about the organization."

Andy
1,914 reviews, 576 followers
July 13, 2024
This approach to understanding workplace mishaps reminded me of the public health perspective as opposed to the bio-medical model for understanding disease. Dekker keeps talking about going upstream, looking at conditions, etc.
He undermines conventional explanations, like "human error," "the Swiss cheese model," etc. He explains how they are dangerous because they lead to bad solutions like punishing people who report errors or creating redundant barriers that increase complexity. Things like the Swiss cheese model seem useful because they can help explain the last few minutes of a mishap, but they don't get at deeper causes.

One thing that disappointed me was the recurring use of "accident" even though he is otherwise clearly explaining the concepts of "man-made disaster" and underlying risks.

The book is quite short, and contains true stories about crashes and such. My favorite anecdote was the one about putting armor where returning bomber planes didn't have holes, not where they had holes. (This is a good illustration of the 2x2 table or "critical experiment" concept, which is very basic in science but seems to get forgotten all the time in the real world.)
Cherie
166 reviews, 1 follower
May 7, 2024
Reading required for my graduate program. In terms of fault diagnosis, this book explores the root cause(s) of human error. I do appreciate Sidney Dekker's explanation; often human error is really a systemic error. Various real-life examples spring to mind while listening to the book.

I do question whether genuine human error can be fully eliminated. I understand there is fault in the Bad Apple theory, although are there some cases where human error really is the result of a "Bad Apple"? An individual who is insubordinate and has the desire to cause disruption. Mr. Dekker operates from an inherently good perspective regarding people. I think, like other things, inherently good people might be on a bell curve. Most people are inherently good, although occasionally there truly are some "bad apples". Overall I agree with Mr. Dekker. This is one of those psychology books providing proof of a theory that seems obvious but requires evidence for people to embrace the reality and not follow false science.
George Polykratis
33 reviews, 26 followers
May 15, 2023
Recommend to everyone. It is up to us to stop perpetuating the old view of human error and to establish a just culture. This book will help you see accidents around us (or the lack thereof) in a different light. Even philosophically we can benefit from this book, just by replacing "normalization of deviance" with "normalization of moral deviance" and noticing our instincts to externalize and personalize blame (evil). Also, some parts of this and Todd Conklin's book 'Pre-Accident Investigations' reminded me of Roy Baumeister's book 'Evil', if you compare the system of an organization to ourselves:
You do not have to give people reasons to be violent, because they already have plenty of reasons. All you have to do is take away their reasons to restrain themselves. Even a small weakening of self-control might be enough to produce a rise in violence. Evil is always ready and waiting to burst into the world.

A system already has all the necessary components to fail.
Harry Harman
803 reviews, 17 followers
September 9, 2021
If you want to understand ‘human error,’ see the unfolding world from the point of view of people inside the situation, not from the outside or from hindsight.

Fitts and Jones argued, we should change the tools, fix the environment in which we make people work, and by that we can eliminate the errors of people who deal with those tools. Skill and experience, after all, had little influence on “error” rates: getting people trained better or disciplined better would not have much impact. Rather change the environment, and you change the behavior that goes on inside of it. Note also how Fitts and Jones did not call these episodes “failures.” Instead, they used the neutral term “experiences.” We could all learn a lot from their insights, their understanding, their open-mindedness and their moral maturity.

The focus on ‘human error� very quickly becomes a focus on humans as the cause of safety trouble, and on humans as the targets for intervention. But this has long been shown to be a limited safety endeavor, as getting rid of one person does not remove the conditions that gave rise to the trouble they got into.

The point of a New View ‘human error� investigation is not to say where people went wrong (that much is easy). The point is to understand why they thought they were doing things right; why it made sense to them at the time.
Jan D.
166 reviews, 15 followers
May 3, 2018
A systemic view on human errors. The title is actually a bit misleading, since the book states that “human error” is caused by an interplay of incentives, culture, authority/responsibility and co-existing pressures, and rarely by single people who behave wrong.

Summary:

Dekker's argument is that "should not have done" makes sense only in hindsight, with the information an outsider has after the error has happened. Most people want to do their job right, and thus they usually do what makes sense in their situation, weighing the options and the different pressures (economic, social, safety…).

Dekker argues that "Human Error", "Action X was wrong", "We should just have…" cover up the complex ways in which problematic situations build up and, particularly, how the larger organizational system is tied to them: errors can tell you more about the organization and its problems than about the "wrong" individual. A person used a wrong tool and as a result the machine broke? "Human Error" and shoddy work! Or rather it is because of this: the workers can go home when work is done, their tools are stored far away, so they clean up before the shift ends so they can leave. If now, unexpectedly, more work comes in, they will use whatever tools are at hand, otherwise everyone suffers a loss of wages. The latter is more interesting, more understandable than "human error". It is also less comfortable for the organization, which now plays a part in the problems rather than that one badly behaving person.

This is something that can't be quick-fixed by more procedures (how to interpret them?), technology (how does it work? when?) and more training (since the people close to the error might not be the source of it).

Instead, Dekker suggests a slow approach, starting with a detailed, observable-facts based account of the error with the interpretations of those who were close to it and why their actions made sense to them at that time, followed by a review of how the organization can learn from it.
Eugene Sedy
19 reviews
May 30, 2021
The book's description says it well. In most large organizations, someone is often chosen to investigate problems, events, catastrophes and the like, and just maybe they are not well enough trained and experienced to provide meaningful information about the event. Or pressures from one or more departments in the company cause the analysis to be shortened to the point of being fairly useless. When one hears that the cause was human error, you can be sure that somewhere along the line someone got tired of looking deeper into the organization, or was prevented from doing so. Dekker's decades of experience are relied upon to provide numerous examples which should illuminate readers and help them renew their interest in providing meaningful analyses of problems.
133 reviews, 1 follower
September 22, 2022
Note the quotation marks around "Human Error" in the title - that's what the book is about. A lot of safety or after-incident investigations result in a "human error" (or the equally pernicious "loss of situational awareness") attribution. The result is that somebody gets fired or more safety posters are put up in the workplace, but the underlying problems persist. Sidney Dekker makes a case for a new view of safety - going deeper than the easy "human error" answer to uncover the corporate or systemic factors that contributed to the outcome. This book relies on incidents from aviation and transportation, but the points made can also apply to IT professionals concerned with availability and reliability.
Miguel Palhas
61 reviews, 7 followers
July 28, 2020
The first half of this book is highly insightful, explaining how to reason about human errors and frame them in a productive way, in order to learn and improve, rather than just blame whoever was at the center of it.
It's a nice follow up to some of my previous readings on systems thinking, a topic I'm really getting into.

Towards the second half, the book goes into details about how organizations work, and some other topics that are not as relevant to me in particular. It also felt a bit repetitive at some points, explaining the same ideas over and over. As such, I ended up only skimming through that part.
William "Spig"
134 reviews
November 27, 2020
Coming into the book I thought this was set up to be an analytical book on how to do an effective safety investigation, how to root out and deal with fringe behavior. This is a leadership book. I've talked much about being accountable for leadership decisions for everything under your umbrella, but this book helped me go deeper and see that the culture of doing business on an aircraft carrier is my responsibility. Fatigue and professionalism are battles, but good solid leadership owns the problem first rather than reaching for punishments. Why does procedural non-compliance happen? My fault for not communicating well enough. Great book. Recommend it for use at CO/XO school as well as the ASO class.
Emanuele Gemelli
611 reviews, 16 followers
May 31, 2021
This is, what, the 5th book I have read from Dekker, and I believe it is the least accomplished of the lot. As always, there is a lot of meat to roast, because the matter is huge and very complex and it is not easy to present it in a field guide. Do not get me wrong, there are some very useful sections in the book which can inspire some good ways to reduce the gap between the Sharp End and the Blunt End. Since I recommended two books by the author to one of my colleagues, I recommend reading this and then "Drift into Failure", which is much more complex, but also more complete. Still a to-read for any safety professional.
Dustan Woodhouse
Author, 8 books, 224 followers
July 2, 2021
Human error is an effect. Not the cause.

Takeaways:

Human error is a byproduct of the system in which we operate.

Nobody shows up to do things wrong, to make mistakes. People want to do things right.

Human error cannot be the conclusion to an investigation, it can only ever be the starting point.

Reconstruct the situation and circumstances in which the person ‘at fault’ found themselves. Do your best to eliminate hindsight. It’s not a helpful viewpoint.

There’s rarely a quick fix.

Human errors are a byproduct of an often overly complex system. Thus the root of the problem is often complex, and there may be multiple incentives in play to minimize reporting of actual problems.

Gordon
641 reviews
December 27, 2019
4.5 stars. This is relevant to all private and public organizations that deal with human error as a safety issue (although perhaps equally applicable to quality assurance in general). Extremely insightful and practical in its explanations of why things can go wrong, how we tend to focus on the irrelevant and induce greater error over time, and much more. Its recommendations should be widely implemented in leader and manager education. I would highly recommend it to military leaders in particular.
