HBR.ORG APRIL 2011
REPRINT R1104G
FAILURE: LEARN FROM IT
How to Avoid
Catastrophe
Failures happen. But if you pay attention to
near misses, you can predict and prevent crises.
by Catherine H. Tinsley, Robin L. Dillon, and
Peter M. Madsen
MOST PEOPLE think of “near misses” as harrowing
close calls that could have been a lot worse—when
a firefighter escapes a burning building moments
before it collapses, or when a tornado miraculously
veers away from a town in its path. Events like these
are rare narrow escapes that leave us shaken and
looking for lessons.
But there’s another class of near misses, ones that
are much more common and pernicious. These are
the often unremarked small failures that permeate
day-to-day business but cause no immediate harm.
People are hardwired to misinterpret or ignore the
warnings embedded in these failures, and so they often go unexamined or, perversely, are seen as signs
that systems are resilient and things are going well.
Yet these seemingly innocuous events are often harbingers; if conditions shift slightly, or if luck does not
intervene, a crisis erupts.
Consider the BP Gulf oil rig disaster. As a case
study in the anatomy of near misses and the consequences of misreading them, it’s close to perfect.
In April 2010, a gas blowout occurred during the cementing of the Deepwater Horizon well. The blowout
ignited, killing 11 people, sinking the rig, and triggering a massive underwater spill that would take
months to contain. Numerous poor decisions and
dangerous conditions contributed to the disaster:
Drillers had used too few centralizers to position the
pipe, the lubricating “drilling mud” was removed
too early, managers had misinterpreted vital test results that would have confirmed that hydrocarbons
were seeping from the well. In addition, BP relied on
an older version of a complex fail-safe device called
a blowout preventer that had a notoriously spotty
track record.
Why did Transocean (the rig’s owner), BP executives, rig managers, and the drilling crew overlook
the warning signs, even though the well had been
plagued by technical problems all along (crew members called it “the well from hell”)? We believe that
the stakeholders were lulled into complacency by
a catalog of previous near misses in the industry—
successful outcomes in which luck played a key role
in averting disaster. Increasing numbers of ultradeep wells were being drilled, but significant oil
spills or fatalities were extremely rare. And many
Gulf of Mexico wells had suffered minor blowouts
during cementing (dozens of them in the past two
decades); however, in each case chance factors—
favorable wind direction, no one welding near the
leak at the time, for instance—helped prevent an
explosion. Each near miss, rather than raise alarms
and prompt investigations, was taken as an indication that existing methods and safety procedures worked.

Catherine H. Tinsley (tinsleyc@georgetown.edu) is an associate professor at Georgetown’s McDonough School of Business, in Washington, DC. Robin L. Dillon (rld9@georgetown.edu) is an associate professor at Georgetown’s McDonough School of Business. Peter M. Madsen (petermadsen@byu.edu) is an assistant professor at Brigham Young University’s Marriott School of Management, in Provo, Utah.
For the past seven years, we have studied near
misses in dozens of companies across industries
from telecommunications to automobiles, at NASA,
and in lab simulations. Our research reveals a pattern: Multiple near misses preceded (and foreshadowed) every disaster and business crisis we studied,
and most of the misses were ignored or misread.
Our work also shows that cognitive biases conspire
to blind managers to the near misses. Two in particular cloud our judgment. The first is “normalization of deviance,” the tendency over time to accept
anomalies—particularly risky ones—as normal.
Think of the growing comfort a worker might feel
with using a ladder with a broken rung; the more
times he climbs the dangerous ladder without incident, the safer he feels it is. For an organization,
such normalization can be catastrophic. Columbia
University sociologist Diane Vaughan coined the
phrase in her book The Challenger Launch Decision
to describe the organizational behaviors that allowed a glaring mechanical anomaly on the space
shuttle to gradually be viewed as a normal flight
risk—dooming its crew. The second cognitive error
is the so-called outcome bias. When people observe
successful outcomes, they tend to focus on the results more than on the (often unseen) complex processes that led to them.
Recognizing and learning from near misses isn’t
simply a matter of paying attention; it actually runs
contrary to human nature. In this article, we examine near misses and reveal how companies can detect and learn from them. By seeing them for what
they are—instructive failures—managers can apply
their lessons to improve operations and, potentially,
ward off catastrophe.
Roots of Crises
Consider this revealing experiment: We asked
business students, NASA personnel, and space-industry contractors to evaluate a fictional project
manager, Chris, who was supervising the launch of
an unmanned spacecraft and had made a series of
decisions, including skipping the investigation of a
potential design flaw and forgoing a peer review, because of time pressure. Each participant was given
one of three scenarios: The spacecraft launched
without issue and was able to transmit data (success
outcome); shortly after launch, the spacecraft had a
problem caused by the design flaw, but because of
the way the sun happened to be aligned with the vehicle it was still able to transmit data (near-miss outcome); or the craft had a problem caused by the flaw
and, because of the sun’s chance alignment, it failed
to transmit data and was lost (failure outcome).
How did Chris fare? Participants were just as
likely to praise his decision making, leadership abilities, and the overall mission in the success case as
in the near-miss case—though the latter plainly succeeded only because of blind luck. When people observe a successful outcome, their natural tendency
is to assume that the process that led to it was fundamentally sound, even when it demonstrably wasn’t;
hence the common phrase “you can’t argue with
success.” In fact, you can—and should.
Organizational disasters, studies show, rarely
have a single cause. Rather, they are initiated by
the unexpected interaction of multiple small, often
seemingly unimportant, human errors, technological failures, or bad business decisions. These latent
errors combine with enabling conditions to produce
a signi!cant failure. A latent error on an oil rig might
be a cementing procedure that allows gas to escape;
enabling conditions might be a windless day and a
welder working near the leak. Together, the latent error and enabling conditions ignite a deadly firestorm.
Near misses arise from the same preconditions, but
in the absence of enabling conditions, they produce
only small failures and thus go undetected or are
ignored.
Latent errors often exist for long periods of time
before they combine with enabling conditions to
produce a significant failure. Whether an enabling
condition transforms a near miss into a crisis generally depends on chance; thus, it makes little sense to
try to predict or control enabling conditions. Instead,
companies should focus on identifying and fixing latent errors before circumstances allow them to create a crisis.

“Every strike brings me closer to the next home run.”
BABE RUTH, BASEBALL PLAYER

Idea in Brief
Most business failures, such as engineering disasters, product malfunctions, and PR crises, are preceded by near misses—close calls that, had it not been for chance, would have been worse.
Managers often misinterpret these warning signs because they are blinded by cognitive biases. They take the near misses as indications that systems are working well—or they don’t notice them at all.
Seven strategies can help managers recognize and learn from near misses. Managers should: (1) be on alert when time or cost pressures are high; (2) watch for deviations from the norm; (3) uncover the deviations’ root causes; (4) hold themselves accountable for near misses; (5) envision worst-case scenarios; (6) look for near misses masquerading as successes; and (7) reward individuals for exposing near misses.
Oil rig explosions offer a dramatic case in point,
but latent errors and enabling conditions in business
often combine to produce less spectacular but still
costly crises—corporate failures that attention to latent errors could have prevented. Let’s look at three.
Bad Apple. Take Apple’s experience following its launch of the iPhone 4, in June 2010. Almost
immediately, customers began complaining about
dropped calls and poor signal strength. Apple’s initial
response was to blame users for holding the phone
the wrong way, thus covering the external antenna,
and to advise them to “avoid gripping [the phone] in
the lower left corner.” When questioned about the
problem by a user on a web forum, CEO Steve Jobs
fired back an e-mail describing the dropped calls as a
“non issue.” Many customers found Apple’s posture
arrogant and insulting and made their displeasure
known through social and mainstream media. Several filed class action lawsuits, including a suit that
alleged “fraud by concealment, negligence, intentional misrepresentation and defective design.” The
reputation crisis reached a crescendo in mid-July,
when Consumer Reports declined to recommend
the iPhone 4 (it had recommended all previous versions). Ultimately Apple backpedaled, acknowledging software errors and offering owners software
updates and iPhone cases to address the antenna
problem.
The latent errors underlying the crisis had long
been present. As Jobs demonstrated during a press
conference, virtually all smartphones experience
a drop in signal strength when users touch the external antenna. This flaw had existed in earlier
iPhones, as well as in competitors’ phones, for years.
The phones’ signal strength problem was also well
known. Other latent errors emerged as the crisis
gained momentum, notably an evasive PR strategy
that invited a backlash.
That consumers had endured the performance issues for years without significant comment was not
a sign of a successful strategy but of an ongoing near
miss. When coupled with the right enabling conditions—Consumer Reports’ withering and widely
quoted review and the expanding reach of social
media—a crisis erupted. If Apple had recognized
consumers’ forbearance as an ongoing near miss
and proactively fixed the phones’ technical problems, it could have avoided the crisis. It didn’t, we
suspect, because of normalization bias, which made
the antenna glitch seem increasingly acceptable;
and because of outcome bias, which led managers
to conclude that the lack of outcry about the phones’
shortcomings reflected their own good strategy—
rather than good luck.
Speed Warning. On August 28, 2009, California
Highway Patrol officer Mark Saylor and three family
members died in a fiery crash after the gas pedal of
the Lexus sedan they were driving in stuck, accelerating the car to more than 120 miles per hour. A 911
call from the speeding car captured the horrifying
moments before the crash and was replayed widely
in the news and social media.
[Exhibit: Toyota Pedal Problems. Percentage of customer complaints having to do with speed control, 1995–2010, Toyota Camry versus Honda Accord. Errors in process or product design are often ignored, even when the warning signs clearly call for action. The more times small failures occur without disaster, the more complacent managers become. Source: National Highway Traffic Safety Administration]

[Exhibit: JetBlue Tarmac Trouble. Weather delays of two hours or more per 1,000 flights, 2003–2007, for American, JetBlue, Southwest, US Airways, United, Delta, and Alaska Airlines. Source: Department of Transportation’s Bureau of Transportation Statistics]
Up to this point, Toyota, which makes Lexus, had
downplayed the more than 2,000 complaints of unintended acceleration among its cars that it had received
since 2001. The Saylor tragedy forced the company
to seriously investigate the problem. Ultimately,
Toyota recalled more than 6 million vehicles in late
2009 and early 2010 and temporarily halted production and sales of eight models, sustaining an estimated $2 billion loss in North American sales alone
and immeasurable harm to its reputation.
Complaints about vehicle acceleration and speed
control are common for all automakers, and in most
cases, according to the National Highway Traffic
Safety Administration, the problems are caused by
driver error, not a vehicle defect. However, beginning in 2001, about the time that Toyota introduced
a new accelerator design, complaints of acceleration
problems in Toyotas increased sharply, whereas
such complaints remained relatively constant for
other automakers (see the exhibit “Toyota Pedal
Problems”). Toyota could have averted the crisis if
it had noted this deviation and acknowledged the
thousands of complaints for what they were—near
misses. Here, too, normalization of deviance and
outcome bias, along with other factors, conspired
to obscure the grave implications of the near misses.
Only when an enabling condition occurred—the Saylor family tragedy and the ensuing media storm—did
the latent error trigger a crisis.
Jet Black and Blue. Since it began operating, in
2000, JetBlue Airways has taken an aggressive approach to bad weather, canceling proportionately
fewer flights than other airlines and directing its
pilots to pull away from gates as soon as possible in
severe weather so as to be near the front of the line
when runways were cleared for takeoff—even if that
meant loaded planes would sit for some time on the
tarmac. For several years, this policy seemed to work.
On-tarmac delays were not arduously long, and customers were by and large accepting of them. Nonetheless, it was a risky strategy, exposing the airline
to the danger of stranding passengers for extended
periods if conditions abruptly worsened.
The wake-up call came on February 14, 2007. A
massive ice storm at New York’s John F. Kennedy International Airport caused widespread disruption—
but no carrier was harder hit than JetBlue, whose
assertive pilots now found themselves stuck on the
tarmac (literally, in some cases, because of frozen
wheels) and with no open gates to return to. Distressed passengers on several planes were trapped
for up to 11 hours in overheated, foul-smelling cabins
with little food or water. The media served up angry
first-person accounts of the ordeal, and a chastened
David Neeleman, JetBlue’s CEO, acknowledged on
CNBC, “We did a horrible job, actually, of getting our
customers off those airplanes.” The airline reported
canceling more than 250 of its 505 flights that day—
a much higher proportion than any other airline. It
lost millions of dollars and squandered priceless
consumer loyalty.
For JetBlue, each of the thousands of flights that
took off before the competition during previous
weather delays was a near miss. As the airline continued to get away with the risky strategy, managers
who had expressed concern early on about the way
the airline handled flight delays became complacent,
even as long delays mounted. Indeed, the proportion of JetBlue weather-based delays of two hours
or more roughly tripled between 2003 and 2007,
whereas such delays remained fairly steady at other
major U.S. airlines (see the exhibit “JetBlue Tarmac
Trouble”).
Rather than perceiving that a dramatic increase
in delays represented a dramatic increase in risk,
JetBlue managers saw only successfully launched
flights. It took an enabling condition—the ferocious
ice storm—to turn the latent error into a crisis.
Recognizing and Preventing Near Misses
Our research suggests seven strategies that can help
organizations recognize near misses and root out the
latent errors behind them. We have developed many
of these strategies in collaboration with NASA—an
organization that was initially slow to recognize the
relevance of near misses but is now developing enterprisewide programs to identify, learn from, and
prevent them.
1. Heed High Pressure. The greater the pressure
to meet performance goals such as tight schedules, cost, or production targets, the more likely
managers are to discount near-miss signals or misread them as signs of sound decision making. BP’s
managers knew the company was incurring overrun
costs of $1 million a day in rig lease and contractor
fees, which surely contributed to their failure to recognize warning signs.
The high-pressure effect also contributed to the
Columbia space shuttle disaster, in which insulation
foam falling from the external fuel tank damaged the
shuttle’s wing during liftoff, causing the shuttle to
break apart as it reentered the atmosphere. Managers had been aware of the foam issue since the start
of the shuttle program and were concerned about it
early on, but as dozens of flights proceeded without
serious mishap, they began to classify foam strikes
as maintenance issues—rather than as near misses.
This classic case of normalization of deviance was
exacerbated by the enormous political pressure the
agency was under at the time to complete the International Space Station’s main core. Delays on the
shuttle, managers knew, would slow down the space
station project.
Despite renewed concern about foam strikes
caused by a particularly dramatic recent near miss,
and with an investigation under way, the Columbia
took off. According to the Columbia Accident Investigation Board, “The pressure of maintaining the
flight schedule created a management atmosphere
that increasingly accepted less-than-specification
performance of various components and systems.”
When people make decisions under pressure,
psychological research shows, they tend to rely on
heuristics, or rules of thumb, and thus are more
easily influenced by biases. In high-pressure work
environments, managers should expect people to
be more easily swayed by outcome bias, more likely
to normalize deviance, and more apt to believe that
their decisions are sound. Organizations should
encourage, or even require, employees to examine
their decisions during pressure-filled periods and
ask, “If I had more time and resources, would I make
the same decision?”
2. Learn from Deviations. As the Toyota and
JetBlue crises suggest, managers’ response
when some aspect of operations skews from
the norm is often to recalibrate what they consider
acceptable risk. Our research shows that in such
cases, decision makers may clearly understand the
statistical risk represented by the deviation, but
grow increasingly less concerned about it.
We’ve seen this effect clearly in a laboratory setting. Turning again to the space program for insight,
we asked study participants to assume operational
control of a Mars rover in a simulated mission. Each
morning they received a weather report and had to
decide whether or not to drive onward. On the second day, they learned that there was a 95% chance
of a severe sandstorm, which had a 40% chance of
causing catastrophic wheel failure. Half the participants were told that the rover had successfully
driven through sandstorms in the past (that is, it had
emerged unscathed in several prior near misses);
the other half had no information about the rover’s
luck in past storms. When the time came to choose
whether or not to risk the drive, three quarters of
the near-miss group opted to continue driving; only
13% of the other group did. Both groups knew, and
indeed stated that they knew, that the risk of failure
was 40%—but the near-miss group was much more
comfortable with that level of risk.
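As a back-of-the-envelope illustration (our own reading of the two probabilities the participants saw, not a calculation reported in the study), the overall chance of losing the rover by driving on combines the storm forecast with the conditional failure rate:

    P(\text{wheel failure}) = P(\text{storm}) \times P(\text{failure} \mid \text{storm}) = 0.95 \times 0.40 = 0.38 \approx 40\%

However the figure is read, the stated risk was identical for both groups; only their near-miss histories differed.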
Managers should seek out operational deviations
from the norm and examine whether their reasons
for tolerating the associated risk have merit. Questions to ask might be: Have we always been comfortable with this level of risk? Has our policy toward this
risk changed over time?
3. Uncover Root Causes. When managers identify deviations, their reflex is often to correct the symptom rather than its cause. Such was Apple’s response when it at first suggested that customers address the antenna problem by changing the way they held the iPhone.
Little Near Misses and Small-Scale Failures
We’ve used dramatic cases such as oil spills and shuttle disasters to illustrate how near misses can foreshadow huge calamities. But near misses are relevant to managers at all levels in their day-to-day work, as they can also presage lesser but still consequential problems. Research on workplace safety, for example, estimates that for every 1,000 near misses, one accident results in a serious injury or fatality, at least 10 smaller accidents cause minor injuries, and 30 cause property damage but no injury. Identifying near misses and addressing the latent errors that give rise to them can head off even the more mundane problems that distract organizations and sap their resources.
Imagine an associate who misses deadlines and is chronically late for client meetings but is otherwise a high performer. Each tardy project and late arrival is a near miss; but by addressing the symptoms of the problem—covering for the employee in a variety of ways—his manager is able to prevent clients from defecting. By doing this, however, she permits a small but significant erosion of client satisfaction, team cohesiveness, and organizational performance. And eventually, a client may jump ship—an outcome that could have been avoided by attending to the near misses. Your organization needn’t face a threat as serious as an oil spill to benefit from exposing near misses of all types and addressing their root causes.
NASA learned this lesson the hard way as well, during its 1998 Mars Climate Orbiter mission. As the
spacecraft headed toward Mars it drifted slightly
off course four times; each time, managers made
small trajectory adjustments, but they didn’t investigate the cause of the drifting. As the $200 million
spacecraft approached Mars, instead of entering into
orbit, it disintegrated in the atmosphere. Only then
did NASA uncover the latent error—programmers
had used English rather than metric units in their
software coding. The course corrections addressed
the symptom of the problem but not the underlying
cause. Their apparent success lulled decision makers into thinking that the issue had been adequately
resolved.
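To make the units mix-up concrete, here is a minimal, hypothetical sketch in Python of the kind of producer/consumer mismatch the investigation described; the function names and the sample value are our own illustration, not NASA’s actual code.

```python
# Illustrative sketch only, not NASA's code: the Mars Climate Orbiter loss was
# traced to thruster data produced in English units (pound-force seconds) but
# consumed by navigation software that expected metric units (newton seconds).
# A missing conversion like the one below is the latent error; each small
# trajectory correction treated the symptom, not the cause.

LBF_S_TO_N_S = 4.448222  # 1 pound-force second = 4.448222 newton seconds


def thruster_impulse_lbf_s() -> float:
    """Hypothetical producer: reports an impulse in pound-force seconds."""
    return 100.0  # sample value for illustration


def update_trajectory(impulse_n_s: float) -> float:
    """Hypothetical consumer: expects the impulse in newton seconds."""
    return impulse_n_s  # ...would feed the trajectory model


raw = thruster_impulse_lbf_s()
buggy = update_trajectory(raw)                     # latent error: ~4.45x too small
intended = update_trajectory(raw * LBF_S_TO_N_S)   # the missing unit conversion
print(f"buggy input: {buggy:.1f} N*s, intended input: {intended:.1f} N*s")
```

The point of the sketch is that the faulty value looks plausible on its own; only checking the producer’s units against the consumer’s expectation—the root cause—would expose it.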
The health care industry has made great strides
in learning from near misses and offers a model for
others. Providers are increasingly encouraged to
report mistakes and near misses so that the lessons
can be teased out and applied. An article in Today’s
Hospitalist, for example, describes a near miss at
Delnor-Community Hospital, in Geneva, Illinois.
Two patients sharing a hospital room had similar
last names and were prescribed drugs with similar-sounding names—Cytotec and Cytoxan. Confused
by the similarities, a nurse nearly administered
one of the drugs to the wrong patient. Luckily, she
caught her mistake in time and filed a report detailing the close call. The hospital immediately separated the patients and created a policy to prevent
patients with similar names from sharing rooms in
the future.
4. Demand Accountability. Even when
people are aware of near misses, they tend
to downgrade their importance. One way to
limit this potentially dangerous effect is to require
managers to justify their assessments of near misses.
Remember Chris, the fictional manager in our
study who neglected some due diligence in his supervision of a space mission? Participants gave him
equally good marks for the success scenario and the
near-miss scenario. Chris’s raters didn’t seem to see
that the near miss was in fact a near disaster. In a
continuation of that study, we told a separate group
of managers and contractors that they would have
to justify their assessment of Chris to upper management. Knowing they’d have to explain their rating to
the bosses, those evaluating the near-miss scenario
judged Chris’s performance just as harshly as did
those who had learned the mission had failed—recognizing, it seems, that rather than managing well,
he’d simply dodged a bullet.
5. Consider Worst-Case Scenarios. Unless
expressly advised to do so, people tend not
to think through the possible negative consequences of near misses. Apple managers, for example, were aware of the iPhone’s antenna problems
but probably hadn’t imagined how bad a consumer
backlash could get. If they had considered a worst-case scenario, they might have headed off the crisis,
our research suggests.
In one study, we told participants to suppose that
an impending hurricane had a 30% chance of hitting
their house and asked them if they would evacuate.
Just as in our Mars rover study, people who were told
that they’d escaped disaster in previous near misses
were more likely to take a chance (in this case, opting
to stay home). However, when we told participants
to suppose that, although their house had survived
previous hurricanes, a neighbor’s house had been
hit by a tree during one, they saw things differently;
this group was far more likely to evacuate. Examining events closely helps people distinguish between
near misses and successes, and they’ll often adjust
their decision making accordingly.
Managers in Walmart’s business-continuity office clearly understand this. For several years prior
to Hurricane Katrina, the office had carefully evaluated previous hurricane near misses of its stores
and infrastructure and, based on them, planned
for a direct hit to a metro area where it had a large
presence. In the days before Katrina made landfall
in Louisiana, the company expanded the staff of its
emergency command center from the usual six to
10 people to more than 50, and stockpiled food, water, and emergency supplies in its local warehouses.
Having learned from prior near misses, Walmart
famously outperformed local and federal officials
in responding to the disaster. Said Jefferson Parish
sheriff Harry Lee, “If the American government had
responded like Walmart has responded, we wouldn’t
be in this crisis.”
6. Evaluate Projects at Every Stage. When
things go badly, managers commonly conduct postmortems to determine causes and
prevent recurrences. When they go well, however,
few do formal reviews of the success to capture its
lessons. Because near misses can look like successes,
they often escape scrutiny.
The chief knowledge officer at NASA’s Goddard
Space Flight Center, Edward Rogers, instituted a
“pause and learn” process in which teams discuss
at each project milestone what they have learned.
They not only cover mishaps but also expressly examine perceived successes and the design decisions
considered along the way. By critically examining
projects while they’re under way, teams avoid outcome bias and are more likely to see near misses
for what they are. These sessions are followed by
knowledge-sharing workshops involving a broader
group of teams. Other NASA centers, including the
Jet Propulsion Laboratory, which manages NASA’s
Mars program, are beginning similar experiments.
According to Rogers, most projects that have used
the pause-and-learn process have uncovered near
misses—typically, design flaws that had gone undetected. “Almost every mishap at NASA can be traced
to some series of small signals that went unnoticed
at the critical moment,” he says.
7. Reward Owning Up. Seeing and attending
to near misses requires organizational alertness, but no amount of attention will avert
failure if people aren’t motivated to expose near
misses—or, worse, are discouraged from doing so. In
many organizations, employees have reason to keep
quiet about failures, and in that type of environment
they’re likely to keep suspicions about near misses
to themselves.
Political scientists Martin Landau and Donald
Chisholm described one such case that, though it
took place on the deck of a warship, is relevant to
any organization. An enlisted seaman on an aircraft
carrier discovered during a combat exercise that he’d
lost a tool on the deck. He knew that an errant tool
could cause a catastrophe if it were sucked into a jet
engine, and he was also aware that admitting the mistake could bring a halt to the exercise—and potential
punishment. As long as the tool was unaccounted for,
each successful takeoff and landing would be a near
miss, a lucky outcome. He reported the mistake, the
exercise was stopped, and all aircraft aloft were redirected to bases on land, at a significant cost.
Rather than being punished for his error, the seaman was commended by his commanding officer
in a formal ceremony for his bravery in reporting
it. Leaders in any organization should publicly reward staff for uncovering near misses—including
their own.
TWO FORCES conspire to make learning from near
misses difficult: Cognitive biases make them hard
to see, and, even when they are visible, leaders tend
not to grasp their significance. Thus, organizations
often fail to expose and correct latent errors even
when the cost of doing so is small—and so they miss
opportunities for organizational improvement before disaster strikes. This tendency is itself a type
of organizational failure—a failure to learn from
“cheap” data. Surfacing near misses and correcting
root causes is one of the soundest investments an organization can make.
HBR Reprint R1104G