The entire April 2011 issue of Harvard Business Review is devoted to “the F word”! Failure, that is. How to understand it, learn from it, and recover from it…
Most people don’t learn from success (we simply neglect to investigate it), and don’t know what to learn from failure, or how… In that context, this collection of articles, each focusing on a different aspect of failure, is a great read.
In the lead article, ‘Understanding Failure’, Amy Edmondson argues that failure falls into three categories:
- Preventable ones in predictable operations, which usually involve deviating from spec
- Unavoidable ones in complex systems, which may arise from unique combinations of needs, people, and problems
- Intelligent ones at the frontier, where “good” failures occur quickly and on a small scale, providing the most valuable information
Amy goes on to recommend five practices for leaders to build a psychologically safe environment that learns from failure:
- Frame the work accurately, so there is a shared understanding of the types of failure that can occur in your context (the three categories mentioned above)
- Embrace messengers who come forward with bad news, concerns, and questions. They need to be rewarded rather than shot
- Acknowledge limits, by being open about what you don’t know, mistakes you’ve made, and what you can’t get done alone
- Invite participation, asking people to help detect and analyse failures and to propose intelligent experiments
- Set boundaries and hold people accountable. People feel safer when leaders are clear about which acts are blameworthy
Too often, pilots are conducted under optimal conditions rather than representative ones, so they can’t show what won’t work!
In another article, ‘Why Leaders Don’t Learn From Success’, Francesca Gino and Gary Pisano quip that “Success can make us believe that we are better decision makers than we are”. The culprits: fundamental attribution errors and overconfidence bias!
They recommend that companies should implement systematic after-action reviews to understand all the factors that led to a win, and test their theories by conducting experiments even if “it ain’t broke”…
In the article titled ‘How to Avoid Catastrophe’, Catherine Tinsley and her co-authors recommend a focus on near misses: near misses preceded every disaster they studied, and were – you guessed it – ignored! Quite perversely, near misses are often viewed as a sign that systems are resilient and working well 😉
These authors recommend seven strategies to recognise and learn from near misses:
- Be on increased alert when time or cost pressures are high
- Watch for deviations in operations from the norm
- Uncover the root causes of the deviations
- Make decision makers accountable for near misses
- Envision worst-case scenarios
- Be on the look-out for near misses masquerading as successes
- Reward individuals for exposing near misses
Go get it… there are many more interesting articles.