
The High Velocity Edge: Part 2

Mar 2021

This episode continues the series on Steven Spear’s 2009 book The High Velocity Edge.

The previous episode told the story of Mrs Grant. She died in hospital when she was given the wrong medicine. Individual workers did not kill Mrs Grant. The process that permitted workers to mistake one medicine for another killed Mrs Grant.

The staff could not see that one medicine could be mistaken for another, and someone died as a result. All it would have taken was for someone to say “Hey, these vials could kill someone! Let’s do something about it.” For that to even happen, though, organizations must be able to see their problems.

This is the entry point into the four capabilities described in The High Velocity Edge. High velocity organizations identify problems, solve them, then stop them from recurring. Over time, this ongoing process leads to better results.

Spear introduces the four capabilities by describing multiple organizations. I’ll focus on two of my favorites: Alcoa and the US Navy nuclear propulsion program.

The year is 1987. Alcoa is an aluminum manufacturer, and a career there was risky: workers had a 40% chance of being seriously injured at least once over 25 years.

Research showed that workers were injured when situations made it easy to get hurt and hard to be safe.

No one at Alcoa believed they had deliberately designed processes that made it hard to be safe. Of course they believed that: they did not know how to design processes that made it easy to be safe and hard to get hurt. They assumed they had done everything right from the start, with all risks and safety concerns accounted for.

Alcoa hired Paul O’Neill as its new CEO in 1987. He began with an unorthodox goal: zero injuries to employees, contractors, and visitors. Why zero? Zero injuries would mean perfect processes built on perfect knowledge of how to do the work. Anything more than zero implied missing knowledge or ignorance, and either had to be rectified.

O’Neill established a policy that he must be personally notified within 24 hours of any injury or near miss. Injuries or near misses were happening up to seven times a day at this point.

The policy ensured fresh information flowed to the top of the organization. People tend to forget details as time passes, but those details may be key to understanding what happened, so they must be reported as quickly as possible.

There was also a second rule: O’Neill must receive a report within 48 hours on the causes and on what was being done to prevent the injury from happening again.

These two rules created a feedback loop. The reports revealed opportunities for improvement. People immediately put countermeasures in place to prevent future occurrences. Over time, the organization built up a body of knowledge on how to design processes that made it easy to be safe and hard to get hurt. That knowledge spread around Alcoa. Eventually a core cadre understood this process, then set about developing the same approach in others.

When O’Neill joined, the risk of injury over a 25-year period was 40%. Twenty years later, it was less than 2%. These safety improvements were not at the expense of quality, yield, efficiency, or even cost. In fact, the safety improvements made Alcoa more profitable.

The point is that no one can design a perfect system in advance while planning for every possibility. However, teams can discover great systems and keep learning how to improve them.

In other words: perfect systems are not designed. They are discovered.

Alcoa’s story demonstrates each of the four capabilities:

  1. Seeing problems as they occur
  2. Swarming and solving problems
  3. Spreading new knowledge
  4. Developing capabilities one, two, and three in others.

In the next episode, we’ll look at how Admiral Rickover demonstrated these same capabilities years earlier in the US Navy.