Editor's Log

July 2018 Issue


In a submission to our Unicom department on the opposite page, Jeff Schweitzer makes the point that risk “should only be taken when the potential benefit clearly exceeds the risk, and that cannot be known if half the equation is excluded from analysis.” In other words, to assess risk properly, we have to balance the equation by weighing the benefits of taking that risk; the exercise is really a risk/benefit analysis. He’s not wrong.

People wouldn’t fly personal aircraft—or participate in many other activities—if there weren’t benefits. That’s human nature. Some benefits we seek by taking risks are intangible and hard to quantify. Others can be readily identified and weighted. It’s a calculus we all employ daily in mundane ways. However, the problem isn’t that we fail to assess benefits when we analyze risk. Instead, the issue is the inaccurate values we assign on both sides of the equation.

Humans just aren’t very good at honestly analyzing and assigning value to intangibles. It’s one of the fundamental problems with a risk/benefit analysis, no matter its formality: We often fail to properly weight the variables on both sides of the equation. Whether we do so out of an honest inability to assign realistic values or by putting our thumb on the scale, the result is the same.


Think about it in the context of a pilot evaluating the risks and benefits of a proposed flight: When confronted with a severe weather risk that can’t easily be mitigated, the pilot has some choices. One is to cancel the flight, with one benefit being the ability to fly again some other time. Another is to reschedule or reroute the flight to avoid the risk, with the benefit of arriving safely, but perhaps later than planned. A third choice is to accept the risk in the belief the benefit of arriving at the destination as planned outweighs it. A good choice might be to park the airplane and buy an airline ticket. A bad choice might be to accept the risk and conduct the flight in the belief the consequences of flying into bad weather won’t materialize or won’t be as bad as forecast.

This latter choice is a textbook case of rationalization, which can be defined as an attempt to justify decisions we know are wrong. Rationalization distorts both sides of the risk/benefit equation. For example, we can convince ourselves that soft spots in a line of thunderstorms will provide a clear path to the other side, or that our skills and equipment minimize the risk. Meanwhile, the perceived benefit of arriving on time gets inflated by the bragging rights we anticipate among other pilots.

The numerical flight risk assessment tools Bob Wright wrote about in May aren’t immune to rationalization, either. After assigning risk values to a proposed flight and finding the result unacceptable, we’re inclined to go back and adjust those values until the equation balances.

When a risk/benefit equation doesn’t come out in our favor, the natural reaction is to change the variables until it aligns with our desired outcome. The NTSB’s files often describe the result.