Aeronautical decision-making, or ADM, wasn’t a big, formal deal back in the prehistoric times when I was doing my primary flight training. It was present, nonetheless, in many hangar-flying sessions and private discussions with other, more-experienced pilots. “Don’t run out of gas” and “Don’t mess around with weather” were chief among their warnings and war stories. Those cautions remain as valid today as they were then, of course.
While I’ve never run out of fuel, or even been forced to make a precautionary landing to top off, the same can’t be said of many other pilots. On the other hand, I’ve often diverted well out of my way, or delayed or cancelled trips, thanks to weather I simply didn’t feel I could handle. As a result, it could be said I’ve made good decisions. But all that’s in the past—what about the next set of decisions you and I will make? Will they be good ones or bad ones? What goes into aeronautical decision-making, and how can we improve it?
What Is ADM?
The FAA’s Pilot’s Handbook of Aeronautical Knowledge (PHAK), FAA-H-8083-25A, defines aeronautical decision-making thusly: “It is a systematic approach to the mental process used by pilots to consistently determine the best course of action in response to a given set of circumstances. It is what a pilot intends to do based on the latest information he or she has.” This definition acknowledges what I’ll call the “human element” in ADM—or any decision-making, for that matter—in which various factors are at play.
Because humans are involved, however, what might otherwise be considered an opportunity for Spock-like, unemotional logic is fraught with human characteristics: stress (both physical and psychological), fatigue and overconfidence likely among them. And those are only the internal factors affecting our decisions. External factors, those over which we may have no control, can include peer pressure, work schedules, weather, aircraft capabilities and much more. Considered broadly, these are known as “human factors” and much has been written, here and elsewhere, about ways to mitigate or eliminate their impact on how we make decisions.
If we looked at the worldwide airline industry and how it has improved its safety record in recent years, one of the things that would jump out at us is the high level of standardization and the overwhelming lack of situations in which a single person makes a decision determining a flight’s outcome. Training in cockpit resource management—the effective use of all available resources, including the second pilot, dispatch personnel and the company’s operating standards—has contributed to improvements in the safety of scheduled operations.
In personal aviation, we have little of that: We’re often the pilot, copilot, dispatch office, baggage handler, flight attendant and fueler. Instead of having someone off whom we can bounce a question or from whom we can seek advice, we’re often forced to make decisions on our own. That’s one thing that seemingly attracts people to become pilots—the individual nature of serving as pilot in command—but it hasn’t worked that well for us over the years: As the accident record shows, we’re not too good at it. Fuel management is an area in which we, frankly, suck. Avoiding poor weather—or, perhaps, refusing to fly into it—is another one where we fall down. The sidebars on pages 10 and 11, respectively, have some of the sad details.
Don’t Do Anything Stupid
In response, the FAA has, over the years, developed a detailed training regimen designed to instill in pilots ways to make reasonable and safe aeronautical decisions. Acronyms like PAVE, TEAM, CARE and DECIDE are a part of this effort, along with the “Five-P Checklist” and the “3P Model.” But I’ve never formally used these tools and don’t know anyone else who has, including primary students I’ve mentored over the years.
I’m not denigrating these training methods, which try to instill in pilots the knowledge and ability to make good decisions. They may work well for some and certainly serve as a means by which ADM tasks can be broken down into easily understood portions, dissected and perhaps reassembled into an empirical explanation of the series of choices we make throughout a flight. And the scenario-based training methods implemented in publications like the FAA’s Risk Management Handbook (FAA-H-8083-2) or Advisory Circular AC 60-22, Aeronautical Decision Making, reasonably can be shown to have had a favorable impact over the years.
But they are a rather formal method of instilling in pilots the need to think about the consequences of their actions and make good decisions. I suppose “Don’t do anything stupid” only goes so far, especially when too many pilots are trained and earn their certificates in environments lacking experienced mentors or seasoned instructors.
One reason these methods aren’t readily seen at the FBO or in the pilot lounge is the time it takes to apply them correctly. That’s true whether we’re trying to meet a schedule, decide how much fuel we need or deal with an in-flight emergency. As the FAA’s PHAK admits, “In an emergency situation, a pilot might not survive if he or she rigorously applied analytical models to every decision made; there is not enough time to go through all the options.” Instead, familiarity is key to handling urgent situations successfully. Familiarity, of course, comes from experience, something we’ll get back to shortly.
Meanwhile, it seems to me peer pressure is a big reason we don’t see pilots sitting down in the FBO lounge to apply the FAA’s formal ADM criteria: They’d get laughed off the airport by other pilots. I call that phenomenon the “Right Stuff Syndrome”: A Type A personality, perhaps not nearly as experienced as the subject pilot and certainly less conscientious about his or her flying, heaps ridicule on our hero for daring to work through the process in public. The subject, who is at least less of a Type A personality than his antagonist, succumbs to the perceived ridicule and foregoes applying the ADM criteria to his proposed flight. How do you think that will work out?
As the PHAK also states, “Good decisions result when pilots gather all available information, review it, analyze the options, rate the options, select a course of action and evaluate that course of action for correctness.” But, as that section on dynamic decision making also goes on to note, “In some situations, there isn’t always time to make decisions based on analytical decision-making skills.”
In other words, what the FAA refers to as “naturalistic” or “automatic decision-making”—the kind pilots often are forced to make in the cockpit’s dynamic environment—can be accomplished by quickly imagining “how one or a few possible courses of action in such situations will play out.” The PHAK also notes that in urgent situations, “Experts take the first workable option they can find.”
This concept starts to expose the unfortunate conundrum GA often faces when considering accident prevention: Experienced pilots are involved in relatively few accidents, but gaining that experience exposes them to a greater number of opportunities to have an accident. Somewhere between the two extremes—the inexperienced neophyte who must stop and consider in detail every aspect of a proposed flight and the “ace of the base” who’s been there, done that—lies the average pilot who has seen a lot but occasionally encounters a new weather, logistics or operational situation. Our average pilot may have seen low IFR, in-flight icing and/or squall lines before, but knew enough at the time to stay on the ground. Little by little, however, he or she gains enough experience—perhaps only by getting scared a few times—to know more or less reflexively what to expect from changing weather, mechanical issues and/or headwinds, make appropriate decisions quickly and live to fly another day.
Setting Experienced Examples
So…how do we instill in less-experienced pilots, those “who don’t know what they don’t know,” the wisdom gained from hundreds of hours and multiple years of flying? How to get them to the point where their “automatic decision-making” is safe, correct and accurate?
One place to start is stressing the FAA’s ADM curricula throughout all aspects of training, whether involving primary students or ATP candidates. Many instructors and schools do that already—in part because the FAA requires it in the practical test standards. But the rest of us, who aren’t training for a new ticket or rating and who have amassed those hundreds of hours and years of experience, also need to set examples by implementing ADM concepts more visibly.
As an example, I recently delayed a quick, late afternoon flight from one Florida coast to the other as a cold front moved through my departure airport. Weather at my destination was much improved from a few hours earlier and Nexrad clearly showed fast-moving cells bearing down on me. By killing some time at the FBO, I could wait out the storms and launch in the clearer air they left behind. The catches involved surface winds and darkness at my relatively narrow, short and poorly lit destination. But I’d much rather trade the uncontrollable risks imposed by thunderstorms for what I considered the lesser challenge of a nighttime crosswind landing. It all worked out, of course.
I waited until the storms passed before walking out to the airplane, pre-flighting and climbing in. As the engine warmed, I set up the cockpit and dialed in ground control. A familiar N-number and voice came over the headset: It was a close friend, similarly experienced and equipped, launching for home, as I was. He took off before me and, as I taxied out, I spotted another well-experienced friend landing. Each of us had evaluated the weather in the same way, deciding to wait it out before launching for home. We each had set good examples by making appropriate and safe decisions based on our experience, and completed our missions safely.