In this series' first installment (“The Problem With Flight Training,” March 2008), we identified a few of the systemic errors and omissions committed during flight training, and how they feed into typical aviation accidents. We dealt primarily with issues pertaining to the mechanics of flying an airplane. In this second of three articles, we'll look at some of the psychological aspects involved.
A lot of educational material has been generated in recent years on aeronautical decision making, hazardous attitudes and cockpit resource management. The FAA has been actively promoting the Perceive-Process-Perform (P-P-P) risk management decision path as well. Perceiving risk in the P-P-P model is aided with the PAVE checklist; processing levels of risk is facilitated with the CARE checklist; and performing risk management is prompted by the TEAM checklist (see the sidebar on page 6 for more).
Risk & Maneuvering Flight
Meanwhile, the concept of “teachable moments,” instances where students can plainly see how certain knowledge or skill components apply in real-world scenarios, goes hand-in-hand with the P-P-P approach. The goal is to enhance safety by expanding the pilot's situational awareness, that is, the pilot's mental ability to gather, integrate and then act on data from myriad (and often rapidly changing) information streams.
Recall the symbolic accidents introduced in Part I, both of which involved maneuvering flight:
Oroville, Calif., October 10, 2005: Two pilots were fatally injured in an inadvertent stall/spin. The NTSB probable cause determination: failure to maintain adequate airspeed while performing a 180-degree turnaround.
Manhattan, N.Y., October 11, 2006: Two pilots were killed when an airplane crashed into a high-rise apartment building. The probable causes: inadequate planning, judgment and airmanship in the performance of a 180-degree turn.
The AOPA Air Safety Foundation found that 27 percent of the fatal accidents over a recent 10-year period occurred during the maneuvering phase of flight. Fatal maneuvering flight mishaps frequently culminated in a stall/spin. Forty-one percent of the fatal maneuvering crashes during the period 1993-2001, for example, ended in stall/spins. Almost three decades earlier, stall/spins occurred in 54 percent of fatal maneuvering accidents.
Another study found that turning flight preceded 60 percent of the fatal stall accidents in cases where the pre-accident maneuver was known. A Canadian study found that 59 percent of stall/spins there resulted from turning flight at slow airspeed. That study also assessed the risk of serious injury or death when turning back to the runway following an engine failure as eight times greater than when proceeding straight ahead.
Risk factors associated with maneuvering flight have long been well known. But as pointed out last month, the fact that pilots generally have been left unaware of the elevator's true control function unquestionably has compounded the risk involved when performing critical turning maneuvers.
The push to improve understanding and awareness of risk is certainly needed. How successful risk-management programs will be in the long run, however, depends partly on a) whether instructors will devote the necessary time and effort to the subject, and b) whether their charges will proactively use risk management tools throughout their flying careers. Yet the secret to success on this front (aside from ensuring that pilots are correctly taught what really controls what in an airplane) largely remains unaddressed: human nature.
We are not unprejudiced when it comes to evaluating risk. We have a built-in bias, tending to frame our analyses in terms of the obvious losses. We are also predisposed to selecting the alternative with a less certain outcome over one that has a more certain outcome, even if the alternative with less certainty poses a greater risk. Unless we are aware of these quirks and force ourselves to think in terms of maximizing benefits, the degree of certainty of the perceived losses will drive our decision making.
Engine failure on takeoff? Instinct immediately tells us we will damage the airplane if we continue straight ahead. Clearly the perception of “loss” associated with breaking the airplane is high. But if we turn around, we might avoid the damage. Even though the probability of successfully turning around is low, and failure means certain death and destruction, it's the uncertainty that tempts pilots to gamble with it. This very uncertainty continues to give the turnaround maneuver an air of legitimacy it simply does not deserve.
Focusing on the obvious losses may be our native thought process, but it's exactly the wrong thought process in an airplane. Reducing risk when flying is about maximizing survivability. From that viewpoint, the question always boils down to, “Which option provides the greatest opportunity for survival?” Proceeding straight ahead wins hands-down. So why does the obsession with turnarounds persist?
The Oroville Accident
The instructor in this accident had previously taken an emergency maneuvers course. Under “Critical Flight Operations-Engine Failure on Takeoff,” the lesson plan in that course contained the following ground-school bullet points:
Identify glide window, a slice of pie extending at a 45-to-60-degree angle on either side of the nose.
Controlled landing within this window (i.e., land straight ahead regardless of terrain or obstacles).
Best survivability = low, slow and in the landing attitude.
Turning back consumes lots of altitude; more bank means higher drag, AOA, stall speed; stall/spin potential; ground rush distraction.
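The relationships behind that last bullet point are worth quantifying. The following is a minimal sketch of the idealized level-turn arithmetic, load factor n = 1/cos(bank) and turning stall speed = wings-level stall speed x sqrt(n); the 50-knot wings-level stall speed is a hypothetical trainer value, not a figure from either accident aircraft:

```python
import math

def load_factor(bank_deg):
    """G-load required to hold altitude in a coordinated, level banked turn."""
    return 1.0 / math.cos(math.radians(bank_deg))

def turning_stall_speed(vs_level_kt, bank_deg):
    """Stall speed in a level turn; it grows with the square root of load factor."""
    return vs_level_kt * math.sqrt(load_factor(bank_deg))

# Hypothetical trainer stalling at 50 knots wings-level:
for bank in (30, 45, 60):
    print(f"{bank:2d} deg bank: {load_factor(bank):.2f} g, "
          f"stall {turning_stall_speed(50, bank):.0f} kt")
# 30 deg bank: 1.15 g, stall 54 kt
# 45 deg bank: 1.41 g, stall 59 kt
# 60 deg bank: 2.00 g, stall 71 kt
```

The jump from 45 to 60 degrees of bank illustrates the bullet's warning: load factor climbs from roughly 1.4 g to 2.0 g, and the stall speed rises by more than 40 percent over wings-level flight.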
The flying portion included several simulated engine failures during climbs followed by 180-degree turnarounds. The simulations, however, were conducted at altitude in the practice area. Consistent with the intent of the lesson, the purpose of these turnarounds was to highlight not only the altitude lost during the process, but also the difficulties involved in executing a 180-degree turn during an actual emergency.
Why did the Oroville instructor subsequently incorporate turnbacks from low altitude into the training program he was teaching? It would be easy to chalk it up to a tactical error on his part. He should have known better, right? But would such an assessment be completely fair?
Consider that half of the instructor's 6600 flight hours came while flying in the military. Additionally, his airplane time was listed as 300 hours single-engine, but 3000 hours multi-engine. Could the overwhelming proportion of experience flying with the redundancy of multiple engines have influenced his thinking? Perhaps the instructor was conflicted about the turnaround issue, or unconvinced of its dangers? And what did the instructor's nearly 1000-hour “student” believe about turnbacks? We can't know any of these answers for sure.
We do know, however, that turnbacks receive their share of attention. Admonitions against turning around are often offset by those who tacitly or overtly advocate its use. From last month's article, we know of at least one FAA inspector who demanded its demonstration, with catastrophic consequences. We occasionally read articles in magazines and see Internet postings on how to perform the turnback. At least one commercially available video shows expert pilots completing turnbacks close to the ground.
The Possible “Impossible” Turn, published by no less prestigious an organization than the AIAA, is the holy grail of turnback proponents. And two prominent aircraft owners' groups promoted and taught low-altitude turnbacks to their members during pilot proficiency clinics, until fatal accidents necessitated changes to those training policies.
No consistent, united front is presented on this matter. Thus, pilots largely are left to choose for themselves, as if a choice really exists. Until we stop treating the turnaround as though it deserves parity in the debate, pilots will continue to be in a quandary about their options. Consequently, unnecessary fatalities will continue to happen.
The Manhattan Crash
The combined verifiable flight time of the pilots involved in the Manhattan accident was nearly 1000 hours; the actual flight experience, in all likelihood, was higher. The CFI on board was among those who taught the PIC how to fly, but the accident flight was listed as a pleasure flight. Neither pilot had significant time in make and model.
The airspace boundaries of the East River VFR Corridor near the point of the crash: 2100 feet wide, capped at 1100 feet msl, with Class B airspace walling off the end straight ahead. The physical boundaries in the vicinity: water below, an 1800-foot overcast above, low-rise skyline to the East (pilot's right), high-rise skyline to the West (pilot's left). What might P-P-P risk management tools have revealed during preflight planning? The sidebar on the opposite page provides one answer.
On one hand, risk analysis certainly would have justified the decision to abort flying up the corridor. On the other hand, the mitigation strategies could be deemed sufficient to offset the risks. Unfortunately, it appears only one strategy was executed: the airplane was flown at a relatively slow 97 knots.
Understanding human nature, we can make a reasonable guess about what may have transpired moments before the crash. Snap judgments would have been made at the terminus of the corridor. Slightly elevated stress levels would not have been unusual given the circumstances. The airplane was also flying 400 feet away from the East boundary. Was it a conscious decision not to hug the east side of the corridor? Or was the 13-knot, right crosswind aloft allowed to drift the airplane toward the center of the corridor? Regardless, 20 percent of the available maneuvering room was stockpiled on the right side of the airplane. Having to evaluate options quickly, it would have been natural to revert to framing risk in terms of the immediately apparent losses.
Risk assessment focusing on losses would have viewed options other than a left turn as a guaranteed loss, the “loss” being the inevitable FAA enforcement action for violating airspace. After all, the pilots earlier had acknowledged to ATC that they would remain clear of New York Class B. Moreover, the left turn, toward the high-rises to the west, was the expected maneuver. And even with the reduced turning distance, the outcome of this course of action would still be viewed with a degree of uncertainty: Maybe, just maybe, the airplane could be turned around. Mental inertia and the pressure to do something would further fixate thoughts on that left turn.
Instead, what if all of the preflight risk mitigation strategies had been brought to bear? Hugging the East side of the corridor would have required crabbing into the crosswind. The degree of crab would have provided valuable information not only about the strength of the wind, but also about its impact on a left turn. Recall turns around a point from your student-training days, for instance: When turning to the downwind side, the bank must be steeper to prevent the wind from pushing the airplane downrange, away from the ground reference.
From the spot where the airplane actually started turning, the NTSB calculated a 50-degree bank and a sustained 1.6 g pull on the elevator (the turn control) would have just eked out the turn. To boot, another 10 degrees of bank and one-half g more pull would have been available to the pilots before stalling at 97 knots. That same steep turn, started instead hard against the East boundary, would have made it around with adequate margin. Even starting 400 feet away from the East boundary, the pilots could have angled the airplane to the right first, and then performed a teardrop turn to the left.
But why work so hard? Less demanding mitigation strategies could still have been employed, none of which would have required exotic feats of flying. For example, recognizing the crosswind and with the awareness that no other traffic had been reported in the area, the pilots could have tracked diagonally across the corridor to the West side, then turned right (easterly, into the wind). A minimum of 35 degrees of bank would have done the job in that case; the 40-45 degrees of bank used in the fatal left turn would have been perfect here.
Perhaps the lowest risk of all of the strategies (aside from deciding not to fly in the corridor in the first place) would have been to call up ATC and request to transition through Class B. That option was never off the table, even though the pilots had previously said they would remain clear of it. “Unless the situation dictates otherwise” was implicit in the earlier dialogue. In fact, ATC told the NTSB that such requests were not uncommon. And if ATC had denied the request for some odd reason, the ultimate authority for the operation of the airplane still rested with the pilots. A scenario with an unacceptable risk of death certainly qualified as an emergency; thus, the pilots could have acted in their own best interests. Any airspace issues could have been resolved once safely back on the ground.
As is true in so many cases, the Oroville and Manhattan accidents had nothing to do with whether or not the airplanes had glass panels or TCAS, or whether or not the airplanes were equipped with parachutes or were approved for aerobatics. Previous experience seemed inconsequential, too: in one cockpit, more than 1000 hours of flight time; in the other, at least 7600 hours. Pilot error was blamed. Institutionalized errors and omissions in stick-and-rudder and situational awareness skills, however, certainly played major roles.
Unless we better equip pilots to think and act in three dimensions, and unless we strive to overcome our own prejudices when it comes to assessing risk, the safety dividend promised by active risk management may not be as great as hoped.
In the last part of this series, we'll discuss problems at the flight instructor level and offer keys to a deeper understanding of flight.