An Aviation Safety Staff Report
Most of us are familiar with the term graveyard spiral, and the vast majority of us know how it can happen. The screaming spiral dive to oblivion typically begins with a subtle and unobtrusive entry into a bank, at a roll rate below one's ability to detect angular acceleration. But what if we told you that there is more to the graveyard spiral than most people realize? This does not involve any esoteric physics factoids, and we're not going to subject you to a biomedical lecture either; this is just something that could save your life someday, simply because you armed yourself with a little knowledge.
Each year, a not-insignificant number of pilots dissipate the last few minutes of their lives tracing the path of this graceful curve as they futilely attempt to maintain altitude after they lose outside visual reference, instead hastening their terrifying plunge into eternity via that aptly named ignominious exit. The fact is, a substantial percentage are condemned not just by ignorance, but also by a misguided precedent in the very design and presentation of the gyroscopic instrument itself. In this case, the culprit is the artificial horizon, otherwise known as the attitude indicator (AI).
Setting The Stage
A little knowledge can be a dangerous thing. What brought down John F. Kennedy, Jr.'s Saratoga in July 1999, however, probably was something besides hubris and having more money than brains.
This same human factors issue has also claimed its share of jaded professionals, as well as many an innocent passenger. An Air India 747 dove into the Arabian Sea on the evening of January 1, 1978, despite tens of thousands of hours of crew experience and triple-redundant systems, and it did so probably because, in a moment of bewilderment, its captain confused the moving horizon line with his airplane's wings, instead of the gyroscopically stabilized backdrop to their relative motion. The same phenomenon is also suspected of being involved in the crash of USAir 427, the B-737-300 accident near Pittsburgh in 1994, as well as others.
Here's the setup: Start with a moment's inattention and an airplane that slowly rolls at a rate below the vestibular threshold of its human pilot. The next time the pilot sees the attitude indicator, it shows a bank. He or she rapidly applies aileron to recover, but it feels as though the airplane is rolling into a bank, instead of out of one. This illusion is compelling enough, under circumstances of stress or confusion, to precipitate a reversal of previously ingrained reactions.
But because of the inherent conflicts between comparatively unerring gyroscopic rigidity and our own limited sensory abilities, it is critical that an attitude indicator show the clearest possible relationship between the airplane and horizon symbols and what happens in the real world. The predominant school of thought when these instruments were first designed and manufactured, two-thirds of a century ago, was that the artificial horizon should be a porthole through which we see a symbolic analog of the horizon (e.g., we roll right, the horizon rolls left). Research has shown, however, that our instinctive expectation is that a display element should move in the same direction as our control input.
Of course, fixating on one instrument, especially in a high-stress situation, has gotten plenty of pilots, and their passengers, into a great deal of trouble. Any time a pilot flying solely by reference to instruments becomes confused about the aircraft's attitude, he should cross-reference the AI to other instruments. For example, if the AI is showing a right bank, and both the turn coordinator and directional gyro show a constant heading, something is amiss.
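The cross-check described above amounts to a simple consistency test: a real bank should show up on the turn coordinator and the directional gyro as well, not just on the AI. A minimal sketch of that logic follows; the function name and the thresholds are illustrative assumptions, not values from any real avionics system.

```python
# Hypothetical sketch of the AI cross-check described above.
# All names and threshold values are illustrative assumptions.

def ai_cross_check(ai_bank_deg: float,
                   turn_rate_deg_s: float,
                   heading_change_deg_s: float) -> bool:
    """Return True if the three indications agree, False if something is amiss."""
    banked = abs(ai_bank_deg) > 5.0              # AI shows a meaningful bank
    turning = (abs(turn_rate_deg_s) > 0.5        # turn coordinator shows a turn
               or abs(heading_change_deg_s) > 0.5)  # DG heading is drifting
    # In coordinated flight, a bank and a turn should go together;
    # a banked AI with a steady heading means one instrument is lying.
    return banked == turning
```

If the check fails with the AI showing a bank and the other two steady, suspect the AI; if all three agree, believe them over your inner ear.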
There are two sides to this story: In the study of man-machine interactions, the proper modeling of human cognitive functioning is essential for creating displays that are least likely to be misread, and thus least likely to contribute to loss of situational awareness and loss of life. In the case of an AI, if the earth is chosen as the point of reference, the horizon is stable and it is the aircraft pictogram that moves. In human factors language, this is called an exocentric view.
The alternative, which is where the aircraft pictogram stands still and the horizon rotates, is called an egocentric view. This view provides a more direct cognitive mapping of the previously mentioned porthole theory.
However, as also mentioned, this violates an almost instinctive expectation regarding compatibility between display elements and what actually moves in the real world. (Don't get hung up on the ten-dollar buzzwords, by the way. All exocentric means is that the action of an object is depicted as seen from another object's point of view, and egocentric just means the action of an object is depicted as seen from that object's own point of view.)
So what could be done about it? Ever since Elmer Sperry patented the artificial horizon in Great Britain in 1911 (which, incidentally, was first demonstrated in an aircraft back in 1916, not in that now more famous 1929 flight by then-Lt. Jimmy Doolittle), arguments have gone back and forth as to which approach was better. Those in favor of the horizon bar remaining faithful to the real world's horizon won out, and that's what we fly with today.
However, the perceptual-motor problems associated with it have been the subject of much experimental attention. The best solution that has emerged thus far is, in its simplest form, a combination of the two views.
If it were incorporated in an electromechanical display, it would look like the bottom image in the sidebar at left, which shows the same bank, to the right. This represents a compromise between the orientational and dynamic models: for rapid rolls above a certain threshold, where perception of motion dominates, the moving-part principle applies, and the airplane symbol moves. For gradual turns, the horizon moves.
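The compromise described above can be sketched as a simple decision rule: which symbol moves depends on how fast the airplane is rolling. This is only a conceptual illustration; the function name and the threshold value are assumptions for the sketch, not figures from any certified instrument.

```python
# Hypothetical sketch of the hybrid display logic described above.
# The threshold value is an illustrative assumption.

ROLL_RATE_THRESHOLD = 5.0  # deg/sec; assumed perceptual threshold for motion

def hybrid_display_motion(roll_rate_deg_s: float) -> str:
    """Decide which display element moves for a given roll rate.

    Rapid rolls move the airplane symbol, matching the pilot's instinctive
    control-display expectation; slow drifts move the horizon, preserving
    the conventional 'porthole' presentation.
    """
    if abs(roll_rate_deg_s) > ROLL_RATE_THRESHOLD:
        return "airplane symbol moves"  # dynamic (control-compatible) mode
    return "horizon moves"              # orientational (porthole) mode
```

The design choice here is frequency separation: abrupt motion, which the pilot both commands and perceives, is mapped to the symbol he controls, while slow changes, which he cannot sense, are shown against a stable earth reference.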
What To Do?
So, what can you do about this phenomenon? Well, for now, nothing. Just become aware of it, and stay aware of it. Keep it in the back of your mind the next time you're in a milk bottle.
For those non-instrument pilots out there, if you ever find yourself in the soup, remember what you just read. If by now you've begun to entertain the notion that this isn't some Lilliputian pettifoggery over which end of a hard-boiled egg to crack open, well, bravo! This is important material here.
And don't forget to cross-check your AI with the other instruments in your panel, like the turn coordinator/turn and bank and/or the heading indicator. Fixating on any single instrument, especially one with a history, will usually take you only as far as the accident scene.