Man Against Machine

The debate between human weather observers and automated weather stations has no clear winner


The consequences of poorly assessing the risks involved with flying are reflected in the number of weather-related accidents that consistently grace NTSB logs, even in the face of widely available real-time meteorological data. A study of the accident record demonstrates that the raging debate about human versus mechanical observers largely misses the point.

Sure, human observers are superior in most respects, and automated weather sometimes gives goofy and erroneous reports. On the other hand, automated weather stations allow observations from far more locations than would be affordable with staffed facilities. Both people and machines have strengths and weaknesses, but the more immediate point is that many of the pilots involved in weather-related accidents simply did not use the data that was available, regardless of the source.

Perhaps automated weather reports are sometimes ignored because the technology is mistrusted. That seems to be a reasonable explanation for at least some accidents that could have been avoided if only the pilots had properly evaluated the automated information readily available. The technology at work is hardly rocket science, but then, maybe sometimes it should be.

The Automated Weather Observing System (AWOS) and the Automated Surface Observing System (ASOS) differ primarily in their origin rather than in their capabilities. As a general rule, AWOS units are installed at airports that did not have a staffed weather observation facility and are typically funded through non-federal sources. (Confusing this neat distinction is the presence of older federal AWOS units, first installed in 1988.) ASOS, on the other hand, is a joint program among the FAA, the National Weather Service and the Department of Defense. The program uses federal funds to install the equipment at airports that, in most cases, were previously blessed with human observers.

The Hardware Solution
AWOS is a suite of sensors put in place to measure and broadcast weather data to meteorologists, pilots and anybody else interested in Mother Nature's most recent practical joke. An important point to remember is that AWOS does not predict weather, but simply reports current measurements.

To keep things complicated, six varieties of AWOS are available, each with different capabilities. AWOS I measures wind (speed, gust, direction and variable direction), temperature, dew point, altimeter setting and density altitude. AWOS II adds visibility, variable visibility, precipitation and day/night (for those not sure if the sun is above or below the horizon). AWOS III further adds sky condition, cloud height and cloud type.

Beyond this, the acronyms start piling up. AWOS III-P adds precipitation discrimination to AWOS III, and AWOS III-T adds thunderstorm and lightning detection. Finally, in a fit of alphabetical excess, AWOS III-P-T combines precipitation discrimination and lightning detection with all of the other AWOS III functions.

ASOS includes all the functions of an AWOS III, but in addition has sensors to identify the type of precipitation and to detect freezing rain and thunderstorms. In sum, ASOS is similar to AWOS III-P-T with freezing rain detection thrown in for good measure. Since ASOS and AWOS function similarly, we can take the ASOS sensor group as a general example for all automated weather stations.
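For readers who think in lists rather than prose, here is a minimal sketch of the capability tiers just described, written in Python. The structure and parameter names are purely illustrative, not any official FAA data format; each tier is simply built by adding parameters to the one below it, exactly as the text lays out.

```python
# Illustrative summary of automated-station capability tiers as described
# in the article. Not an official FAA data structure; each tier is the
# previous tier plus its additional parameters.

AWOS_I = {
    "wind speed", "wind gust", "wind direction", "variable wind direction",
    "temperature", "dew point", "altimeter setting", "density altitude",
}
AWOS_II = AWOS_I | {"visibility", "variable visibility", "precipitation", "day/night"}
AWOS_III = AWOS_II | {"sky condition", "cloud height", "cloud type"}
AWOS_III_P = AWOS_III | {"precipitation discrimination"}
AWOS_III_T = AWOS_III | {"thunderstorm/lightning detection"}
AWOS_III_P_T = AWOS_III_P | AWOS_III_T
# ASOS is roughly AWOS III-P-T with freezing rain detection thrown in.
ASOS = AWOS_III_P_T | {"freezing rain"}

if __name__ == "__main__":
    # What ASOS reports beyond a plain AWOS III.
    print(sorted(ASOS - AWOS_III))
```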

Eight sensors make up the complete package of measuring devices, and these are:

• The rain gauge, known affectionately as the tipping bucket
• The hygrothermometer, a fancy term for a temperature and dew point sensor
• The present-weather identifier
• Wind speed and direction sensors
• Ceilometer to measure cloud height
• Freezing rain sensor
• Thunderstorm sensor
• Visibility sensor

Automated weather systems use fundamentally different methods of data collection than do human observers. For temperature, pressure, dew point, wind and precipitation levels, both automated and human observers use a fixed location and time averaging. But for sky condition, visibility and present weather, humans use a fixed time, spatial-averaging technique while automated observation uses a fixed location, time-averaging method.

The time-averaging methods of AWOS and ASOS were designed to provide pilots with real-time weather observations when and where they need them: around the clock, without interruption, and right at the runway touchdown zone.
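To make the fixed-location, time-averaging idea concrete, here is a minimal sketch of how a single sensor's raw readings might be rolled up into a reported value. The 120-second window and five-second sampling rate are assumptions chosen for illustration, not figures taken from the AWOS or ASOS specifications.

```python
from collections import deque

# Sketch of fixed-location, time-averaged observation: one sensor reports
# raw values every few seconds, and the station reports the average over
# a trailing window. The 120-second window is an illustrative assumption.
class TimeAveragedSensor:
    def __init__(self, window_seconds=120):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, value) pairs

    def add_sample(self, timestamp, value):
        self.samples.append((timestamp, value))
        # Drop samples that have aged out of the averaging window.
        while self.samples and timestamp - self.samples[0][0] > self.window:
            self.samples.popleft()

    def report(self):
        if not self.samples:
            return None  # the dreaded "unavailable"
        return sum(v for _, v in self.samples) / len(self.samples)

# Example: wind speed sampled every 5 seconds for 3 minutes, with a gust
# arriving near the end; only the last two minutes count toward the report.
sensor = TimeAveragedSensor()
for t in range(0, 180, 5):
    sensor.add_sample(t, 8 + (3 if t > 150 else 0))
print(round(sensor.report(), 1))
```

A human observer, by contrast, trades this strict time discipline for spatial judgment, scanning the whole sky or horizon at a fixed moment rather than averaging one spot over time.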

Location is particularly important for visibility measurement, since the touchdown zone is of intense interest to a pilot in low weather. A human observer would likely be measuring visibility a fair distance from the touchdown zone. An additional advantage of automation is that visibility measurements are consistent from place to place, while reports from different human observers are more variable and subjective.

Finally, as a practical matter, automated weather stations can in most cases be put in place for less money than human observers. As a result, automated weather provides greater coverage than would otherwise be possible. The first ASOS was installed in Topeka, Kan., in 1991. Since then over 900 systems have been installed, from Alaska to the Everglades.

The Human Factor
In spite of these positive attributes, automated weather is not without problems. Sometimes, at critical moments, ASOS will report itself unavailable, which is not desirable when the tower is closed and cumulus are building rapidly.

Perhaps the most insidious problem is inaccurate readings. The chilled-mirror method of measuring dew point is plagued with problems. Wind sensors can be frozen in place, rendering them useless. Ceiling measurements cover only a small slice of sky, so absurd reports can be generated: a huge storm can be lurking near the airport while the station reports no clouds below 12,000 feet, or, conversely, a small cloud that happens to sit directly over the airport will yield a pessimistic report in what is in fact good weather.

Progress Marches On
An effort is underway to upgrade existing facilities to overcome some of these shortcomings, including using a new technology to measure dew point, installing wind sensors with no moving parts, and adopting new technologies to identify types of precipitation. New 25,000-foot ceilometers are in the works that would function better in adverse weather conditions.

The debate about the pros and cons of human and automated observation rages on with no end in sight. In the midst of this debate, there is little data to support a conclusion that one is better than the other. Anecdotes abound of pilots making visual approaches when the automated weather was reporting ceilings near ILS minimums or automated weather reporting a very different wind direction than the wind sock.

The accident record does contain some spotty information in which weather reporting systems have been a factor. Failure to obtain a weather briefing has been a relatively common factor, but in those cases it doesn't matter how the data would have been collected.

Consider three accidents in which a discrepancy between the automated weather report and the actual weather was blamed. In the first, the pilot of a Maule at Englewood, Colo., said neither ATIS, AWOS nor the control tower mentioned wind gusts, but a gust caused the airplane to ground loop. At the time of the accident, AWOS was reporting 8-knot winds. In a second case, the AWOS in Provo, Utah, gave a report of calm winds, but the pilot of a Cessna 172 blamed a sudden gust of wind just as he was about to touch down for his loss of control and eventual nose-over. Finally, the AWOS at Somerset, Pa., reported a ceiling of 1,600 feet overcast, visibility 3.5 miles and winds from 260 degrees at 10 knots. But the pilot, who crashed when he was unable to find the airport, said that local weather reported 4,000 overcast, 5 miles visibility and winds from 180 degrees at 10 knots gusting to 15.

Given the paucity of similar examples in the database, however, discrepancies such as these seem to be the exception rather than the rule as a cause of accidents.

On the other hand, a review of the NTSB records shows that a more common class of accidents is caused by pilots who ignore accurate weather data. Accidents that have at least some connection to AWOS or ASOS fall into three main categories. First, pilots crash attempting to land downwind when the weather station clearly indicated a dangerous situation. Second, pilots run off the runway when the weather station reported a crosswind that exceeded either the aircraft's or the pilot's capabilities. Third, pilots fly VFR into IMC even after receiving an automated weather update warning of poor visibility or low ceilings.

Don't Shoot the Weatherman
All of these are well-known causes of grief, but what is surprising is that the weather information was readily available to the pilot prior to metal meeting ground.

Sometimes the weather reports paint a bleak picture the pilot chooses to ignore. A depressingly typical report demonstrates the problem of VFR into IMC when AWOS makes the pilot's folly clear. In one accident, an airplane crashed after its non-instrument-rated pilot departed St. Augustine, Fla., while the St. Augustine AWOS was reporting a ceiling of 500 feet and visibility of 5 miles. What possesses a non-instrument-rated pilot to depart into a ceiling of 500 feet?

In another case, near Mekoryuk, Alaska, an experienced pilot took off into weather that AWOS determined to include obscurations, low scattered and broken clouds, and visibility as low as 1 mile. The pilot said he would go take a look. The plane hit a hill at 500 feet about 4 miles from the airport. The pilot, who survived, stated he got tripped up in a whiteout.

Wind-related accidents are perhaps even more difficult to fathom given the availability of wind information. In La Grande, Ore., AWOS reported winds from 190 degrees at 29 knots with gusts to 40 knots. Yet a small Piper preparing to take off nosed over when a swirling gust of wind caused the plane to cartwheel on the runway. This report is typical of many scattered throughout the NTSB records in which pilots attempt to land or depart with wind gusts reported to be 20, 30 or 40 knots. Clearly the accident pilots are ignoring good data that should warn them off.

The danger of landing downwind also seems to be lost on a number of pilots. In a typical example, a pilot in Oelwein, Iowa, elected to land on the 4,001-foot-long runway 13 when AWOS was reporting winds from 290 degrees at 11 knots. Not surprisingly, the aircraft departed the end of the runway when attempting to land.
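Running the numbers shows just how plain the warning was. The sketch below computes headwind and crosswind components from a reported wind; the runway heading of 130 degrees is an assumption inferred from the runway number, not a surveyed value.

```python
import math

# Headwind/crosswind components from reported wind and runway heading.
# Positive headwind means wind on the nose; negative means a tailwind.
def wind_components(runway_heading_deg, wind_dir_deg, wind_speed_kt):
    angle = math.radians(wind_dir_deg - runway_heading_deg)
    headwind = wind_speed_kt * math.cos(angle)
    crosswind = wind_speed_kt * math.sin(angle)
    return headwind, crosswind

# Oelwein example: runway 13 (roughly 130 degrees, assumed from the runway
# number) with wind reported from 290 degrees at 11 knots.
head, cross = wind_components(130, 290, 11)
print(f"headwind {head:+.1f} kt, crosswind {cross:+.1f} kt")
# Roughly a 10-knot tailwind component: the hazard was right there in the report.
```

A tailwind component of about 10 knots on a 4,001-foot runway is exactly the kind of situation the automated report exists to flag.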

Surely the system of automated weather collection is flawed in many respects, and the judgment and experience of human observers will be impossible for a group of sensors to duplicate for the foreseeable future. But that reality becomes less significant when the accident record demonstrates that pilots are simply ignoring information that is already available and largely accurate.

Until the data available through AWOS and ASOS are used by pilots to their fullest – if not perfect – potential, the differences between human and automated weather observations remain largely academic.



-by Jeff Schweitzer

Jeff Schweitzer, an active aviation writer, owns a Piper Mirage.
