Air France 447
What happens when pilots do not understand their own automation
By VastBlue Editorial · 2026-03-26 · 18 min read
Series: What Really Happened · Episode 5
Departure
On the evening of 31 May 2009, Air France Flight 447 pushed back from Gate 7 at Rio de Janeiro's Galeão International Airport, lifting off at 19:29 local time (22:29 UTC). The aircraft was an Airbus A330-203, registration F-GZCP, bound for Paris Charles de Gaulle with a planned flight time of roughly eleven hours. On board were 216 passengers and twelve crew members. The captain was Marc Dubois, 58 years old, with 10,988 total flying hours and 1,747 hours on the A330. The two first officers were David Robert, 37, with 6,547 hours and 4,479 on type, and Pierre-Cédric Bonin, 32, the least experienced of the three with 2,936 total hours and 807 on the A330.
The weather briefing for the flight was not unusual for the season. The Intertropical Convergence Zone — the ITCZ, a belt of convective weather that girdles the equator where the trade winds from each hemisphere collide — would be crossed during the night. Satellite imagery showed clusters of cumulonimbus activity along the route, some with cloud tops reaching above 50,000 feet, but this was normal for a South Atlantic crossing in June. Every crew flying the route encountered the ITCZ. The question was never whether there would be storms, but where the gaps between them would be.
The aircraft departed normally. At 01:55 UTC, now 1 June, Captain Dubois left the flight deck to take his scheduled rest period. Robert, the more experienced of the two first officers, took the left seat. Bonin took the right. This arrangement was standard — crew rest rotation on long-haul flights is required by regulation. What it meant in practice was that the two pilots now in control of the aircraft had neither the full authority nor the accumulated pattern-recognition of the captain.
Into the Weather
By 02:00 UTC the aircraft was cruising at Flight Level 350 — approximately 35,000 feet — and approaching the ITCZ. The weather radar on the A330 showed significant convective returns ahead. The crew discussed climbing to a higher level to improve their ride above the weather, but the outside air was warmer than standard and the aircraft, still heavy with fuel, had little performance in hand: its maximum recommended level lay only a few thousand feet above. Even at Flight Level 350 the aircraft was operating close to its performance ceiling. The margin between the maximum operating speed and the aerodynamic stall speed — known informally as the "coffin corner" — was narrow. This is normal for high-altitude cruise in a transport-category aircraft, but it means the aircraft has less tolerance for disruptions to its normal flight condition. A sudden deceleration of even a few knots, or a momentary increase in angle of attack from turbulence, can push the wing closer to the stall boundary than it would ever approach at lower altitudes.
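The narrowing of those margins can be sketched with standard-atmosphere arithmetic. This is a minimal illustration, assuming a placeholder 175-knot equivalent stall speed and a 0.86 Mach limit — generic physics values, not A330 performance data:

```python
import math

# Illustrative sketch of why high-altitude speed margins narrow ("coffin
# corner"). ISA constants are standard; the stall speed and Mach limit below
# are placeholder assumptions chosen only to show the trend.

A0_KT = 661.47            # speed of sound at sea level, knots
T0 = 288.15               # sea-level ISA temperature, K
LAPSE_PER_FT = 0.0019812  # ISA lapse rate, K per foot (6.5 K/km)
TROPOPAUSE_FT = 36089.0

def isa_theta_sigma(alt_ft):
    """ISA temperature ratio (theta) and density ratio (sigma) at altitude."""
    if alt_ft <= TROPOPAUSE_FT:
        theta = (T0 - LAPSE_PER_FT * alt_ft) / T0
        sigma = theta ** 4.2561
    else:  # isothermal stratosphere above the tropopause
        theta = 216.65 / T0
        sigma = (theta ** 4.2561) * math.exp(-(alt_ft - TROPOPAUSE_FT) / 20806.0)
    return theta, sigma

def speed_margin_kt(alt_ft, stall_eas_kt=175.0, mmo=0.86):
    """Gap in true airspeed between the Mach limit and the stall speed."""
    theta, sigma = isa_theta_sigma(alt_ft)
    stall_tas = stall_eas_kt / math.sqrt(sigma)  # stall TAS rises as air thins
    mmo_tas = mmo * A0_KT * math.sqrt(theta)     # Mach-limit TAS falls as it cools
    return mmo_tas - stall_tas

for alt in (10_000, 25_000, 35_000, 38_000):
    print(f"FL{alt // 100:03d}: margin ≈ {speed_margin_kt(alt):5.0f} kt")
```

The exact numbers are not the point; the monotonic shrinking of the gap with altitude is.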
At approximately 02:08, the aircraft's outside air temperature was around minus 44 degrees Celsius. The crew had dimmed the cockpit lights and were flying into an area of significant convective activity. Other flights in the region were deviating around the storm cells. An Air France A340 that had departed São Paulo shortly before AF447 reported severe turbulence and deviation. The crew of AF447 made lateral deviations of approximately twelve degrees to the left to avoid the worst of the weather returns on their radar, but they did not deviate far enough. They were flying into a mesoscale convective system — a vast, organised complex of thunderstorms — that extended across hundreds of kilometres of ocean.
At 02:10:05 UTC, the aircraft entered a zone of high-altitude ice crystals. Within seconds, the pitot tubes — the external probes that measure airspeed by comparing the pressure of air forced into them against the static ambient pressure — began to ice over. The A330 has three pitot tubes, each feeding an independent air data system. All three became obstructed within seconds of each other. When the pitot tubes stopped providing reliable airspeed data, the aircraft's flight computers could no longer determine how fast the aircraft was going. Airspeed is not a convenience metric in aviation. It is the foundational parameter on which nearly every automated function depends. Without it, the computers cannot calculate thrust targets, cannot maintain flight envelope protections, cannot command the autopilot to hold altitude or track.
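The pressure-comparison principle the pitot probes rely on is the textbook calibrated-airspeed relation. A minimal sketch, using standard sea-level constants rather than the A330's actual air-data computation:

```python
import math

# Sketch of the pitot-static principle described above: airspeed is derived
# from how much the ram pressure in the probe exceeds ambient static pressure.
# Standard subsonic calibrated-airspeed formula with sea-level ISA constants.

P0 = 101_325.0   # sea-level static pressure, Pa
A0_KT = 661.47   # sea-level speed of sound, knots

def calibrated_airspeed_kt(p_total, p_static):
    """Airspeed implied by the pressure rise a pitot tube senses."""
    qc = p_total - p_static               # impact (ram) pressure
    if qc <= 0:
        return 0.0                         # no pressure rise: no indicated speed
    return A0_KT * math.sqrt(5.0 * ((qc / P0 + 1.0) ** (2.0 / 7.0) - 1.0))

# Healthy probe: a ~10.5 kPa pressure rise reads as a cruise-like speed.
print(calibrated_airspeed_kt(111_800.0, 101_325.0))
# Iced probe: if ice chokes the inlet and the sensed pressure sags toward
# static, the computed speed collapses even though the aircraft's true
# speed is unchanged.
print(calibrated_airspeed_kt(101_400.0, 101_325.0))
```

This is why blocked probes do not merely degrade the reading — they make the computed speed meaningless, and everything built on it fails with it.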
What happened next occurred with terrible speed. At 02:10:05 the autopilot disconnected. The auto-thrust disconnected. The flight director bars — the symbols on the primary flight display that show the pilots where to point the aircraft to follow the commanded flight path — disappeared. A cascade of warnings and alerts filled the cockpit. The aircraft transitioned from Normal Law to Alternate Law.
This transition was the hinge on which the entire disaster turned. In Normal Law — the standard operating mode of the Airbus fly-by-wire system — the flight computers provide what is known as flight envelope protection. The pilot commands inputs through the sidestick, but the computers interpret those inputs and refuse to execute any command that would exceed the aircraft's structural or aerodynamic limits. In Normal Law, it is essentially impossible to stall an Airbus. The computers will not allow it. The pilot can pull the sidestick all the way back to the stop and hold it there, and the aircraft will pitch up to but not beyond the angle of attack at which the wing would stop generating lift.
In Normal Law, it is essentially impossible to stall an Airbus. The computers will not allow it. But when the flight computers lost reliable airspeed data, they could no longer guarantee the integrity of their own protections. They gave control back to the pilots — pilots who had been trained in a world where the computer always said no.
In Alternate Law, those protections are partially or fully removed. The computers still translate sidestick inputs into control surface movements, but they no longer enforce the envelope. There is no automatic stall protection. There is no automatic pitch trim at extreme attitudes. The aircraft will do what the pilot tells it to do, even if what the pilot tells it to do is aerodynamically lethal. The logic behind this design is sound: if the computers cannot trust their own data, they should not enforce limits based on that data. But it means that in precisely the moment when the pilots most need help, the system that normally provides it steps back.
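The difference between the two laws can be caricatured in a few lines of code — a toy model with invented numbers, not Airbus logic:

```python
# Toy illustration of the Normal/Alternate Law distinction described above.
# The angle-of-attack limit, stall angle, and stick-to-AoA mapping are all
# invented for illustration; the real system is vastly more complex.

ALPHA_MAX_DEG = 12.0    # hypothetical protected angle-of-attack limit
ALPHA_STALL_DEG = 14.0  # hypothetical stalling angle of attack

def commanded_aoa(stick_aft, law):
    """Map a sidestick pull in [0, 1] to a commanded angle of attack."""
    demand = stick_aft * 20.0               # full aft stick demands 20 degrees
    if law == "normal":
        return min(demand, ALPHA_MAX_DEG)   # envelope protection says no
    return demand                           # alternate law: the pilot gets it

# Full back stick, held to the stop:
print(commanded_aoa(1.0, "normal"))     # capped below the stalling angle
print(commanded_aoa(1.0, "alternate"))  # past the stalling angle
```

The design point the article makes is visible in the last two lines: the identical pilot input is survivable in one mode and lethal in the other.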
Four Minutes
Bonin was the pilot flying. His immediate response to the autopilot disconnection was to pull back on his sidestick. The nose pitched up. This is, in one sense, a trained response — when an aircraft encounters turbulence or an unexpected upset at high altitude, a gentle nose-up input can help maintain altitude. But Bonin's input was not gentle, and he did not release it. Within seconds the aircraft's pitch attitude had increased to approximately eleven degrees nose up. The aircraft began to climb. Its airspeed began to decay.
At 02:10:16 — eleven seconds after the autopilot disconnected — the stall warning activated for the first time. The synthetic voice sounded twice: "STALL, STALL." The cricket — a loud, repeating aural warning — accompanied it. The aircraft's angle of attack was increasing beyond the point at which the wing could sustain lift. The correct response to a stall warning in any aircraft, in every training programme ever written, is immediate and unambiguous: push the nose down, reduce the angle of attack, accelerate, recover. Bonin did not push the nose down. He continued to hold the sidestick back.
There is a critical detail embedded in the A330's stall warning logic that would prove devastating. The warning system is designed to suppress stall alerts when the measured airspeed falls below a minimum threshold of approximately 60 knots, on the assumption that an indicated airspeed that low is unreliable and the warning would be spurious. As the aircraft decelerated further, the airspeed readings intermittently dropped below this threshold. When they did, the stall warning stopped. When the airspeed readings momentarily recovered — as ice cleared from the pitot tubes in brief, partial bursts — the warning resumed. This created a perverse pattern: when Bonin relaxed his back-pressure slightly and the nose dipped, the airspeed readings rose, the data became more valid, and the stall warning sounded. When he pulled back harder and the aircraft decelerated further into the deep stall, the airspeed dropped below the suppression threshold and the warning went silent. The warning was loudest when the aircraft was closest to recovery and silent when it was deepest in the stall.
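The suppression logic reduces to a few lines. The 60-knot validity floor is from the BEA report; the angle-of-attack threshold and the sample values are invented for illustration:

```python
# Minimal model of the stall-warning suppression behaviour described above.
# The 60 kt validity threshold comes from the BEA report; AOA_WARN_DEG and
# the example states are illustrative assumptions.

SPEED_VALID_KT = 60.0
AOA_WARN_DEG = 10.0

def stall_warning(indicated_speed_kt, aoa_deg):
    """Warn of a stall only while the measured speed is considered valid."""
    if indicated_speed_kt < SPEED_VALID_KT:
        return False           # data deemed unreliable: warning suppressed
    return aoa_deg > AOA_WARN_DEG

# Deep stall, stick held aft: enormous AoA, indicated speed below the floor.
print(stall_warning(45.0, 40.0))   # silent in the deep stall
# Back pressure relaxed: nose drops, indicated speed recovers above 60 kt.
print(stall_warning(90.0, 20.0))   # warning sounds during partial recovery
```

The perverse pattern falls straight out of the logic: the move toward recovery re-validates the data and triggers the alarm, while the move deeper into the stall silences it.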
Robert, in the left seat, was trying to make sense of the situation. He could see the attitude indicator showing an extreme nose-up pitch. He could see the altitude increasing and then, as the aircraft lost energy, beginning to decrease. He could see the vertical speed indicator showing a climb rate that made no aerodynamic sense for the power setting. But Robert had a problem that was more fundamental than instrument interpretation: he did not know what Bonin was doing with his sidestick.
The Airbus sidestick design differs from the traditional control column found in Boeing and most other transport aircraft. In a Boeing, the control columns are mechanically linked — when the captain pushes forward, the first officer's column moves forward too. Both pilots can feel, at all times, what the other is doing. In an Airbus, the sidesticks are independent. They do not move in response to the other pilot's inputs. There is a priority system — either pilot can press a button to take sole authority — and there is a visual indication when both sticks are being moved simultaneously, but there is no tactile feedback. Robert could not feel that Bonin was holding his stick fully aft. And in the noise, confusion, and cognitive overload of an aircraft in crisis at 02:10 in the morning over the black Atlantic, Robert did not immediately register the visual "DUAL INPUT" warning.
At 02:11:43, Robert said: "What's happening?" He received no coherent answer. At 02:11:45, he said: "We've lost the speeds." He understood the pitot failure. But he still did not understand the aircraft's actual state. The altitude was now decreasing rapidly. The aircraft was in an aerodynamic stall, falling at a rate that would eventually exceed 10,000 feet per minute, with its nose still pitched approximately sixteen degrees above the horizon. It was, in aerodynamic terms, an aircraft falling flat — high nose attitude, massive sink rate, no effective lift from the wings.
Seconds later, Captain Dubois re-entered the cockpit. He had left the flight deck barely fifteen minutes earlier. He arrived to an aircraft in crisis, with alarms sounding, two first officers who were clearly struggling, and no immediately obvious explanation for what was happening. The cockpit voice recorder captured his first words: "What's happening?" Robert replied: "We don't know what's happening. We've tried everything." Dubois had been conscious for perhaps ten seconds. He had no mental model of how the aircraft had arrived at its current state. He needed to build one from scratch, in real time, in a cockpit that was screaming at him.
The captain re-entered the cockpit after barely fifteen minutes of rest. His first words: "What's happening?" The reply: "We don't know what's happening. We've tried everything."
CVR transcript, BEA Final Report
It took Dubois twenty seconds to realise the aircraft was stalling. At 02:13:40, he said: "Ten degrees pitch attitude." But he did not take control. He did not press the priority button on his sidestick. He attempted to coach from behind: "Go down... descend..." His instructions were ambiguous and incomplete. At no point did he explicitly command "push the nose down, we are stalled." At no point did he physically take over the aircraft. The social dynamics of the cockpit — the hierarchy, the reluctance to seize control from a colleague, the assumption that the pilot flying must have some understanding of the situation that you lack — operated as a barrier even as the ocean rushed upward.
At 02:14:28, just over four minutes after the autopilot disconnected, the aircraft struck the surface of the Atlantic Ocean at a forward speed of approximately 107 knots, a vertical descent rate of 10,912 feet per minute, and a pitch attitude of approximately 16.2 degrees nose up. Throughout the final descent the angle of attack had remained in excess of 35 degrees — roughly three times the angle at which an A330 wing ceases to produce useful lift. The aircraft was destroyed on impact. All 228 people on board were killed. The wreckage sank to the floor of the Atlantic at a depth of approximately 3,900 metres.
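Those impact figures imply a strikingly steep descent path. A quick check of the geometry, treating the 107-knot forward speed as purely horizontal and ignoring wind:

```python
import math

# Sanity check of the impact geometry from the figures in the text.
# Simplifying assumptions: forward speed is treated as horizontal speed,
# and wind is ignored.

FT_PER_NM = 6076.12

forward_kt = 107.0     # forward speed at impact, knots
sink_fpm = 10_912.0    # vertical descent rate, feet per minute

forward_fps = forward_kt * FT_PER_NM / 3600.0  # ~180.6 ft/s
sink_fps = sink_fpm / 60.0                     # ~181.9 ft/s

gamma_deg = math.degrees(math.atan2(sink_fps, forward_fps))
print(f"flight-path angle ≈ {gamma_deg:.1f} degrees below the horizon")
# With the nose ~16 degrees above the horizon while the flight path pointed
# ~45 degrees below it, the wing was meeting the air at an angle far beyond
# any stalling angle of attack.
```

The aircraft was descending almost as fast as it was moving forward: a flight path nearly 45 degrees below the horizon, with the nose still pitched up.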
The Search
The aircraft disappeared from radar contact at 02:14 UTC. It had been out of range of land-based radar for hours; its last known position was transmitted via the Aircraft Communications Addressing and Reporting System (ACARS), which had sent a series of automated fault messages in the minutes before the crash. The last ACARS message was transmitted at 02:14. After that, silence. The aircraft was somewhere in the equatorial Atlantic, in an area where the ocean floor is characterised by the rugged terrain of the Mid-Atlantic Ridge, with depths exceeding 4,000 metres.
Surface debris was found by the Brazilian Navy starting on 6 June, approximately 650 kilometres north-northeast of Fernando de Noronha. But the flight recorders were on the ocean floor. Without them, investigators could determine that the aircraft had crashed, but not why. The underwater search lasted nearly two years. Successive phases using towed sonar and autonomous underwater vehicles found nothing. It was not until April 2011 that the wreckage field was located by REMUS 6000 vehicles operated by the Woods Hole Oceanographic Institution. The recorders were recovered on 1 and 2 May 2011. Despite two years at 3,900 metres depth and a pressure of approximately 390 atmospheres, the memory modules were intact. The data was readable.
What the recorders revealed was not a story of catastrophic mechanical failure, nor of a weather event beyond the aircraft's capability, nor of a sudden structural breakup at altitude. It was a story of three pilots — trained, certified, current — who did not understand what their aircraft was doing, did not understand what they were doing to their aircraft, and never bridged the gap between the two.
The Gap Between Automation and Understanding
The BEA — the Bureau d'Enquêtes et d'Analyses pour la Sécurité de l'Aviation Civile, the French accident investigation authority — published its final report in July 2012. The report was 224 pages long and contained 41 findings and 25 safety recommendations. Its analysis went far beyond the proximate cause of the crash and into the systemic conditions that produced it.
The immediate cause was clear: the aircraft entered an aerodynamic stall at high altitude, and the crew did not recover from it. The pitot tube icing was the triggering event, but the pitot failure alone was not lethal. The Airbus A330 is perfectly capable of being flown manually without reliable airspeed indication. The procedure is straightforward: maintain a known pitch attitude and a known power setting, and the aircraft will fly at approximately the correct speed. It will not stall. It will not exceed its structural limits. It will continue to fly while the crew diagnoses the airspeed discrepancy and, in most cases, waits for the pitot tubes to clear. The icing events that had affected earlier flights with the same Thales AA probes had lasted, on average, less than a minute.
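The pitch-and-power technique can be sketched as a lookup of memorised targets. The values below are illustrative placeholders, not actual Airbus memory items:

```python
# Sketch of the "known pitch attitude, known power setting" procedure the
# text describes. The table values are representative placeholders invented
# for illustration — NOT type-specific Airbus memory items. The point is
# only that each flight phase maps to a pre-memorised pair that keeps the
# aircraft flying safely with no airspeed indication at all.

# flight phase -> (pitch attitude in degrees, thrust setting)
PITCH_POWER_TABLE = {
    "takeoff": (12.5, "TOGA"),   # placeholder values
    "climb":   (10.0, "CLB"),
    "cruise":  (2.5,  "CLB"),    # high-altitude cruise: a small pitch angle
    "descent": (-1.0, "IDLE"),
}

def unreliable_airspeed_targets(phase):
    """Return the pitch/thrust pair to hold while the speeds are unreliable."""
    return PITCH_POWER_TABLE[phase]

pitch, thrust = unreliable_airspeed_targets("cruise")
print(f"hold {pitch} degrees of pitch and {thrust} thrust until the speeds return")
```

The procedure asks nothing of the failed sensors: attitude comes from the gyros and thrust from the engine settings, both of which remained fully functional on AF447.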
The deeper cause was the crew's inability to recognise and recover from the stall. The BEA identified several contributing factors, but the thread that connected them all was the relationship between the pilots and their automation. The A330's fly-by-wire system, operating in Normal Law, provides a level of protection that is genuinely remarkable. It is, in almost all circumstances, impossible to stall the aircraft. Pilots flying the A330 in Normal Law do not need to worry about exceeding the critical angle of attack. The computer will prevent it. This protection is so effective, so reliable, and so deeply integrated into the aircraft's handling characteristics that it fundamentally shapes how pilots relate to the aircraft. They learn to operate within the protected envelope. They learn to trust the computer's limits. What they do not always learn — what is very difficult to learn and even more difficult to retain — is how to fly when the protections are removed.
The investigation revealed that Bonin had almost certainly never experienced a stall, or a near-stall, in any high-fidelity training scenario that replicated high-altitude conditions in an A330. The airline's training programme, like most programmes worldwide, focused on systems management, standard operating procedures, and the monitoring of automation. High-altitude stall recovery was not part of recurrent training. The assumption was that the flight envelope protections made it unnecessary. This assumption was sound in Normal Law. It was fatal in Alternate Law.
There was also the question of startle effect. Bonin was flying at two o'clock in the morning, in darkness, in turbulence, at high altitude, with no external visual references. When the autopilot disconnected with a cavalry charge alarm and the flight directors vanished, the sudden transition from passive monitoring to active hand-flying would have produced a surge of adrenaline and a narrowing of cognitive focus. Research in human factors consistently shows that under acute stress, human beings revert to their most deeply embedded responses. If the deeply embedded response is "the computer will not let me stall," then the intermittent sounding and silencing of the stall warning becomes not a call to action but a source of confusion.
The dual-input problem compounded everything. Had Robert been able to feel Bonin's full-aft sidestick input, the stall might have been identified and corrected within seconds. In a Boeing with mechanically linked control columns, Bonin's continuous nose-up command would have been immediately apparent. In the Airbus, it was invisible. Robert made several nose-down inputs on his own sidestick, but the system's default behaviour when both sidesticks are active is to sum the inputs algebraically. Bonin's sustained nose-up command partially or fully cancelled Robert's corrections. The net effect was an aircraft that would not respond as Robert expected, because his inputs were being negated by inputs he could not see or feel. Neither pilot explicitly called out their control inputs. The "DUAL INPUT" alert appeared on the flight displays, but in the cacophony of alarms it was either not noticed or not understood.
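That algebraic summing can be sketched in a few lines — a simplification of the real system, which also signals the dual input to the crew:

```python
# Sketch of the dual-input summing described above. Sidestick inputs run
# from -1.0 (full nose-down) to +1.0 (full nose-up); simultaneous inputs
# are added and clamped to the stick's travel range.

def summed_pitch_command(left_stick, right_stick):
    """Add both sidestick inputs and clamp the result to full deflection."""
    return max(-1.0, min(1.0, left_stick + right_stick))

# Robert (left seat) pushes halfway forward; Bonin (right seat) holds
# full aft stick. The aircraft still receives a nose-up command.
net = summed_pitch_command(-0.5, 1.0)
print(net)
```

The arithmetic captures the trap: Robert's corrective push does not override Bonin's pull, it is merely subtracted from it — and unless the pull stops, the net command stays nose-up.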
In summary, the BEA's findings traced a chain of contributing factors:
- The Thales AA pitot probes were susceptible to high-altitude ice crystal icing, and Airbus had recommended replacement before the accident occurred.
- The flight envelope protections in Normal Law had created a training and operational culture in which high-altitude stall recognition and recovery was not practised.
- The stall warning logic suppressed the warning at very low airspeeds, producing a counter-intuitive pattern where the warning sounded during partial recovery and went silent during deep stall.
- The Airbus sidestick design provided no tactile cross-cockpit feedback, making it impossible for one pilot to feel the other's control inputs.
- Crew resource management broke down: no pilot explicitly called out the aircraft's attitude, its descent rate, or the fundamental problem of excessive angle of attack.
- Captain Dubois re-entered the cockpit too late to build situational awareness and did not take positive control of the aircraft.
What Changed
The AF447 investigation produced changes across the aviation industry that are still unfolding. The most immediate was the replacement of pitot tubes. Airbus mandated the installation of Thales BA or Goodrich probes on all A330 and A340 aircraft. The BA probes had a redesigned inlet that was substantially more resistant to ice crystal accretion. This addressed the triggering event, but the BEA was clear that the pitot failure alone should not have been fatal. The deeper changes addressed the human-automation relationship.
ICAO and EASA issued new requirements for upset prevention and recovery training (UPRT). Airlines were required to introduce high-altitude stall recognition and recovery into recurrent training programmes. Simulators were modified to provide more realistic representations of high-altitude aerodynamic behaviour. Pilots would now practise scenarios they had previously been told they would never encounter.
Airbus revised the stall warning logic, re-evaluating the low-airspeed inhibition threshold that had silenced the warning during the deepest phase of AF447's stall. The flight director logic was also modified, ensuring that directors would reappear more quickly after an unreliable airspeed event. Air France overhauled its training programme, introducing specific modules for unreliable airspeed events, high-altitude manual flying, and stall recovery, alongside revised crew resource management training emphasising the explicit verbalisation of control inputs and positive transfer of control.
The broader industry grappled with a question that AF447 had posed in the starkest possible terms: what is the role of the pilot when the automation is doing the flying, and what skills must the pilot retain for the moment when the automation stops? This question is not unique to aviation. It arises wherever complex, safety-critical systems are automated to the point where the human operator's primary role becomes monitoring rather than controlling. The paradox is consistent: the more reliable the automation, the less practice the human gets at manual operation; the less practice the human gets, the less capable they are when the automation fails; and the automation always, eventually, encounters a situation it was not designed for.
The more reliable the automation, the less practice the human gets at manual operation. The less practice, the less capable they are when the automation fails. And the automation always, eventually, encounters a situation it was not designed for.
Earl Wiener, the human factors researcher whose work on automation and complacency anticipated many of the AF447 findings by decades, had articulated what he called "Wiener's Laws of aviation." Among them: "Every device creates its own opportunity for human error." And: "Exotic devices create exotic problems." The Airbus fly-by-wire system, with its envelope protection and its sidestick architecture, was an exotic device of extraordinary sophistication. It made routine flight safer than any previous generation of aircraft. And on a night over the Atlantic, it created an exotic problem that its pilots — trained for the automated world, not for the raw one — were unable to solve.
The 228 people who died on Air France Flight 447 did not die because the aircraft was broken. They did not die because the weather was unflyable. They did not die because the automation failed in a way that was unrecoverable. They died because three pilots, in a cockpit designed to make their jobs easier, encountered a situation that required them to understand the fundamental physics of flight — angle of attack, lift, energy management — at a level that their training, their experience, and their relationship with their own aircraft had not prepared them for. The pitot tubes cleared within approximately one minute, and valid airspeed indications returned while the aircraft could still have been saved. Recovery remained theoretically possible deep into the descent. Practically, given the crew's mental model, it was unreachable.
The wreckage of F-GZCP lies on the floor of the Atlantic Ocean, 3,900 metres below the surface, in the dark, in the cold, in the silence. Above it, every night, aircraft cross the ITCZ on the same route. Their pilots are now trained differently. Their pitot tubes are now different probes. Their training programmes now include scenarios that, before 1 June 2009, were considered unnecessary. The gap between automation and understanding has been narrowed. Whether it has been closed is a question that only the next failure will answer.
Sources
- BEA Final Report — Accident to Airbus A330-203, F-GZCP, 1 June 2009 — https://www.bea.aero/docspa/2009/f-cp090601.en/pdf/f-cp090601.en.pdf
- BEA Interim Reports (2009–2011) — https://www.bea.aero/en/investigation-reports/notified-events/detail/event/accident-to-the-airbus-a330-203-registered-f-gzcp-operated-by-air-france-on-01-06-2009-in-the-atlantic-ocean/
- Palmer, B. — Understanding Air France 447 (2013) — https://www.amazon.com/Understanding-Air-France-447-Palmer/dp/0989785726
- ICAO Doc 10011 — Manual on Aeroplane Upset Prevention and Recovery Training — https://www.icao.int/safety/loci/Pages/default.aspx
- Wiener, E. & Curry, R. — Flight-Deck Automation: Promises and Problems (1980) — https://ntrs.nasa.gov/citations/19800013796
- EASA Airworthiness Directive 2009-0195-E — Pitot Probes — https://ad.easa.europa.eu/ad/2009-0195-E
- Dekker, S. — The Field Guide to Understanding Human Error (2014) — https://www.routledge.com/The-Field-Guide-to-Understanding-Human-Error/Dekker/p/book/9781472439055