I read that there’s a lesson to be learned from the Maneuvering Characteristics Augmentation System (MCAS) experience that plagued Boeing and led to fatalities. A lot has been written about the tragic saga. Much of it of great value.
It’s true. Aviation advances as the community learns lessons from incidents and accidents. Yes, there’s variability in the effectiveness of this learning process. Occasions when oceans of words are written about one case and dozens of others are given an inappropriately light touch[1]. A trustworthy centralised repository of safety recommendations from published aviation accident reports is a useful tool. A point of reference. In the first months of the European Aviation Safety Agency (EASA) in Cologne, back in 2005, my team established such a database. It’s only possible to track the follow-up of key safety recommendations if there’s a well-maintained administrative system. Safety is often about the intelligent use of data.
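To make the tracking idea concrete, here is a minimal sketch of the kind of record such a database might hold. This is my own illustration, not EASA’s actual schema; the class, field names, and sample identifiers are all hypothetical.

```python
# Hypothetical sketch of a safety-recommendation tracking record.
# Field names and example values are my assumptions, not EASA's schema.
from dataclasses import dataclass, field


@dataclass
class SafetyRecommendation:
    rec_id: str            # reference number, e.g. derived from the report
    source_report: str     # published accident report it came from
    addressee: str         # authority or manufacturer it is aimed at
    text: str              # the recommendation itself
    status: str = "open"   # open -> responded -> closed
    responses: list = field(default_factory=list)

    def record_response(self, note: str, closes: bool = False) -> None:
        """Log a follow-up action; optionally close the recommendation."""
        self.responses.append(note)
        self.status = "closed" if closes else "responded"


# Example follow-up trail for one (invented) recommendation.
rec = SafetyRecommendation(
    rec_id="SR-2005-001",
    source_report="Published accident report (example)",
    addressee="National aviation authority (example)",
    text="Review the design assumptions behind the augmentation system.",
)
rec.record_response("Rulemaking task opened.")
rec.record_response("Certification requirement amended.", closes=True)
print(rec.status)  # closed
```

The point is simply that follow-up is only trackable if each recommendation carries its status and response history with it; the administrative system is what keeps that trail alive.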
Cockpit design, and the human factors issues involved, is without doubt one of the most critical aspects of an aircraft. Society is not ready for fully autonomous passenger-carrying aircraft. I believe it will happen, in decades to come, but the horizon is way off. For certain types of vehicle, autonomy must be the solution, given that flight control is beyond human capacities. Here I’m thinking mostly of hypersonic and space flight.
For a pilot to exercise responsibility for a flight there’s a need for, at least, a basic understanding of what the machine is doing. In past times of strings and wires and clockwork instruments, that understanding was ingrained knowledge gained from training and experience.
Future aircraft systems will not be easily described as functional blocks that perform well-understood, dedicated functions: an autopilot, an autothrottle, autobrakes, a flight management system, even an engine. Hybridisation is coming.
That does not mean a pilot must understand the inner workings of a multicore microprocessor or a complex software algorithm. Flight test pilots are the exception in this case.
The design goal should always be to make safer systems. Engineering these aircraft systems is not a case of simply fitting together a set of Lego-like components. The error made with MCAS was one that ignored this fact. Interdependencies are manifold.
Ideally, future aircraft systems, however capable and complex, should be describable, predictable, and ultimately trustworthy. These words sound so simple. One reason this is not simple is that very word: complex. The moment there’s a massive number of possible combinations and permutations of conditions that may exist, boundaries must be set. What’s a little more reassuring is that complexity is far from new in human experience[2].
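The scale of the problem is easy to show with the doubling arithmetic behind the wheat-and-chessboard story cited in [2]. A short sketch, on the assumption that each independent on/off condition in a system doubles the number of possible states:

```python
# Illustrative arithmetic only: each independent binary condition
# doubles the number of possible system states -- the same doubling
# as the wheat-and-chessboard problem.

def state_count(n_conditions: int) -> int:
    """Distinct combinations of n independent on/off conditions."""
    return 2 ** n_conditions


# Wheat and chessboard: one grain on the first square, doubling
# on each of the 64 squares.
chessboard_total = sum(2 ** square for square in range(64))  # 2**64 - 1

for n in (10, 20, 64):
    print(f"{n} conditions -> {state_count(n):,} states")
print(f"chessboard grains: {chessboard_total:,}")
```

Sixty-four binary conditions already give more states than anyone could exhaustively test, which is exactly why boundaries must be set rather than every combination enumerated.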
Just to make the airspace of the future even more complex, it’s no longer correct to think of an aircraft as alone and free to make any appropriate manoeuvre. Increasing connectivity, cybersecurity, and artificial intelligence (AI) all come into the mix.
To stay safe, pilots will have to appreciate how constraints and boundaries are managed. This information must be provided transparently and preferably with options.
[1] https://www.iata.org/en/pressroom/opinions/the-safety-paradox-fewer-accidents-greater-responsibility/
[2] https://en.wikipedia.org/wiki/Wheat_and_chessboard_problem