Come on, one software control system is much like another. We don’t want to know what’s inside the box. We just want to know what it does. Well, that’s one point of view. Slowly, year by year, as what’s in the box has become more and more complex, or at least more difficult to understand, that opinion gets more airtime. There’s no doubt I don’t give a lot of thought to how my iPhone does what it does in the palm of my hand. Whereas 30 years ago, I was intrigued to understand how a symbol generator created characters on an aircraft electronic display.

Levels of interconnection, integration and interoperation create interdependencies that become harder and harder to see and understand. I suppose we ought to coin a new “inter” word to sum up the high density of functions ticking away behind the curtain of everyday acceptance. The hidden workings of machines that we cannot live without. It’s much more than lines of software code that are transforming our lives. And transforming flying. Today, oceans of algorithmic data crunching go on with a high degree of autonomy. Some of it is transparent to a smart set of specialist technical gurus, but most of us, even expert us, sit outside the advancing wave of change.

What I find intriguing is discussion about how society will react when super complex systems go badly wrong. We know something of what happens when conventional systems go wrong. A few minutes studying the recent Boeing 737 MAX saga is a good illustration of what can happen.

It’s a rule in my mind that whatever autonomy a system is given, someone somewhere cannot escape accountability for its actions. Yes, dystopian sci-fi stories are full of rogue machines running amok. Society will surely not allow that to happen. Will we?

Industry and regulators both have an immensely important role in working together to manage risks. Politicians have a basic responsibility to listen to the conclusions of expert findings. When the amalgam of workings inside the box has features such as machine learning, we go way beyond the conventional approach to systems. Beyond what we have been doing successfully to assure safety for the last 30 years.

Demands for greater performance mean that we cannot be Luddite about the use of non-deterministic systems in safety-related control systems. Their adaptability, agility and flexibility can help us meet many environmental and societal aims. But the classical questions of “what if?” still need to be addressed in detail to assure resilience, robustness, and basic levels of safety.

And we must do all this at the same time as updating the airborne software of some aircraft still in service using floppy disks.

Emerging Safety Issues 2

There’s no stark dividing line between the criteria that I flashed up in the last few paragraphs. In fact, there will be major aviation projects that bring them all together in a new way. With the gathering pressure to address aviation’s climate impact there’s a strong desire to fly, but in radically different ways. Take the blended wing body (BWB) concept[1]. It’s not new to aerodynamics, but until recently the concept remained on drawing boards[2] and in marketing brochures.

It’s likely that the next big adventure in aircraft design will be a shape that has no distinct separation between the fuselage and wings. Such blended structures may have properties that make them much more environmentally friendly. Yet they should still be able to operate from relatively conventional airports. If we combine a BWB with high levels of automation and systems integration, and throw in hydrogen propulsion for good measure, there are going to be a myriad of emerging safety issues to consider. That’s one for the list.

Electrification is a snowball that’s rolling and gathering ever greater speed. Industry has its eyes on high-power fuel cells. Hydrogen-powered fuel cells are a green alternative to combustion engines. They may have few moving parts, but exotic materials and high temperatures present a bucketload of technical challenges if they are to be used at altitude in all weathers. Promising technology may tick many boxes, but can it be made safe? That’s another one for the list.

To fly, and to do it efficiently, watch the birds. They have mastered the art of formation flying to harness its advantages. Formation flying may reduce fuel use by minimizing drag. Experiments with drones flying in formation are being conducted. However, the use of this way of flying for large transport aircraft is still a research subject[3]. Procedures exist for formation flying of military and general aviation aircraft (aerobatics). What safety issues need attention to make this work for passenger aircraft?

It’s possible to go further for each aircraft in flight too. The extensive use of artificial intelligence to optimise flight paths has much potential. Since the introduction of Wi-Fi in the cabin, there are occasions when passengers have better real-time weather information than the flight crew. The ability to meet all the collision risk objectives and pick up the most advantageous winds is achievable. Aircraft innovations like tactical trajectory optimisation are great at lowering fuel consumption. Any safety issues emerging from the use of such systems will likely be linked to their level of autonomy.

Integrating autonomous aircraft into controlled airspace is a challenge of today. As we move forward, the variety of autonomous aircraft will grow. An application where the commercial marketplace may drive rapid adoption is that of large autonomous cargo freighters. Emphasis on the word “large” is appropriate given that it’s third parties that will be at risk in the event of accidents and incidents. The loss of cargo can be insured, but how will society react to accidents that may cause fatalities on the ground, if these operations proliferate?




Emerging Safety Issues

Of the three approaches to aviation safety, the one that depends most on expert opinion is trying to anticipate what’s over the horizon. Reactive safety is strongly supported by historic data from accidents and incidents. A proactive approach to safety leans heavily on data from everyday operations. When it comes to the question of what’s going to emerge as a significant safety issue in the next 10 years, past or current data may not be the best guide.

Regrettably, several aviation safety issues persist as if they were constants. Given the nature of flying, it’s difficult to imagine that the number of Controlled Flight into Terrain (CFIT) events will ever reach zero. Similarly with Loss of Control (LOC) events. These events should continue to diminish worldwide, but their elimination is the stuff of dreams.

Flight is always a balance between benefit and risk. There’s no possibility of operating an aircraft without safety risk. The benefits of flight are wide-ranging but often linked to economy and utility. So, in the quest for Emerging Safety Issues (ESIs) we need to consider what new factors might tip the balance between benefit and risk in at least three cases: existing, planned, or entirely novel aircraft flight operations.

There may be global aviation ESIs needing evaluation related to:

  • the use of aircraft in new ways[1];
  • a new understanding of known phenomena[2];
  • futurists’ speculations[3];
  • shifting societal values[4];
  • accelerated adoptions of technology[5].

It’s possible to become overly hypothetical. That’s the point where a reasonable time horizon needs to be drawn. A decade is a good measure for identifying and acting upon an issue. It’s a realistic way of keeping our feet on the ground. In the safety regulatory world, a decade is a short period of time.

With the above in mind, it’s possible to brainstorm a list of ESIs. Subjects like urban air mobility, electric and hydrogen propulsion, and new materials are good candidates. These could be called large-scale issues since they are wide-ranging and self-evidently applicable to aviation. Additionally, there are murkier issues like cybersecurity, quantum computing and blockchain methods that are issues for every part of society. Take your pick.

[1] Example: Higher speeds or altitudes or greatly extended range or traffic density increases

[2] Example: Solar activity, climate change, shifting human factors

[3] Example: New materials, advanced artificial intelligence, new propulsion systems

[4] Example: Risk aversity, liability, service expectations, adventurous sports

[5] Example: Smart phones have changed far more than was envisaged

It can happen

Theories are nice. Having a way of explaining an event or failure, or both, is a nice comfort blanket. It can give us a way of trying to look ahead. The common notion that if it has happened once, it can happen again is part of our mental hard wiring. We store up memories and are constantly ordering and re-ordering them in our minds. Looking for patterns.

What cuts across all this is a simple factual recollection of an event. Examples can be illustrative of a theory. They can also stand alone as evidence that any one of us can fall foul of the unthinkable. One of my favourite events, which has the ingredients of the unthinkable, happened in the 1990s. It’s about exploration and the space industry. That said, a story on this theme could be written about any part of the aerospace world.

Safety assessments are scoped to consider anything that’s not extremely improbable. Let’s be clear: that’s an approach that consciously asks people to discount some events as absurd, never going to happen, or beyond what we would ever do. The lesson is that when considering how things go wrong it’s as well to be open-minded.

Let’s go back to December 1998, when a spacecraft called the Mars Climate Orbiter (MCO) was launched, intended to skim the upper atmosphere of the planet and return data to Earth. It took over nine months to get to Mars. A journey like that comes with costs mounting in the tens of millions.

As the spacecraft was about to go into orbit, it disappeared behind Mars but failed to re-emerge. Efforts to communicate with it continued for a long time, but nothing came back. An investigation into the MCO’s loss concluded that it had crashed into the surface of the red planet. But this was not the crux of the matter. Such projects carry risks that can be unknown.

Investigation concluded that the MCO had been obliterated[1]. It was off course by 60 miles, so it plunged to destruction rather than entering orbit around Mars.

Now, I said that any one of us can fall foul of the unthinkable. In this situation, that’s what happened. The organisation supplying spacecraft thruster data had been using imperial units: thruster performance data was in “English” units of pound-force seconds. NASA’s navigation team had assumed the units were metric. The trajectory modellers assumed the data was provided in newton seconds, as per their requirements. Thus, the difference between pound-force seconds and newton seconds sealed the fate of the MCO.
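The failure mode is easy to reproduce in miniature. The sketch below is hypothetical code with invented numbers and function names, not NASA’s actual software; it only shows how a value produced in pound-force seconds but consumed as newton seconds silently skews a model by a constant factor of about 4.45:

```python
# Hypothetical illustration of the MCO unit mismatch -- not NASA's actual code.
# One team emits thruster impulse in pound-force seconds (lbf*s); the
# trajectory modellers consume the bare number as if it were newton seconds.

LBF_S_PER_N_S = 4.448222  # 1 lbf*s expressed in N*s

def ground_software_impulse() -> float:
    """Reports a small thruster burn; the number is in lbf*s."""
    return 10.0  # hypothetical burn size

def trajectory_model(impulse_n_s: float) -> float:
    """Accumulates impulse, assuming the input is already in N*s."""
    return impulse_n_s

# What the navigation team actually fed into the model:
assumed = trajectory_model(ground_software_impulse())
# What they should have fed in after converting units:
correct = trajectory_model(ground_software_impulse() * LBF_S_PER_N_S)

# Every modelled burn is under-counted by a factor of ~4.45, and that
# error compounds across many small burns over a nine-month cruise.
print(correct / assumed)  # ~4.448
```

The conversion factor is the real ratio between a pound-force and a newton; everything else here is invented purely to make the mechanism visible.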

Discovering the cause of the loss must have been excruciatingly embarrassing. One of the published recommendations, to take steps to improve communication, seems modest. In addition to taking on board all the investigation’s findings, my take on this event is two-fold.

  1. Think the unthinkable. Not all the time, but every so often it pays dividends and
  2. Question assumptions. Even the most cherished simple assumptions can be wrong.

These two are universally applicable.