Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs up reactions that are both passionate and pragmatic. Maybe it's a Marmite issue[1]. Mention of the subject triggers an instant judgement.

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early on in my career, was to lead the integration of a cockpit display system into a large new helicopter[2]. New, at the time. The design team I was part of comprised people with two different professional backgrounds. One group had an engineering background, like me, and the other had qualifications in psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved away from purely mechanical interactions with aircraft, the balance of flight control has come to be shared between pilot and aircraft systems. There's no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There's a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that's a brave statement. We are generations away from a world where sentient machines produce ever better machines. It's the stuff of sci-fi.

As we have tried to make ever more capable machines, failures are a normal part of evolution. Those cycles of attempts and failures will need to run into the billions and billions before human capabilities are fully matched. Yes, I know that's an assertion, but it has taken humans more than a million years to get to have this discussion. That's with our incredible brains.

What AI can do well is to enhance human capabilities[3]. Let's say that, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained for, and not easily understood. This is where the benefits and speed of AI can add a lot. Aircraft systems using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to a flight crew. In time-critical events, AI can help.

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.


[1] https://www.macmillandictionary.com/dictionary/british/marmite_2

[2] https://en.wikipedia.org/wiki/AgustaWestland_AW101

[3] https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

First Encounter

My first encounter with what could be classed as early Artificial Intelligence (AI) was a Dutch research project. It was around 2007. Let's first note that a mathematical model isn't pure AI, but it's an example of a system that is trained on data.

It almost goes without saying that learning from accidents and incidents is a core part of the process to improve aviation safety. A key industry and regulatory goal is to understand what happened when things go wrong and to prevent a repetition of events.

Civil aviation is an extremely safe mode of transport. That said, because of the size of the global industry there are enough accidents and incidents worldwide to provide useful data on the historic safety record. Despite significant pre-COVID pandemic growth of civil aviation, the number of accidents is so low that further reduction in numbers is proving hard to win.

What if a system was developed that could look at all the historic aviation safety data and make a prediction as to what accidents might happen next?

The first challenge is the word “all”: compiling such a comprehensive record of global aviation safety is a demanding task. It's true that comprehensive databases do exist, but even within these extremely valuable records there are errors, omissions, and summary information.

There's also the pushback that is often associated with record keeping. A system that demands detailed record keeping of even the most minor incident can be burdensome. Yes, such record keeping has admirable objectives, but the “red tape” wrapped around those objectives can have negative effects.

Looking at past events has only one aim: to act now to prevent aviation accidents in the future. Once a significant, comprehensive database exists, analysis can provide simple indicators that give clues as to what might happen next. Even basic mathematics can give us a trend line drawn through a set of key data points[1]. It's effective but crude.
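To make that concrete, here's a minimal sketch of the trend-line idea: an ordinary least-squares line fitted through yearly accident counts. The figures are invented purely for illustration; they are not real safety statistics.

```python
# A minimal sketch of a trend line through key data points: an ordinary
# least-squares fit over hypothetical yearly accident counts.
# The numbers are invented for illustration only.

years = [2015, 2016, 2017, 2018, 2019]
accidents = [92, 88, 85, 86, 81]  # hypothetical counts, not real data

n = len(years)
mean_x = sum(years) / n
mean_y = sum(accidents) / n

# Least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, accidents)) / sum(
    (x - mean_x) ** 2 for x in years
)
intercept = mean_y - slope * mean_x

projection_2020 = slope * 2020 + intercept
print(f"Trend: {slope:+.2f} accidents per year; projected 2020 count: {projection_2020:.1f}")
```

That's the "effective but crude" end of the spectrum: a single straight line, with no sense of why the numbers move.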

What if a prediction could take on board all the global aviation safety data available, together with knowledge of how civil aviation works, and mix it in such a way as to provide reliable predictions? This is prognostics. It's a bit like the oracle at Delphi[2]. The aviation “oracle” could be consulted about the state of affairs in respect of aviation safety. Dream? Maybe not.

The acronym CAT normally refers to large commercial air transport (CAT) aeroplanes. Here, though, it refers to the Causal Model for Air Transport Safety (CATS)[3]. This research project could be called an early use of “Big Data” in aviation safety work. However, as I understand it, the original aim was to make prognostics a reality.

Using Bayesian network-based causal models, it was theorised that a map of aviation safety could be produced. From that, it might be possible to predict the direction of travel for the future.
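As a sketch of the mechanics only (the real CATS model was far larger and built on expert elicitation and real data), a Bayesian network combines conditional probabilities over causal factors and lets you compute the likelihood of an outcome. The node names and probabilities below are invented.

```python
# A toy Bayesian-network fragment: two hypothetical precursor nodes feeding
# one outcome node. All node names and probabilities are invented to show
# the mechanics; they bear no relation to the actual CATS model.

p_crew_fatigue = 0.10       # P(crew fatigue present)
p_adverse_weather = 0.20    # P(adverse weather present)

# Conditional probability table: P(unstable approach | fatigue, weather).
p_unstable_approach = {
    (True, True): 0.050,
    (True, False): 0.020,
    (False, True): 0.015,
    (False, False): 0.005,
}

# Marginalise over the parent nodes to get the overall P(unstable approach).
p_outcome = 0.0
for fatigue in (True, False):
    for weather in (True, False):
        p_f = p_crew_fatigue if fatigue else 1 - p_crew_fatigue
        p_w = p_adverse_weather if weather else 1 - p_adverse_weather
        p_outcome += p_f * p_w * p_unstable_approach[(fatigue, weather)]

print(f"P(unstable approach) = {p_outcome:.4f}")
```

Scaled up to hundreds of nodes, with probabilities drawn from incident data and expert judgement, the same arithmetic is what lets such a model point at where risk is concentrated.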

This type of quantification has a lot of merit. It has weaknesses, in that the Human Factor (HF) often defies prediction. However, as AI advances, maybe causal modelling ought to be revisited. New off-the-shelf tools could be used to look again at the craft of prediction.


[1] https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Air_safety_statistics_in_the_EU

[2] https://www.history.com/topics/ancient-greece/delphi

[3] https://open.overheid.nl/documenten/ronl-archief-d5cd2dc7-c53f-4105-83c8-c1785dcb98c0/pdf

Pause

An open letter has been published[1]. Not for the first time. It asks those working on Artificial Intelligence (AI) to take a deep breath and pause their work. It's signed by AI experts and interested parties, like Elon Musk. This is a reaction to the competitive race to launch ever more powerful AI[2]. For all technology launches, it's taking fewer and fewer years to get to a billion users.

If the subject was genetic manipulation, the case for a cautious step-by-step approach would be easily understood. However, the digital world, and its impact on our society's organisation, isn't viewed as being as important as genetics. Genetically Modified (GM) crops got people excited and anxious. An artificially modified social and political landscape doesn't seem to concern people quite so much. It may be that the basis for this ambivalence is a false view that we are more in control of one as opposed to the other. It's more likely this ambivalence stems from a lack of knowledge.

One response to the open letter[3] I saw was thus: “A lot of fearmongering luddites here! People were making similar comments about the pocket calculator at one time!” This is to totally misunderstand what is going on with the rapid advance of AI. I think the impact on society of the proliferation of AI will be greater than that of the invention of the internet. It will change the way we work, rest and play. It will do it at remarkable speed. We face an unprecedented challenge.

I'm not for one moment advocating a regulatory regime that is driven by societal puritans. The open letter is not proposing a ban. What's needed is a regulatory regime that can moderate aggressive advances so that knowledge can be acquired about the impacts of AI.

Yesterday, a government policy was launched in the UK. The problem with saying that there will be no new regulators, and that regulators will need to act within existing powers, is obvious. It's a diversion of resources away from existing priorities to address challenging new priorities. That, in and of itself, is not an original regulatory dilemma. It could be said that's why we have sewage pouring into rivers up and down the UK.

In an interview, Conservative Minister Paul Scully MP mentioned sandboxing as a means of complying with policy. This is to create a “safe space” to try out a new AI system before launching it on the world. It's a method of testing and trials that is useful for gaining an understanding of conventional complex systems.

The reason this is not easily workable for AI is that it's not possible to build enough confidence that AI will be safe, secure and perform its intended function without running it live. For useful AI systems, even the slightest change in the start-up conditions or training can produce drastically different outcomes. A live AI system can be like shifting sand. It will build up a structure to solve problems, and do it well, but the characteristics of its internal workings will vary significantly from one similar system to another. Thus, the AI system's workings, as they are run through a sandbox exercise, may be unlike the same system's workings running live. Which leads to the question: what confidence can a regulator, with an approval authority, have in a sandbox version of an AI system?

Pause. Count to ten and work out what impacts we must avoid. And how to do it.

Policy & AI

Today, the UK Government published an approach to Artificial Intelligence (AI)[1]. It's in the form of a white paper. That's a policy document created by the Government that sets out its proposals for future legislation.

This is a big step. Artificial Intelligence (AI) attracts both optimism and pessimism. Utopia and dystopia. There are a lot more people who sit in these opposing camps than there are who sit in the middle. It's big. Unlike any technology that has been introduced to the whole populace.

On Friday last, I caught the film I, Robot (2004)[2] showing early evening on Film 4. It's difficult to believe this science fiction is nearly 20 years old, and the Isaac Asimov short stories on which it's loosely based are from the 1950s. AI is a fertile subject for the imagination to range over.

Fictional speculation about AI has veered towards the dystopian end of the scale, although that's not the whole story by far. One example of good AI is the sentient android in the Star Trek universe. The android “Data”, based aboard the USS Enterprise, strives to help humanity and be more like us. His attempts to understand human emotions are often significant plot points. He's a useful counterpoint to evil alien intelligent machines that predictably aim to destroy us all.

Where fiction helps is to give an airing to lots of potential scenarios for the future. That’s not trivial. Policy on this rapidly advancing subject should not be narrowly based or dogmatic.

Where there isn't a great debate is over the high-level objectives that society should endeavour to achieve. We want technology to do no harm. We want technology to be trustworthy. We want technology to be understandable.

Yet, we know from experience, that meeting these objectives is much harder than asserting them. Politicians love to assert. In the practical world, it’s public regulators who will have to wrestle with the ambitions of industry, unforeseen outcomes, and negative public reactions.

Using the words “world leading” repeatedly is no substitute for resourcing regulators to beef up their capabilities when faced with rapid change. Vague and superficial speeches are fine in context. After all, there's a job to be done maintaining public confidence in this revolutionary technology.

What's evident is that we should not delude ourselves. This technical transformation is unlike any we have so far encountered. Its radical nature and speed mean that even when Government and industry work together they are still going to be behind the curve.

As a fictional speculation an intelligent android who serves as a senior officer aboard a star ship is old school. Now, I wonder what we would make of an intelligent android standing for election and becoming a Member of Parliament?


[1] The UK’s AI Regulation white paper will be published on Wednesday, 29 March 2023. Organisations and individuals involved in the AI sector will be encouraged to provide feedback on the white paper through a consultation which launches today and will run until Tuesday, 21 June 2023.

[2] https://en.wikipedia.org/wiki/I,_Robot_(film)

Good enough

It’s not a universal rule. What is? There are a million and one ways that both good and bad things can happen in life. A million is way under any genuine calculation. Slight changes in decisions that are made can head us off in a completely different direction. So much fiction is based on this reality.

Yes, I have watched “Everything Everywhere All at Once”[1]. I'm in two minds about my reaction. There's no doubt that it has an original take on the theory of multiple universes and how they might interact. It surprised me just how much comedy formed the core of the film. There are moments when the pace of the story left me wondering where on earth it was going. Overall, it is an enjoyable movie and it's great to see such originality and imagination.

This strange notion of multiple universes, numbered beyond count, has an appeal but it's more than a headful. What I mean is that trying to imagine what it looks like, if such a thing is possible, is almost hopeless. What I liked about the movie is that small differences are more probable and large differences are far less probable. So, to get to the worlds that are radically different from where you are, it's necessary to do something extremely improbable.

Anyway, that's not what I'm writing about this morning. I've just been reading a bit about Sir Robert Alexander Watson-Watt, the man credited with giving us radar technology.

Perfect is the enemy of good is a dictum that has several attributions. It keeps coming up. Some people celebrate those who strive for perfection. However, in human affairs, perfection is an extremely improbable outcome in most situations. There's a lot of talent and perspiration needed to jump from average to perfect in any walk of life.

What the dictum above shorthands is that throwing massive amounts of effort at a problem can prevent a good outcome. Striving for perfection, faced with our human condition, can be a negative.

That fits well with me. My experience of research, design and development suggested the value of incremental improvement and not waiting for perfect answers to arise from ever more work. It’s the problem with research funding. Every paper calls for more research to be done.

In aviation safety work the Pareto principle is invaluable. It can be explained by a ghastly Americanism. Namely, let's address the “low hanging fruit” first. In other words, let's make the easiest improvements, the ones that produce the biggest differences, first.
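As a rough illustration of that ordering, and nothing more, the sketch below ranks some invented safety findings by a crude benefit-to-effort ratio; the descriptions and numbers are made up.

```python
# A crude Pareto-style ordering: tackle the findings with the best
# benefit-to-effort ratio first. All entries are invented for illustration.

findings = [
    # (description, estimated risk reduction, estimated effort in person-days)
    ("Clarify checklist wording for de-icing", 6, 3),
    ("Update unstable-approach go-around guidance", 8, 5),
    ("Redesign ground-handling vehicle routes", 5, 40),
    ("Replace legacy runway lighting", 9, 120),
]

# Sort by benefit per unit of effort, highest first.
ranked = sorted(findings, key=lambda f: f[1] / f[2], reverse=True)

for description, benefit, effort in ranked:
    print(f"{description}: benefit {benefit}, effort {effort} person-days")
```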

I'm right on board with Robert Watson-Watt and his “cult of the imperfect”. He's quoted as saying: “Give them the third best to go on with; the second best comes too late, the best never comes”. It's to say: do enough of what works now without agonising over all the other possible better ways. Don't procrastinate (too much).


[1] https://www.imdb.com/title/tt6710474/

Digital toxicity

There’s a tendency to downplay the negative aspects of the digital transition that’s happening at pace. Perhaps it’s the acceptance of the inevitability of change and only hushed voices of objection.

A couple of simple changes struck me this week. One was my bank automatically moving me to an on-line statement and the other was a news story about local authorities removing pay machines from car parks on the assumption everyone has a mobile phone.

With these changes there’s a high likelihood that difficulties are going to be caused for a few people. Clearly, the calculation of the banks and local authorities is that the majority rules. Exclusion isn’t their greatest concern but saving money is high on their list of priorities.

The above aside, my intention was to write about more general toxic impacts of the fast-moving digital transition. Now, please don’t get me wrong. In most situations such a transition has widespread benefits. What’s of concern is the few mitigations for any downsides.

Let’s list a few negatives that may need more attention.

Addiction. With social media this is unquestionable[1]. After all, digital algorithms are developed to get people engaged and keep them engaged for as long as possible. It's the business model that brings in advertising revenues. There's FOMO too: the fear of missing out on something new or novel that others might see but you don't.

Attention. Rapidly stroking a touch screen to move from image to image, or video to video, encourages less attention to be given to any one piece of information. What research there is shows a general decline in attention span[2] as ever more information is made easily available.

Adoration. Given that so many digital functions are provided with astonishing accuracy, availability, and speed there’s a natural inclination to trust their output. When that trust is justifiable for a high percentage of the time, the few times information is in error can easily be ignored or missed. This can lead to people defending or supporting information that is wrong[3] or misleading.

It's reasonable to say there are downsides with any use of technology. That said, it's as well to try to mitigate those that are known about and understood. The big problem is the cumulative effect of the downsides. This can increase the fragility and vulnerability of the systems that we all depend upon.

If digital algorithms were medicines or drugs, there would be a whole array of tests conducted before their public release. Some would be strongly regulated. I’m not saying that’s the way to go but it’s a sobering thought.


[1] https://www.theguardian.com/global/2021/aug/22/how-digital-media-turned-us-all-into-dopamine-addicts-and-what-we-can-do-to-break-the-cycle

[2] https://www.kcl.ac.uk/news/are-attention-spans-really-collapsing-data-shows-uk-public-are-worried-but-also-see-benefits-from-technology

[3] https://www.bbc.co.uk/news/business-56718036

Comms

The long history of data communications between air and ground has had numerous stops and starts. It's not new to use digital communications while flying around the globe. That said, it has not been cheap, and traditional systems have evolved only slowly. We might think Controller Pilot Data Link Communications (CPDLC)[1] is quite whizzy. It's not. It belongs to the Windows 95 generation. Clunky messages and limited applications.

The sluggishness of adoption of digital communications in commercial aviation has been for several reasons. For one, standardised, certified, and maintainable systems and equipment have been expensive. It's not just the purchase and installation but the connection charges that mount up.

Unsurprisingly, aircraft operators have moved cautiously unless they can identify an income stream to be developed from airborne communication. That's one reason why passengers accessing the internet from their seats can have better connections than the two crew in the cockpit.

Larger nations’ military flyers don’t have a problem spending money on airborne networking. For them it’s an integral part of being able to operate effectively. In the civil world, each part of the aviation system must make an economic contribution or be essential to safety to make the cut.

The regulatory material applicable to Airborne Communications, Navigation and Surveillance (CS-ACNS)[2] can be found in publications coming from the aviation authorities. This material has the purpose of ensuring a high level of safety and aircraft interoperability. Much of this generally applicable material has evolved slowly over the last 30 years.

Now, it's good to ask: is this collection of legacy aviation systems going to be changed by the new technologies that are rapidly coming on-stream this year? Or are the current mandatory equipage requirements likely to stay the same but be greatly enhanced by cheaper, faster, and lower latency digital connections?

This year, Starlink[3] is offering high-speed, in-flight internet connections with global connectivity. This company is not the only one developing Low Earth Orbit (LEO)[4] satellite communications. There are technical questions to be asked in respect of safety, performance, and interoperability, but it's a good bet that these new services will be very capable and, what's more, not so expensive[5].

It’s time for airborne communications to step into the internet age.

NOTE: The author was a part of the EUROCAE/RTCA Special Committee 169 that created Minimum Operational Performance Standards for ATC Two-Way Data Link Communications back in the 1990s.

POST 1: Elon Musk’s Starlink Internet Service Coming to US Airlines; Free WiFi (businessinsider.com)

POST 2: With the mandate of VDLM2 we evolve at the pace of a snail. Internet Protocol (IP) Data Link may not be suitable for all uses but there’s a lot more that can be done.


[1] https://skybrary.aero/articles/controller-pilot-data-link-communications-cpdlc

[2] https://www.easa.europa.eu/en/document-library/easy-access-rules/easy-access-rules-airborne-communications-navigation-and

[3] https://www.starlink.com/

[4] https://www.esa.int/ESA_Multimedia/Images/2020/03/Low_Earth_orbit

[5] https://arstechnica.com/information-technology/2022/10/starlink-unveils-airplane-service-musk-says-its-like-using-internet-at-home/

Small Boats

Are there really a hundred million people coming to Britain? Or is this a desperate scare tactic adopted by a Conservative Minister who has run out of workable ideas? It's certainly the sort of tabloid headline that a lot of conservative supporters like to read. As we saw in the US, with former President Trump's rhetoric on building a wall, these themes stir up negative emotions and prejudice. It's a way of dividing people.

Xenophobia is defined as a fear and hatred of strangers or foreigners or of anything that is strange or foreign. With nearly 8 billion people on Earth[1] the potential for this destructive fear to be exploited has never been greater. Here, the Conservative Party is increasingly dominated by xenophobia and demagoguery, whatever a change of leadership may be trying to cover-up.

Will Parliamentary debate save us from the worst instincts highlighted in the Government's latest proposals on small boat crossings? That's a big question when the ruling political party has such a large parliamentary majority. Debate is likely to be heated and lacking in objectivity.

Pushing the boundaries of international law can cause reputational damage, even if these rum proposals are defeated. However, what concerns most commentators is the high likelihood that the proposed measure will not work. They are merely a more extreme version of past failed policies.

One of the poorest political arguments is to criticise an opponent for reasoned opposition. It goes like this: here's my policy, and by opposing it without providing your own policy, you automatically make my policy a good one. It's like planning to build a dangerously rickety bridge, likely to fail, and pointing to those who criticise the project as a reason why it's a good project.

When spelt out like this, it's clear how curiously subversive this shoddy bombast can be. However, one of the basic party-political instincts, to seek headlines and publicity, has overridden common sense in this case. In the Government's case, legislating regardless of the consequences is an act of political desperation. Sadly, that's where we are in this pre-election period.

NOTE: In June 2022, the UK had a prison population of roughly 89,520 people. The detention facilities needed to enable the Government’s small boats policy would need to be in the region of 40,000 people. Yet, there’s no published plan for a significant expansion of detention facilities. 


[1] https://www.census.gov/popclock/world

Just H

What is the future of Hydrogen in Aviation? Good question. Every futurologist has a place for Hydrogen (H) in their predictions. However, the range of optimistic projections is almost matched by the number of pessimistic ones.

There's no doubt that aircraft propulsion generated using H as a fuel can be done. There's a variety of ways of doing it but, the fact is, it can be done. What's less clear is a whole mass of factors related to economics, safety and security, and the desirability of having a hydrogen-based society.

H can be a clean form of energy[1], as in its purest form the process of combustion produces only water. We need to note that combustion processes are rarely completely pure.

It's an abundant element, but it prefers to be in the company of other elements. After all, the planet is awash with H2O. When H is on its own it has no colour, odour, or taste. In low concentrations, we humans could be oblivious to it even though there's a lot of it in the compounds that make us up.

Number one on the periodic table, it’s a tiny lightweight element that can find all sorts of ways of migrating from A to B. Ironically, that makes it an expensive element to move around in commercially useable quantities. H is often produced far away from where it’s used. For users like aviation, this makes the subject of distribution a fundamental one.

Part of the challenge of moving H around is finding ways of increasing its energy density. So, making it liquid or pumping it as a high-pressure gas are the most economic ways of using it. If this is to be done with a high level of safety and security, then it is not going to come cheap.
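Some rough, rounded arithmetic shows why energy density dominates the distribution question. The figures below are approximate textbook values, used only for an order-of-magnitude comparison.

```python
# Order-of-magnitude comparison of volumetric energy density.
# Figures are approximate, rounded values used for illustration only.

kerosene = {"mj_per_kg": 43.0, "kg_per_litre": 0.80}
liquid_h2 = {"mj_per_kg": 120.0, "kg_per_litre": 0.071}  # cryogenic, about -253 degC

def energy_per_litre(fuel: dict) -> float:
    """Energy stored per litre of fuel, in megajoules."""
    return fuel["mj_per_kg"] * fuel["kg_per_litre"]

ker = energy_per_litre(kerosene)     # roughly 34 MJ per litre
lh2 = energy_per_litre(liquid_h2)    # roughly 8.5 MJ per litre

print(f"Kerosene:        {ker:.1f} MJ per litre")
print(f"Liquid hydrogen: {lh2:.1f} MJ per litre")
print(f"Tank volume needed for equal energy: about {ker / lh2:.1f} times more")
```

In other words, even liquefied, hydrogen carries roughly a quarter of the energy per litre of kerosene, which is why tanks, pipework and logistics loom so large.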

There are a lot of pictures of what happens when this goes wrong. Looking back at the airships of the past, there are numerous catastrophic events to reference. More relevantly, there's the space industry to look at for spectacular failures[2]. A flammable hydrogen–air mixture doesn't take much to set it off[3]. The upside is that H doesn't hang around. Compared to other fuels, H is likely to disperse quickly. It will not pool on the ground like kerosene does.

In aviation, super-strict control procedures and maintenance requirements will certainly be needed. Every joint and connector will need scrupulous attention. Every physical space where gas can accumulate will need a detection system and/or a fail-proof vent.

This is a big new challenge to aircraft airworthiness. The trick is to learn from other industries.

NOTE: The picture. At 13:45 on 1 December 1783, Professor Jacques Charles and the Robert brothers launched a manned balloon in Paris. The first manned hydrogen balloon flight was 240 years ago.


[1] https://knowledge.energyinst.org/collections/hydrogen

[2] https://appel.nasa.gov/2011/02/02/explosive-lessons-in-hydrogen-safety/

 

To provoke

Social media provocateurs are on the rise. Say something that's a bit on the edge and wait for the avalanche of responses. It's a way of getting traffic to a site. The scientific and technical sphere has fewer of these digital provocateurs than the glossy magazine brigade, but the phenomenon is growing.

Take a method or technique that is commonly used, challenge people to say why it’s good while branding it rubbish. It’s not a bad way to get clicks. This approach to the on-line world stimulates several typical responses.

One: Jump on-board. I agree the method is rubbish. Two: I’m a believer. You’re wrong and here’s why. Three: So, what? I’m going to argue for the sake of arguing. Four: Classical fence sitting. On the one hand you maybe right on the other hand you may be wrong.

Here’s one I saw recently about safety management[1]. You know those five-by-five risk matrices we use – they’re rubbish. They are subjective and unscientific. They give consultants the opportunity to escalate risks to make new work or they give managers the opportunity to deescalate risk to avoid doing more work. Now, that’s not a bad provocation. 

If the author starts by accusing all consultants and managers of being manipulative bad actors, that sure is going to provoke a response. In safety management there are four pillars and one of them is safety culture. So, if there are manipulative bad actors applying the process, there's surely a poor safety culture, which makes everything else moot.

This plays into the discomfort some people have with the inevitable subjectivity of risk classification. It’s true that safety risk classification uses quantitative and qualitative methods. However, most typically quantitative methods are used to support qualitative decisions.

There's an in-built complication with any risk classification scheme. It's one reason why three-by-three risk matrices are often inadequate. When boundaries are set, there are always cases to decide for items that are marginally one side or the other of a prescribed line.
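To show the mechanics, here's a minimal sketch of a five-by-five matrix lookup. The scales, scoring and band boundaries are invented for illustration; real schemes define their own categories and cut-offs.

```python
# A minimal five-by-five risk matrix lookup. Scales, scoring and band
# boundaries are invented for illustration; real schemes differ.

SEVERITY = ["negligible", "minor", "major", "hazardous", "catastrophic"]                  # scored 1..5
LIKELIHOOD = ["extremely improbable", "improbable", "remote", "occasional", "frequent"]   # scored 1..5

def risk_band(severity: int, likelihood: int) -> str:
    """Map a (severity, likelihood) pair, each scored 1..5, to a risk band."""
    score = severity * likelihood
    if score >= 15:
        return "unacceptable"
    if score >= 8:
        return "review and mitigate"
    return "acceptable"

# The boundary problem in action: one notch of (subjective) likelihood
# moves the same hazard into a different band.
print(risk_band(severity=3, likelihood=5))  # score 15 -> unacceptable
print(risk_band(severity=3, likelihood=4))  # score 12 -> review and mitigate
```

The cliff edge between the two printed results is exactly the kind of marginal call the paragraph above describes, and it rests on judgement, not measurement.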

An assessment of safety risk is just that – an assessment. When we use the word “analysis” it’s the supporting work that is being referenced. Even an analysis contains estimations of the risk. This is particularly the case in calculations involving any kind of human action.

To say that this approach is not “scientific” is again a provocation. Science is far more than measuring phenomena. Far more than crunching numbers. It includes the judgement of experts. Yes, that judgement must be open to question. Testing and challenging is a good way of increasing the credibility of conclusions drawn from risk assessment.


[1] https://publicapps.caa.co.uk/docs/33/CAP795_SMS_guidance_to_organisations.pdf