Engineering

I know this is not a new issue to raise but it is enduring. Years go by and nothing much changes. One of the reasons that “engineering” is poorly represented in the UK is that its voice is fragmented.

I could do a simple vox pop. Knock on a random door and ask: who speaks for engineers in the UK? The likelihood is that few would give an answer, let alone name an organisation. If I asked who speaks for doctors, those in the know would say the BMA[1]. If I asked who speaks for lawyers, most would answer the Law Society[2]. I dare not ask who represents accountants.

Professional engineering institutions have an important role. That’s nice and easy to say; in fact, all the ones that are extant do say so. Supporting professional development is key to increasing access to engineering jobs. Their spokespersons, specialist groups and networking opportunities can provide visibility of the opportunities in the profession.

So, why are there so many different voices? There’s a great deal of legacy. An inheritance from bygone eras. I see lots of overlap in the aviation and aerospace industries. There are invitations in my in-box to events driven by the IET[3], IMechE, the Royal Aeronautical Society and various manufacturing, software, safety, and reliability organisations.

The variety of activities may serve specialist niches, but the overall effect is to dilute the impact the engineering community has on our society. Ever present change means that new specialist activities are arising all the time. It’s better to adapt and include these within existing technical institutions rather than invent new ones.

What’s the solution? There have been amalgamations in the past. Certainly, where there are significant overlaps between organisations, amalgamation may be the best way forward.

There’s a case for sharing facilities. Maintaining multiple separate technical libraries seems strange in the age of the connected device. Even sharing buildings needs to be explored.

Joint activities do happen, but not to the extent that could fully exploit the opportunities that exist.

If the UK wishes to increase the number of competent engineers, it’s got to re-think the proliferation of different institutions, societies, associations, groupings, and licensing bodies.

To elevate the professional status of engineering in our society we need organisations that have the scale and range to communicate and represent at all levels. Having said the above, I’m not hopeful of change. Too many vested interests are wedded to the status quo. We have both the benefits of our Victorian past and the millstone of that grand legacy.


[1] https://www.bma.org.uk/

[2] https://www.lawsociety.org.uk/en

[3] http://www.theiet.org/

Experts

The rate of increase in the power of artificial intelligence (AI) is matched by the rate of increase in the number of “experts” in the field. I’ve heard that said jokingly. Five minutes on Twitter and it’s immediately apparent that off-the-shelf opinions run from “what’s all the fuss about?” to “Armageddon is just around the corner”.

Being a bit of a stoic[1], I take the view that opinions are fine, but the question is: what’s the reality? That doesn’t mean ignoring honest speculation, but speculation should have some foundation in what’s known to be true. There are plenty of emotive opinions that are wonderfully imaginative. The problem is that they don’t help us take the best steps forward when faced with monumental changes.

Today’s news reports the retirement of Dr Geoffrey Hinton from Google. Now, there’s a body of experience in working with AI. He warns that the technology is heading towards a state where it’s far more “intelligent” than humans. He’s raised the issue of “bad actors” using AI to the detriment of us all. These seem to me valid concerns from an experienced practitioner.

For decades, the prospect of a hive mind has peppered science fiction stories with tales of catastrophe. With good reason given that mind-to-mind interconnection is something that humans haven’t mastered. This is likely to be the highest risk and potential benefit. If machine learning can gain knowledge at phenomenal speeds from a vast diversity of sources, it becomes difficult to challenge. It’s not that AI will exhibit wisdom. It’s that its acquired information will give it the capability to develop, promote and sustain almost any opinion.

Let’s say the “bad actor” is a colourful politician of limited competence with a massive ego and ambition beyond reason. Sitting alongside AI that can conjure up brilliant speeches and strategies for beating opponents, that character can become dangerous.

So, to talk about AI as the most important inflection point in generations is not hype. In that respect the rapid progress of AI is like the invention of dynamite[2]. It changed the world in both positive and negative ways. Around the world, countries have explosives laws and require licences to manufacture, distribute, store, use, and possess explosives or their ingredients.

So far, mention of the regulation of AI makes people in power shudder. Some lawmakers are talking up a “light-touch” approach. Others are hunched over a table trying to pull together the threads of a regulatory regime[3] that will accentuate the positive and eliminate the negative[4].


[1] https://dailystoic.com/what-is-stoicism-a-definition-3-stoic-exercises-to-get-you-started/

[2] https://en.wikipedia.org/wiki/Dynamite

[3] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[4] https://youtu.be/JS_QoRdRD7k

AI awakens

Artificial Intelligence (AI)[1] is with us. Give it a question and it will answer you. Do it many times, with access to many information sources and it will improve its answer to you. That seems like a computer that can act like a human. In everyday reality, AI mimics a small number of the tasks that “intelligent” humans can do and do with little effort.

AI has a future. It could be immensely useful to humanity. As with other revolutions, it could take the drudgery out of administrative tasks, simple research, and well characterised human activities. One reaction to this is to joke that – I like the drudgery. Certainly, there’s work that could be classified as better done by machine but there’s pleasure to be had in doing that work.

AI will transform many industries, but will it ever wake up[2]? Will it ever become conscious?

A machine acting human is not the same as it becoming conscious. AI mimicking humans can give the appearance of being self-aware but it’s not. Digging deep inside the mechanism it remains a computational machine that knows nothing of its own existence.

We don’t know what it is that can give rise to consciousness. It’s a mystery how it happens within our own brains. It’s not a simple matter. It’s not magic either but it is a product of millions of years of evolution.

Humans learn from our senses. A vast quantity of experiences over millennia have shaped us. Not by our own choosing but by chance and circumstances. Fortunately, a degree of planetary stability has aided this growth from simple life to the complex creatures we are now.

One proposition is that complexity and consciousness are linked. That is, consciousness in a machine may arise from billions and billions of connections and experiences. It’s an emergent behaviour that arises at some unknown threshold. As such, this proposition leaves us with a major dilemma. What if we inadvertently create conscious AI? What do we do at that moment?

Will it be an accidental event? There are far more questions than answers. No wonder there’s a call for more research[3].


[1] https://www.bbc.co.uk/newsround/49274918

[2] https://www2.deloitte.com/us/en/pages/consulting/articles/the-future-of-ai.html

[3] https://www.bbc.co.uk/news/technology-65401783.amp

Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs-up reactions that are both passionate and pragmatic. Maybe, it’s a Marmite issue[1]. Mention of the subject triggers an instant judgement. 

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early in my career, was to lead the integration of a cockpit display system into a large new helicopter[2]. New, at the time. The design team I was part of comprised people with two different professional backgrounds. One group had an engineering background, like me, and the other had qualifications in psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved away from purely mechanical interactions with aircraft, the balance of flight control has come to be shared between pilot and aircraft systems. There’s no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There’s a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that’s a brave statement. We are generations away from a world where sentient machines produce ever better machines. It’s the stuff of sci-fi.

As we have tried to make ever more capable machines, failures have been a normal part of evolution. Those cycles of attempt and failure will need to run into the billions and billions before human capabilities are fully matched. Yes, I know that’s an assertion, but it has taken humans more than a million years to get to the point of having this discussion. That’s with our incredible brains.

What AI can do well is enhance human capabilities[3]. Let’s say that, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained for, and not easily understood. This is where the benefits and speed of AI can add a lot. An aircraft system using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to a flight crew. In time-critical events AI can help.

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.


[1] https://www.macmillandictionary.com/dictionary/british/marmite_2

[2] https://en.wikipedia.org/wiki/AgustaWestland_AW101

[3] https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

First Encounter

My first encounter with what could be classed as early Artificial Intelligence (AI) was a Dutch research project. It was around 2007. Let’s first note, a mathematical model isn’t pure AI, but it’s an example of a system that is trained on data.

It almost goes without saying that learning from accidents and incidents is a core part of the process to improve aviation safety. A key industry and regulatory goal is to understand what happened when things go wrong and to prevent a repetition of events.

Civil aviation is an extremely safe mode of transport. That said, because of the size of the global industry there are enough accidents and incidents worldwide to provide useful data on the historic safety record. Despite significant pre-COVID pandemic growth of civil aviation, the number of accidents is so low that further reduction is proving hard to win.

What if a system was developed that could look at all the historic aviation safety data and make a prediction as to what accidents might happen next?

The first challenge is the word “all” in that compiling such a comprehensive record of global aviation safety is a demanding task. It’s true that comprehensive databases do exist but even within these extremely valuable records there are errors, omissions, and summary information. 

There’s also the kick-back that is often associated with record keeping. A system that demands detailed records of even the most minor incident can be burdensome. Yes, such record keeping has admirable objectives, but the “red tape” wrapped around those objectives can have negative effects.

Looking at past events has only one aim: to act now to prevent aviation accidents in the future. Once a significant, comprehensive database exists, analysis can provide simple indicators that give clues as to what might happen next. Even basic mathematics can give us a trend line drawn through a set of key data points[1]. It’s effective but crude.
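That crude-but-effective approach amounts to fitting a straight line through the historic numbers. A minimal sketch, using invented yearly accident counts (the figures below are illustrative, not real safety statistics):

```python
import numpy as np

# Hypothetical yearly accident counts (illustrative only, not real data)
years = np.array([2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018])
accidents = np.array([17, 15, 16, 13, 12, 13, 10, 9, 9])

# Fit a straight trend line: accidents ~ slope * year + intercept
slope, intercept = np.polyfit(years, accidents, 1)

print(f"Trend: {slope:.2f} accidents per year")
print(f"Naive projection for 2019: {slope * 2019 + intercept:.1f}")
```

A downward slope suggests improvement, but the projection says nothing about *which* accidents might happen next; that is the gap prognostics tries to fill.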

What if a prediction could take on-board all the global aviation safety data available, with the knowledge of how civil aviation works and mix it in such a way as to provide reliable predictions? This is prognostics. It’s a bit like the Delphi oracle[2]. The aviation “oracle” could be consulted about the state of affairs in respect of aviation safety. Dream? – maybe not.

The acronym CAT normally refers to large commercial air transport (CAT) aeroplanes. What this article is about is a Causal model for Air Transport Safety (CATS)[3]. This research project could be called an early use of “Big Data” in aviation safety work. However, as I understand it, the original aim was to make prognostics a reality.

Using Bayesian network-based causal models it was theorised that a map of aviation safety could be produced. Then it could be possible to predict the direction of travel for the future.
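The CATS model itself was a large, expert-built network, but the basic mechanics can be sketched with a toy two-node example. All probabilities here are invented for illustration: a precursor condition influences the chance of an incident, the law of total probability gives the forward prediction, and Bayes’ rule updates belief in the precursor once an incident is observed.

```python
# Toy causal model sketch: precursor -> incident (probabilities are invented)
p_precursor = 0.05       # prior: precursor condition present on a flight
p_inc_given_pre = 0.010  # incident probability when precursor is present
p_inc_given_no = 0.001   # incident probability otherwise

# Forward prediction: overall incident probability (total probability)
p_incident = (p_inc_given_pre * p_precursor
              + p_inc_given_no * (1 - p_precursor))

# Diagnosis: given an incident occurred, how likely was the precursor? (Bayes)
p_pre_given_inc = p_inc_given_pre * p_precursor / p_incident

print(f"P(incident) = {p_incident:.5f}")
print(f"P(precursor | incident) = {p_pre_given_inc:.3f}")
```

Scale that reasoning up to thousands of nodes covering crew, aircraft, and environment, and you have the flavour of a causal safety map.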

This type of quantification has a lot of merit. It has weaknesses, in that the Human Factor (HF) often defies prediction. However, as AI advances, maybe causal modelling ought to be revisited. New off-the-shelf tools could be used to look again at the craft of prediction.


[1] https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Air_safety_statistics_in_the_EU

[2] https://www.history.com/topics/ancient-greece/delphi

[3] https://open.overheid.nl/documenten/ronl-archief-d5cd2dc7-c53f-4105-83c8-c1785dcb98c0/pdf

Pause

An open letter has been published[1]. Not for the first time. It asks those working on Artificial Intelligence (AI) to take a deep breath and pause their work. It’s signed by AI experts and interested parties, like Elon Musk. This is a reaction to the competitive race to launch ever more powerful AI[2]. With each new technology launch, it takes fewer and fewer years to reach a billion users.

If the subject were genetic manipulation, the case for a cautious step-by-step approach would be easily understood. However, the digital world, and its impact on our society’s organisation, isn’t viewed as being as important as genetics. Genetically Modified (GM) crops got people excited and anxious. An artificially modified social and political landscape doesn’t seem to concern people quite so much. It may be that the basis for this ambivalence is a false view that we are more in control of one than the other. It’s more likely this ambivalence stems from a lack of knowledge.

One response to the open letter[3] I saw was thus: “A lot of fearmongering luddites here! People were making similar comments about the pocket calculator at one time!” This is to totally misunderstand what is going on with the rapid advance of AI. I think the impact on society of the proliferation of AI will be greater than that of the invention of the internet. It will change the way we work, rest and play. It will do it at remarkable speed. We face an unprecedented challenge.

I’m not for one moment advocating a regulatory regime driven by societal puritans. The open letter is not proposing a ban. What’s needed is a regulatory regime that can moderate aggressive advances so that knowledge can be acquired about the impacts of AI.

Yesterday, a government policy was launched in the UK. The problem with saying that there will be no new regulators, and that regulators will need to act within existing powers, is obvious. It’s a diversion of resources away from existing priorities to address challenging new priorities. That, in and of itself, is not an original regulatory dilemma. It could be said that’s why we have sewage pouring into rivers up and down the UK.

In an interview, Conservative Minister Paul Scully MP mentioned sandboxing as a means of complying with policy. This is to create a “safe space” to try out a new AI system before launching it on the world. It’s a method of testing and trials that is useful for gaining an understanding of conventional complex systems. The reason this is not easily workable for AI is that it’s not possible to build enough confidence that AI will be safe, secure and perform its intended function without running it live. For useful AI systems, even the slightest change in the start-up conditions or training can produce drastically different outcomes. A live AI system can be like shifting sand. It will build up a structure to solve problems, and do it well, but the characteristics of its internal workings will vary significantly from one similar system to another. Thus, an AI system’s workings as it is run through a sandbox exercise may be unlike the same system’s workings running live. Which leads to the question: what confidence can a regulator, with an approval authority, have in a sandbox version of an AI system?

Pause. Count to ten and work out what impacts we must avoid. And how to do it.

Good enough

It’s not a universal rule. What is? There are a million and one ways that both good and bad things can happen in life. A million is way under any genuine calculation. Slight changes in decisions that are made can head us off in a completely different direction. So much fiction is based on this reality.

Yes, I have watched “Everything Everywhere All at Once”[1]. I’m in two minds about my reaction. There’s no doubt that it has an original take on the theory of multiple universes and how they might interact. It surprised me just how much comedy forms the core of the film. There are moments when the pace of the story left me wondering where on earth it was going. Overall, it is an enjoyable movie and it’s great to see such originality and imagination.

This strange notion of multiple universes, numbered beyond count, has an appeal, but it’s more than a headful. What I mean is that trying to imagine what it looks like, if such a thing is possible, is almost hopeless. What I liked about the movie is that small differences are more probable and large differences are far less probable. So, to get to worlds that are radically different from where you are, it’s necessary to do something extremely improbable.

Anyway, that’s not what I’m writing about this morning. I’ve just been reading a bit about Sir Robert Alexander Watson-Watt. The man credited with giving us radar technology.

“Perfect is the enemy of good” is a dictum that has several attributions. It keeps coming up. Some people celebrate those who strive for perfection. However, in human affairs, perfection is an extremely improbable outcome in most situations. There’s a lot of talent and perspiration needed to jump from average to perfect in any walk of life.

What the dictum is shorthand for is that throwing massive amounts of effort at a problem can prevent a good outcome. Striving for perfection, faced with our human condition, can be a negative.

That fits well with me. My experience of research, design and development suggested the value of incremental improvement and not waiting for perfect answers to arise from ever more work. It’s the problem with research funding. Every paper calls for more research to be done.

In aviation safety work the Pareto principle is invaluable. It can be explained by a ghastly Americanism. Namely, let’s address the “low hanging fruit” first. In other words, let’s make the easiest improvements, those that produce the biggest differences, first.
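A minimal sketch of that prioritisation, using made-up occurrence categories and counts (the factor names and figures below are invented for illustration): rank the contributing factors and find the vital few that cover roughly 80% of events.

```python
# Invented occurrence counts per contributing factor (illustrative only)
factors = {
    "unstable approach": 120,
    "runway incursion": 45,
    "loss of control": 30,
    "fuel management": 15,
    "bird strike": 10,
    "other": 5,
}

# Rank factors from most to least frequent
ranked = sorted(factors.items(), key=lambda kv: kv[1], reverse=True)
total = sum(factors.values())

# Walk down the ranking until at least ~80% of events are covered
cumulative = 0
vital_few = []
for name, count in ranked:
    if cumulative / total >= 0.8:
        break
    vital_few.append(name)
    cumulative += count

print("Address first:", vital_few)
```

The handful of factors returned is where effort produces the biggest safety return, which is exactly the “low hanging fruit” argument in numbers.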

I’m right on-board with Robert Watson-Watt and his “cult of the imperfect”. He’s quoted saying: “Give them the third best to go on with; the second best comes too late, the best never comes”. It’s to say do enough of what works now without agonising over all the other possible better ways. Don’t procrastinate (too much).


[1] https://www.imdb.com/title/tt6710474/

Radio on the hill

We take radio for granted. I’m listening to it, now. That magic of information transferred through the “ether[1]” at the speed of light and without wires. This mystery was first unravelled in the 19th century. Experimentation and mathematics provided insights into electromagnetics.

The practical applications of radio waves were soon recognised. The possibility of fast information transfer between A and B had implications for the communications and the battlefield.

It’s unfortunate to say that warfare often causes science to advance rapidly. The urgency to understand more is driven by strong needs. That phrase “needs must” comes to mind. We experienced this during the COVID pandemic. Science accelerated to meet the challenge.

It wasn’t until after he failed as an artist that Samuel Morse transformed communications by inventing the telegraph with his dots and dashes. There’s a telegraph gallery with a reproductions of Morse’s early equipment at the Locust Grove Estate[2] in Poughkeepsie. I’d recommend it.

The electromagnetic telegraph used wires to connect A and B. Clearly, that’s not useful if the aim is to connect an aircraft with the ground.

The imperative to make air-ground communication possible came from the first world war. Aviation’s role in warfare came to the fore. Not just in surveillance of the enemy but offensive actions too. Experimentation with airborne radio involved heavy batteries and early spark transmitters. Making such crude equipment usable was an immense challenge. 

Why am I writing about this subject? This week, on a whim, I visited the museum at Biggin Hill. The Biggin Hill Museum[3] tells the story of the pivotal role played by the fighter station in the second world war. The lesser-known story is the origins of the station.

It’s one of Britain’s oldest aerodromes and sits high up on the hills south of London. Biggin Hill is one of the highest points in that area, rising to over 210 metres (690 ft) above sea level. 

Its transformation from agricultural fields to a research station (south camp) took place in 1916 and 1917. Its purpose was to explore the scientific and technical innovations of that time. Wireless in particular. 141 Squadron of the Royal Flying Corps (RFC) was based at Biggin Hill and equipped with Bristol Fighters. The RFC was the first to make use of wireless telegraphy to assist with artillery targeting.

These were the years before the Royal Air Force (RAF) was formed.

100 years later, in early 2019, the Biggin Hill Museum opened its doors to the public. It’s a small museum but well worth a visit. I found the stories of the early development of airborne radio communications fascinating. So much we take for granted had to be invented, tested, and developed from the most elemental components.

POST 1: Now, I wish I’d been able to attend this lecture – Isle of Wight Branch: The Development of Airborne Wireless for the R.F.C. (aerosociety.com)

POST 2: The bigger story marconiheritage.org/ww1-air.html


[1] https://www.britannica.com/science/ether-theoretical-substance

[2] https://www.lgny.org/home

[3] https://bigginhillmuseum.com/

Digital toxicity

There’s a tendency to downplay the negative aspects of the digital transition that’s happening at pace. Perhaps it’s the acceptance of the inevitability of change that leaves only hushed voices of objection.

A couple of simple changes struck me this week. One was my bank automatically moving me to an on-line statement and the other was a news story about local authorities removing pay machines from car parks on the assumption everyone has a mobile phone.

With these changes there’s a high likelihood that difficulties are going to be caused for a few people. Clearly, the calculation of the banks and local authorities is that the majority rules. Exclusion isn’t their greatest concern but saving money is high on their list of priorities.

The above aside, my intention was to write about more general toxic impacts of the fast-moving digital transition. Now, please don’t get me wrong. In most situations such a transition has widespread benefits. What’s of concern is the few mitigations for any downsides.

Let’s list a few negatives that may need more attention.

Addiction. With social media this is unquestionable[1]. After all, digital algorithms are developed to get people engaged and keep them engaged for as long as possible. It’s the business model that brings in advertising revenues. There’s FOMO too: the fear of missing out on something new or novel that others might see.

Attention. Rapidly swiping a touch screen to move from image to image, or video to video, encourages less attention to be given to any one piece of information. What research there is shows a general decline in attention span[2] as a characteristic of being subject to increasing amounts of easily available information.

Adoration. Given that so many digital functions are provided with astonishing accuracy, availability, and speed there’s a natural inclination to trust their output. When that trust is justifiable for a high percentage of the time, the few times information is in error can easily be ignored or missed. This can lead to people defending or supporting information that is wrong[3] or misleading.

It’s reasonable to say there are downsides with any use of technology. That said, it’s as well to try to mitigate those that are known about and understood. The big problem is the cumulative effect of the downsides. This can increase fragility and vulnerability of the systems that we all depend upon.

If digital algorithms were medicines or drugs, there would be a whole array of tests conducted before their public release. Some would be strongly regulated. I’m not saying that’s the way to go but it’s a sobering thought.


[1] https://www.theguardian.com/global/2021/aug/22/how-digital-media-turned-us-all-into-dopamine-addicts-and-what-we-can-do-to-break-the-cycle

[2] https://www.kcl.ac.uk/news/are-attention-spans-really-collapsing-data-shows-uk-public-are-worried-but-also-see-benefits-from-technology

[3] https://www.bbc.co.uk/news/business-56718036

Comms

The long history of data communications between air and ground has had numerous stops and starts. It’s not new to use digital communications while flying around the globe. That said, it has not been cheap, and traditional systems have evolved only slowly. We may think Controller Pilot Data Link Communications (CPDLC)[1] is quite whizzy. It’s not. It belongs to the Windows 95 generation. Clunky messages and limited applications.

The sluggishness of the adoption of digital communications in commercial aviation has several reasons. For one, standardised, certified, and maintainable systems and equipment have been expensive. It’s not just the purchase and installation but the connection charges that mount up.

Unsurprisingly, aircraft operators have moved cautiously unless they can identify an income stream to be developed from airborne communication. That’s one reason why passengers accessing the internet from their seats can have better connections than the two crew in the cockpit.

Larger nations’ military flyers don’t have a problem spending money on airborne networking. For them it’s an integral part of being able to operate effectively. In the civil world, each part of the aviation system must make an economic contribution or be essential to safety to make the cut.

The regulatory material applicable to Airborne Communications, Navigation and Surveillance (CS-ACNS)[2] can be found in publications coming from the aviation authorities. This material has the purpose of ensuring a high level of safety and aircraft interoperability. Much of this generally applicable material has evolved slowly over the last 30-years.

Now, it’s good to ask: is this collection of legacy aviation systems going to be changed by the new technologies that are rapidly coming on-stream this year? Or are the current mandatory equipage requirements likely to stay the same but be greatly enhanced by cheaper, faster, and lower-latency digital connections?

This year, Starlink[3] is offering high-speed, in-flight internet connections with global connectivity. This company is not the only one developing Low Earth Orbit (LEO)[4] satellite communications. There are technical questions to be asked in respect of safety, performance, and interoperability, but it’s a good bet that these new services will be very capable and, what’s more, not so expensive[5].

It’s time for airborne communications to step into the internet age.

NOTE: The author was a part of the EUROCAE/RTCA Special Committee 169 that created Minimum Operational Performance Standards for ATC Two-Way Data Link Communications back in the 1990s.

POST 1: Elon Musk’s Starlink Internet Service Coming to US Airlines; Free WiFi (businessinsider.com)

POST 2: With the mandate of VDLM2 we evolve at the pace of a snail. Internet Protocol (IP) Data Link may not be suitable for all uses but there’s a lot more that can be done.


[1] https://skybrary.aero/articles/controller-pilot-data-link-communications-cpdlc

[2] https://www.easa.europa.eu/en/document-library/easy-access-rules/easy-access-rules-airborne-communications-navigation-and

[3] https://www.starlink.com/

[4] https://www.esa.int/ESA_Multimedia/Images/2020/03/Low_Earth_orbit

[5] https://arstechnica.com/information-technology/2022/10/starlink-unveils-airplane-service-musk-says-its-like-using-internet-at-home/