Fake/Real?

So, why might artificial intelligence (AI) be so dangerous in a free society?

Democracy depends upon information being available to voters. Ideally, this would be legal, decent, and honest information. All too often the letter of the law may be followed whilst shaping a message to maximise its appeal to potential supporters. Is it honest to leave out chunks of embarrassing information for the one nugget that makes a politician look good? We make our own judgement on that one. We make a judgement assuming that outright lying is a rare case.

During key elections news can travel fast and seemingly small events can be telescoped into major debacles. I’m reminded of the remark made by Prime Minister Gordon Brown[1] when he thought the media’s microphones were dead. In 2010, when an aide asked: “What did she say?” Gordon Brown was candid in his reply. It’s an occasion when the honest thoughts of a PM on the campaign trail popped into the public domain and livened up that election coverage considerably.

What’s concerning about AI[2] is that, in the hands of a “bad actor,” such events could be faked[3] extremely convincingly. Since the fast pace of election campaigning never leaves enough time for in-depth technical investigations, there’s a chance that fake events can sway people before they are uncovered. The time between occurrence and discovery need only be a few days. Deepfakes are moving from amateur student pranks to the tools of propagandists.

Misinformation happens now, you might say. Well, yes it does, and we do need people to fact-check claims and counter claims on a regular basis. However, we still depend on simple techniques, like a reporter or member of the public asking a question. It’s rather basic in format.

This leaves the door open for AI to be used to produce compelling fakes. Sometimes, all it needs is to inject or eliminate one word from a recording or live event. What’s new is the accuracy and speed with which complex algorithms can provide seamless continuity. It can be said that we are a cynical lot. For all the protests of fakery that a politician may make after an exposure, there will be plenty of people who will not accept any subsequent debunking.

My example is but a simple one. There’s a whole plethora of possibilities when convincing fake pictures, audio and videos are only a couple of keyboard strokes away.

Regulatory intervention by lawmakers may not be easy but it does need some attention. In terms of printed media, that is, election leaflets, there are strict rules. The same goes for party political broadcasts.

Being realistic about the risks posed by technology is not to shut it down altogether. No, let’s accept that it will become part of our lives. At the same time, using that technology for corrupt purposes obviously needs to be stamped on. Regulatory intervention is a useful way of addressing heightened risks. Some of our 19th century assumptions about democracy need a shake-up. 


[1] https://www.independent.co.uk/news/uk/politics/bigotgate-gordon-brown-anniversary-gillian-duffy-transcript-full-read-1957274.html

[2] https://edition.cnn.com/2023/05/16/tech/sam-altman-openai-congress/index.html

[3] https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html

Engineering

I know this is not a new issue to raise but it is enduring. Years go by and nothing much changes. One of the reasons that “engineering” is poorly represented in the UK is that its voice is fragmented.

I could do a simple vox pop. Knock on a random door and ask: who speaks for engineers in the UK? The likelihood is that few would give an answer, let alone name an organisation. If I asked who speaks for doctors, those in the know would say the BMA[1]. If I asked who speaks for lawyers, most would answer the Law Society[2]. I dare not ask who represents accountants.

Professional engineering institutions have an important role. That’s nice and easy to say; in fact, all the ones that are extant do say so. Supporting professional development is key to increasing access to engineering jobs. Their spokespersons, specialist groups and networking opportunities can provide visibility of the opportunities in the profession.

So, why are there so many different voices? There’s a great deal of legacy. An inheritance from bygone eras. I see lots of overlap in the aviation and aerospace industries. There are invitations in my in-box to events driven by the IET[3], the IMechE, the Royal Aeronautical Society and various manufacturing, software, safety, and reliability organisations.

The variety of activities may serve specialist niches, but the overall effect is to dilute the impact the engineering community has on our society. Ever present change means that new specialist activities are arising all the time. It’s better to adapt and include these within existing technical institutions rather than invent new ones.

What’s the solution? There have been amalgamations in the past. Certainly, where there are significant overlaps between organisations then amalgamation may be the best way forward.

There’s the case for sharing facilities. Having separate multiple technical libraries seems strange in the age of the connected device. Even sharing buildings needs to be explored.

Joint activities do happen but not to the extent that could fully exploit the opportunities that exist.

If the UK wishes to increase the number of competent engineers, it’s got to re-think the proliferation of different institutions, societies, associations, groupings, and licensing bodies.

To elevate the professional status of engineering in our society we need organisations that have the scale and range to communicate and represent at all levels. Having said the above, I’m not hopeful of change. Too many vested interests are wedded to the status quo. We have both the benefits of our Victorian past and the millstone of that grand legacy.


[1] https://www.bma.org.uk/

[2] https://www.lawsociety.org.uk/en

[3] http://www.theiet.org/

Deregulation

There’s nothing wrong with making an argument for deregulation. What’s absurd is to make that argument as an unchallengeable dogma. It’s the irrationality of saying that deregulation is, de facto, good and regulation bad. This kind of unintelligent nonsense does permeate a particular type of right-wing political thinking. It pops its head up in a lot of Brexiters’ utterances. For advocates of Brexit, the great goal is to throw away rules and lower standards. Mostly, this is for financial gain.

Let’s take some simple examples. The reasons for rules and regulations can often be found in recent history. Hazards are recognised and action is taken.

There’s still lead paint to be found in many older houses. There was a time when such paint was used on children’s toys. Toy safety has been a confusing area of law, and there have been several sets of regulations since the 1960s. From our current perspective this past laxness seems insane, but such lead paint mixtures were commonplace. In fact, all sorts of toxic chemicals have been used in widely used paints.

I remember working in one factory building where a survey was done of the surrounding grounds. Outside certain windows there were small fluorescent flags placed in the grass verges. They marked places where minor amounts of radiation had been detected. This came from discarded paint brushes and tins that had accumulated in the war years. At that time radioactive luminescent paint was used to paint aircraft instrument dials.

Any argument for the deregulation of toxic chemicals in commonly used paints should be quashed instantly. However, some deregulation fanatics are only too happy to endorse a loosening of the rules that protect the public from toxic chemicals.

One result of the loosening of public protection is often to put greater profits in the hands of unscrupulous industrialists. Across the globe there are numerous case studies of this sad folly. Newspapers and political parties that push the line that rules, regulations and regulators are, by their very nature, crushing our freedoms are as bad as those unscrupulous industrialists.

Yes, there’s a case to be made for pushing back against over-regulation. There are risks we are prepared to take where the risks are low and the benefits are large. This is a matter for intelligent debate and not throwing around mindless slogans. We should not be cowed by loud voices from small corners of society intent on tearing down decades of learning and sound practical laws. I could come up with an encyclopaedic list of examples. Opponents rarely, if ever, want to address a particular case since it’s much easier for them to thunder off sweeping assertions. Beware these siren voices.

NOTE: The Toys (Safety) Regulations 2011 implemented the requirements of Directive 2009/48/EC, whose purpose is to ensure a high level of toy safety.

Experts

The rate of increase in the power of artificial intelligence (AI) is matched by the rate of increase in the number of “experts” in the field. I’ve heard that jokingly said. Five minutes on Twitter and it’s immediately apparent that off-the-shelf opinions run from “what’s all the fuss about?” to “Armageddon is just around the corner.”

Being a bit of a stoic[1], I take the view that opinions are fine, but the question is: what’s the reality? That doesn’t mean ignoring honest speculation, but that speculation should have some foundation in what’s known to be true. There are plenty of emotive opinions that are wonderfully imaginative. The problem is that they don’t help us take the best steps forward when faced with monumental changes.

Today’s report is of the retirement of Dr Geoffrey Hinton from Google. Now, there’s a body of experience in working with AI. He warns that the technology is heading towards a state where it’s far more “intelligent” than humans. He’s raised the issue of “bad actors” using AI to the detriment of us all. These seem to me valid concerns from an experienced practitioner.

For decades, the prospect of a hive mind has peppered science fiction stories with tales of catastrophe. With good reason given that mind-to-mind interconnection is something that humans haven’t mastered. This is likely to be the highest risk and potential benefit. If machine learning can gain knowledge at phenomenal speeds from a vast diversity of sources, it becomes difficult to challenge. It’s not that AI will exhibit wisdom. It’s that its acquired information will give it the capability to develop, promote and sustain almost any opinion.

Let’s say the “bad actor” is a colourful politician of limited competence with a massive ego and ambition beyond reason. Sitting alongside AI that can conjure up brilliant speeches and strategies for beating opponents, that character can become dangerous.

So, to talk about AI as the most important inflection point in generations is not hype. In that respect the rapid progress of AI is like the invention of dynamite[2]. It changed the world in both positive and negative ways. Around the world countries have explosives laws and require licences to manufacture, distribute, store, use, and possess explosives or their ingredients.

So far, mention of the regulation of AI makes people in power shudder. Some lawmakers are bigging-up a “light-touch” approach. Others are hunched over a table trying to put together threads of a regulatory regime[3] that will accentuate the positive and eliminate the negative[4].


[1] https://dailystoic.com/what-is-stoicism-a-definition-3-stoic-exercises-to-get-you-started/

[2] https://en.wikipedia.org/wiki/Dynamite

[3] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[4] https://youtu.be/JS_QoRdRD7k

AI awakens

Artificial Intelligence (AI)[1] is with us. Give it a question and it will answer you. Do it many times, with access to many information sources and it will improve its answer to you. That seems like a computer that can act like a human. In everyday reality, AI mimics a small number of the tasks that “intelligent” humans can do and do with little effort.

AI has a future. It could be immensely useful to humanity. As with other revolutions, it could take the drudgery out of administrative tasks, simple research, and well characterised human activities. One reaction to this is to joke that – I like the drudgery. Certainly, there’s work that could be classified as better done by machine but there’s pleasure to be had in doing that work.

AI will transform many industries, but will it ever wake up[2]? Will it ever become conscious?

A machine acting human is not the same as it becoming conscious. AI mimicking humans can give the appearance of being self-aware but it’s not. Digging deep inside the mechanism it remains a computational machine that knows nothing of its own existence.

We don’t know what it is that can give rise to consciousness. It’s a mystery how it happens within our own brains. It’s not a simple matter. It’s not magic either but it is a product of millions of years of evolution.

Humans learn from our senses. A vast quantity of experiences over millennia have shaped us. Not by our own choosing but by chance and circumstances. Fortunately, a degree of planetary stability has aided this growth from simple life to the complex creatures we are now.

One proposition is that complexity and consciousness are linked. That is, consciousness in a machine may arise from billions and billions of connections and experiences. It’s an emergent behaviour that arises at some unknown threshold. As such this proposition leaves us with a major dilemma. What if we inadvertently create conscious AI? What do we do at that moment?

Will it be an accidental event? There are far more questions than answers. No wonder there’s a call for more research[3].


[1] https://www.bbc.co.uk/newsround/49274918

[2] https://www2.deloitte.com/us/en/pages/consulting/articles/the-future-of-ai.html

[3] https://www.bbc.co.uk/news/technology-65401783.amp

Going backwards

I find it difficult to believe anyone who gets into their sixties and says that they have never had an accident. My latest isn’t original, nor was it without minor consequence. Yesterday morning a kind nurse gave me a bandage to hold two fingers straight.

At the start of the month my gardening efforts amounted to emptying pots, replanting pots, and moving pots. I’ve got far too many pots. Plants that had not survived the winter freeze were unceremoniously sent to the compost heap. Plants that looked like the spring was bringing them back to life were given a bit of pampering.

Sitting in the shade, one large square container held a small fir tree. The tree wasn’t in the best of health but remained well worth saving. What I was unhappy about was its position on the patio out the back of the house. So, it occurred to me that it was logical to move the container to a spot where the tree might flourish in future. The large square container was made of fibreglass but had the appearance of grey slate. It had been standing unhindered in one place for well over a year.

Now, you would think an engineer, like me, would know something about friction. Or in this case stiction, that is, the friction that tends to stop stationary surfaces from easily moving. There were two ways I could have attempted to move this heavy garden container: push or pull. I opted to pull, and that was my big mistake.
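The physics behind that mistake is easy enough to sketch. Using invented figures (the planter’s weight and the friction coefficients below are my guesses, not measurements), a few lines of Python show why breaking stiction takes noticeably more force than keeping a container sliding, and why it lets go so suddenly:

```python
# Static ("stiction") vs kinetic friction for a heavy planter.
# All numbers are illustrative assumptions, not measured values.
g = 9.81           # gravitational acceleration, m/s^2
mass = 60.0        # kg, assumed planter weight
mu_static = 0.6    # assumed static coefficient, fibreglass on paving
mu_kinetic = 0.4   # assumed kinetic coefficient, once sliding

normal = mass * g                      # normal force, N
force_to_start = mu_static * normal    # force needed to break stiction
force_to_keep = mu_kinetic * normal    # force needed once it's moving

# The gap between the two is the "snatch" that throws you backwards:
# the instant the container starts to move, the resistance drops.
print(f"to start moving: {force_to_start:.0f} N")
print(f"to keep moving:  {force_to_keep:.0f} N")
```

The moment static friction is overcome, the resisting force drops sharply, which is exactly why a hard pull can send you flying.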

I crouched down and with both hands pulled hard. The container was stubborn. Again, I pulled hard. Then without warning the side of the container gave way, and I went flying. Afterwards, I wish I had paid attention to what was behind me. I hit the ground awkwardly.

When adverse events like this happen, it’s as if time momentarily slows down. Naturally, it doesn’t, but it feels like an out-of-body experience when there’s nothing you can do to stop the inevitable happening. That split second ended with me lying on my right side with my hand extended two steps down the patio steps. My fall was broken by my backside and my right-hand middle finger.

Oh dear, this is going to hurt – that was my first thought as I lay on the hard ground looking back at the roots of the tree I was trying to save. Second thought was – why did I do that?

Knowledge with hindsight can be a universal blight. Sure, I wouldn’t have done what I did if I’d taken the time to think more deeply about all the possible consequences linked to moving heavy objects. In this case there was only me sitting on the ground painfully recounting what happened. No one to say: are you crazy? You shouldn’t have pulled that part of that container.

Accidents are a part of life. Better they be minor. Better we learn from them every time.

Working hard for the money

What goes wrong with research spending? It’s a good question to ask. In some ways research spending is like advertising spending. “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”[1] Globally, billions are spent on advertising, so you might say it must be working. In fact, far more is spent on advertising than is ever available for research in the aviation and aerospace world.

Research spending is a precious asset because it is finite. Even so, a great deal of research spending is lost on activities that deliver little or no benefit. It’s true governments, institutions and industry don’t often put up funds for vague and imprecise aspirations or outlandish predictions, but money nevertheless goes down a sinkhole on far too many occasions.

A reluctance to take tough decisions, or at the other extreme of the spectrum a relish for disruption, plagues research funding decision making. Bad projects can live long lives and good projects get shut down before their time. My observations are that these are some of the cases that crop up all too often across the world.

Continuing to service infrastructure that cost a great deal to set up. It’s the classic problem of having spent large sums of money on something, whereby the desperation to see a benefit encourages more spending. Nobody likes to admit defeat or that their original predictions were way off the mark.

Virtuous circles are difficult to challenge. For example, everyone wants to see a more efficient and sustainable use of valuable airspace, therefore critics of spending towards that objective are not heard. That is even if substantial spending is misdirected or hopelessly optimistic.

Glamourous and sexy subjects, often in the public limelight, get a leg-up when it comes to the evaluation of potential research projects. Politicians love press photographs that associate them with something that looks like a solution in the public mind. Academics are no different in that respect.

Behold unto the gurus! There are conferences and symposia where ideas are hammered home by persuasive speakers and charismatic thinkers. Amongst these forums there are innovative ideas but also those that get more consideration than they warrant.

Narrowly focused recommendations can distort funding decision making. With the best of intent, an investigation or study group might highlight a deficiency that needs work, but it sits in a distinct niche of interest. It can be a push in the opposite direction to a Pareto analysis[2]: effort flows to the trivial many rather than the vital few.
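For anyone unfamiliar with the technique, a Pareto analysis simply ranks causes by frequency and concentrates effort on the “vital few” that account for most of the occurrences. A minimal sketch in Python, using made-up incident categories and counts purely for illustration:

```python
# Hypothetical incident causes and occurrence counts (invented data).
causes = {
    "unstable approach": 45,
    "runway incursion": 20,
    "fuel mismanagement": 15,
    "bird strike": 10,
    "paperwork error": 6,
    "other": 4,
}

total = sum(causes.values())

# Rank causes from most to least frequent.
ranked = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)

# Walk down the ranking until 80% of occurrences are covered:
# these are the "vital few" a Pareto analysis says to focus on.
vital_few, cumulative = [], 0
for cause, count in ranked:
    vital_few.append(cause)
    cumulative += count
    if cumulative / total >= 0.8:
        break

print(vital_few)
```

With these invented numbers, three of the six categories cover 80% of occurrences; narrowly focused funding recommendations risk chasing the other three instead.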

Highlighting these points is easier than fixing the underlying problems. It’s a good start to be aware of them before pen meets paper and a contract is signed.


[1] Statement on advertising, credited to both John Wanamaker (1838–1922) and Lord Leverhulme (1851–1925).

[2] https://asq.org/quality-resources/pareto

Light touch folly

Light touch regulation. Now, there’s a senseless folly. It’s a green light to bad actors wherever they operate. It’s like building a medieval castle’s walls half as thick as planned to save money, in the belief that enemies are too stupid to work it out. Saying that the public good is far less important than the speed of developments is unwise, to say the least.

The INTERNET arrived in the UK in the late 1980s. Now, it seems strange to recount. Clunky Personal Computers (PCs) and basic e-mail were the height of sophistication as we moved from an office of typewriters and Tipp-Ex to the simple word processor[1]. Generations will marvel at the primitive nature of our former working lives. Getting scissors and cutting out paper text and pasting it into a better place in a draft document. Tippexing out errors and scribbling notes in the spaces between sentences. Yet, that’s what we did when first certifying many of the commercial airliners in regular use across the globe (Boeing 777, Airbus A320). Desktop computers took centre stage early in the 1990s, but administrations were amid a transition. Clickable icons hit screens in 1990. Gradually and progressively new ways of working evolved.

Microsoft Windows 95 and the INTERNET were heralded as the dawn of a new age. Not much thought was given to PCs being used for criminal or malicious purposes. No more thought than the use of a typewriter to commit crime. That doesn’t mean such considerations were ignored it just means that they were deemed a lower-level importance.

In 2023, every day there’s a new warning about scammers. Even fake warnings about scammers coming from scammers with the aim of scamming. Identifying who’s real and who’s fake is becoming ever more difficult. Being asked to update subscriptions that were never opened in the first place is a good indicator that there’s some dirty work afoot. Notices that accounts are about to be blocked, referring to accounts that don’t exist, is another.

In 30 years the INTERNET has taken on the good and bad of the greater world. It hasn’t become a safer place. In fact, it’s become a bit like the Wild West[2].

Our digital space continues to evolve but has nowhere near reached its potential. It’s like those great western plains where waggons headed out looking for rich new lands. In the towns along the way the shop fronts are gleaming and inviting, but if you look around the back there’s a desperate attempt to keep bad actors at bay.

Only a fraction of the suspicious emails, texts, and messages get reported. People unconsciously pile up a digital legacy and rarely clean out the trash that accumulates. A rich messiness of personal information can lie hidden to the eyes but just below the digital surface.

When politicians and technocrats talk of “light touch regulation” it’s as if none of this matters. In the race to be first in technology, public protection is given a light touch. This can’t be a good way to go.


[1] Still available – Tipp-Ex Rapid, Correction Fluid Bottle, High Quality Correction Fluid, Excellent Coverage, 20ml, Pack of 3, white.

[2] https://en.wikipedia.org/wiki/American_frontier

Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs up reactions that are both passionate and pragmatic. Maybe it’s a Marmite issue[1]. Mention of the subject triggers an instant judgement.

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early on in my career, was to lead the integration of a cockpit display system into a large new helicopter[2]. New, at the time. The design team I was part of comprised people with two different professional backgrounds. One had an engineering background, like me, and the other had qualifications in psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved on from purely mechanical interactions with aircraft, the balance of flight control has come to be shared between pilot and aircraft systems. There’s no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There’s a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that’s a brave statement. We are generations away from a world where sentient machines produce ever better machines. It’s the stuff of sci-fi.

As we have tried to make ever more capable machines, failures are a normal part of evolution. Those cycles of attempts and failures will need to run into the billions and billions before human capabilities are fully matched. Yes, I know that’s an assertion, but it has taken humans more than a million years to get to have this discussion. That’s with our incredible brains.

What AI can do well is to enhance human capabilities[3]. Let’s say, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained, and not easily understood. This is where the benefits and speed of AI can add a lot. Aircraft systems using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to a flight crew. In time-critical events AI can help.

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.


[1] https://www.macmillandictionary.com/dictionary/british/marmite_2

[2] https://en.wikipedia.org/wiki/AgustaWestland_AW101

[3] https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

First Encounter

My first encounter with what could be classed as early Artificial Intelligence (AI) was a Dutch research project. It was around 2007. Let’s first note, a mathematical model isn’t pure AI, but it’s an example of a system that is trained on data.

It almost goes without saying that learning from accidents and incidents is a core part of the process to improve aviation safety. A key industry and regulatory goal is to understand what happened when things go wrong and to prevent a repetition of events.

Civil aviation is an extremely safe mode of transport. That said, because of the size of the global industry there are enough accidents and incidents worldwide to provide useful data on the historic safety record. Despite significant pre-COVID pandemic growth of civil aviation, the number of accidents is so low that further reduction in numbers is proving hard to win.

What if a system was developed that could look at all the historic aviation safety data and make a prediction as to what accidents might happen next?

The first challenge is the word “all” in that compiling such a comprehensive record of global aviation safety is a demanding task. It’s true that comprehensive databases do exist but even within these extremely valuable records there are errors, omissions, and summary information. 

There’s also the kickback that is often associated with record keeping. A system that demands detailed record keeping of even the most minor incident can be burdensome. Yes, such record keeping has admirable objectives, but the “red tape” wrapped around its objectives can have negative effects.

Looking at past events has only one aim: to act now to prevent aviation accidents in the future. Once a significant comprehensive database exists then analysis can provide simple indicators that can provide clues as to what might happen next. Even basic mathematics can give us a trend line drawn through a set of key data points[1]. It’s effective but crude.
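That crude trend line is just ordinary least squares. A minimal sketch in Python, with invented yearly accident counts standing in for real data:

```python
# Illustrative (made-up) yearly accident counts -- not real statistics.
years = [2010, 2011, 2012, 2013, 2014, 2015]
accidents = [120, 115, 108, 104, 99, 95]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(accidents) / n

# Ordinary least-squares fit: accidents ~ slope * year + intercept.
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, accidents))
den = sum((x - mean_x) ** 2 for x in years)
slope = num / den
intercept = mean_y - slope * mean_x

# The crude prediction: extend the line one year ahead.
forecast_2016 = slope * 2016 + intercept

print(f"trend: {slope:.2f} accidents/year")
print(f"2016 forecast: {forecast_2016:.0f}")
```

It gives a direction of travel and nothing more: no causes, no mechanisms, and no warning when the underlying conditions change.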

What if a prediction could take on board all the global aviation safety data available, with the knowledge of how civil aviation works, and mix it in such a way as to provide reliable predictions? This is prognostics. It’s a bit like the Delphi oracle[2]. The aviation “oracle” could be consulted about the state of affairs in respect of aviation safety. Dream? Maybe not.

The acronym CAT normally refers to large commercial air transport (CAT) aeroplanes. What this article is about is a Causal model for Air Transport Safety (CATS)[3]. This research project could be called an early use of “Big Data” in aviation safety work. However, as I understand it, the original aim was to make prognostics a reality.

Using Bayesian network-based causal models it was theorised that a map of aviation safety could be produced. Then it could be possible to predict the direction of travel for the future.

This type of quantification has a lot of merit. It has weaknesses, in that the Human Factor (HF) often defies prediction. However, as AI advances, maybe causal modelling ought to be revisited. New off-the-shelf tools could be used to look again at the craft of prediction.
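To give a flavour of the approach, a Bayesian-network calculation chains conditional probabilities along a causal path and marginalises over the parent variables. Here’s a toy two-link chain in Python; the node names and probabilities are invented for illustration and are not taken from the CATS project:

```python
# Toy causal chain: bad weather -> unstable approach -> incident.
# All probabilities are invented to illustrate the mechanics only.

p_bad_weather = 0.2                       # P(bad weather)
p_unstable = {True: 0.15, False: 0.05}    # P(unstable approach | weather)
p_incident = {True: 0.01, False: 0.001}   # P(incident | unstable approach)

# Marginalise over the parent nodes to get the overall P(incident).
p_total = 0.0
for weather in (True, False):
    pw = p_bad_weather if weather else 1 - p_bad_weather
    for unstable in (True, False):
        pu = p_unstable[weather] if unstable else 1 - p_unstable[weather]
        p_total += pw * pu * p_incident[unstable]

print(f"P(incident) = {p_total:.5f}")
```

A real causal model for air transport safety has hundreds of such nodes, but the principle is the same: change a conditional probability upstream (say, weather forecasting improves) and the predicted incident rate downstream changes with it.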


[1] https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Air_safety_statistics_in_the_EU

[2] https://www.history.com/topics/ancient-greece/delphi

[3] https://open.overheid.nl/documenten/ronl-archief-d5cd2dc7-c53f-4105-83c8-c1785dcb98c0/pdf