Working hard for the money

What goes wrong with research spending? It’s a good question to ask. In some ways research spending is like advertising spending: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”[1] Globally, billions are spent on advertising, so you might say it must be working. In fact, far more is spent on advertising than is ever available for research in the aviation and aerospace world.

Research spending is a precious asset because it is finite. Even so, a great deal of it is lost on activities that deliver little or no benefit. It’s true that governments, institutions and industry don’t often put up funds for vague and imprecise aspirations or outlandish predictions, but money nevertheless goes down a sinkhole on far too many occasions.

A reluctance to take tough decisions, or at the other extreme of the spectrum a relish in disruption, plagues research funding decision making. Bad projects can live long lives and good projects get shut down before their time. My observation is that the following cases crop up all too often across the world.

Continuing to service infrastructure that cost a great deal to set up. It’s the classic sunk-cost problem: having spent large sums of money on something, the desperation to see a benefit encourages more spending. Nobody likes to admit defeat, or that their original predictions were way off the mark.

Circles of virtue are difficult to address. For example, everyone wants to see a more efficient and sustainable use of valuable airspace, so critics of spending towards that objective are not heard, even if substantial spending is misdirected or hopelessly optimistic.

Glamorous and sexy subjects, often in the public limelight, get a leg-up when it comes to the evaluation of potential research projects. Politicians love press photographs that associate them with something that looks like a solution in the public mind. Academics are no different in that respect.

Behold unto the gurus! There are conferences and symposia where ideas are hammered home by persuasive speakers and charismatic thinkers. Among these forums there are innovative ideas, but also ideas that get more consideration than they warrant.

Narrowly focused recommendations can distort funding decision making. With the best of intent, an investigation or study group might highlight a deficiency that needs work, but one that sits in a distinct niche of interest. It can be a push in the direction opposite to a Pareto analysis[2]: effort flows to the niche rather than to the vital few problems that account for most of the harm.

Highlighting these points is easier than fixing the underlying problems. Still, it’s a good start to be aware of them before pen meets paper and a contract is signed.


[1] Statement on advertising, credited to both John Wanamaker (1838–1922) and Lord Leverhulme (1851–1925).

[2] https://asq.org/quality-resources/pareto

Coffee

Earwigging. Don’t be too proud. Everyone does it, mostly by accident. Sit down in a coffee shop and a conversation wafts in your direction. It’s not as if we can close our ears and switch off the sound. A group chats about day-to-day trivia without any care for who’s listening. Students in this town broadcast their thoughts about exams and summer jobs as loudly as a foghorn echoing across the sea.

Conversations overheard can invite a stranger to join in. The British preoccupation with the weather is an excellent ice breaker. Not so much “turned out nice again”[1] as “when will it ever stop raining?”

What’s the point of this scribbling? There I was sitting quietly, minding my own business in the corner of a local coffee shop. Three friends turned up and sat next to me. Not that I knew them, but they were very amicable folk. There was plenty of space, but I must have been sitting in their favourite spot: it was light and had a good view of the comings and goings in the place. There was no doubt that all three friends were more than a decade older than me. That puts them somewhere in their mid to late 70s.

I wasn’t secretly listening. I didn’t have much choice but to overhear their pleasantries and general chit-chat. The flow of the conversation got me thinking. It wasn’t as if great pearls of wisdom were being exchanged between the three. It was more the context and points of reference that struck me.

Two had done national service[2]. That ended in 1963. There was mention of shooting and being shot at somewhere in the Arab world and a spell of time in Cyprus. Then followed reflections on waitress service in coffee shops and the famous people they had once met. Interspersed with medical complaints and both the good and bad of doctors encountered over the years.

What a huge gulf there is between the experiences of those whose youth was in the 1950s and 60s in contrast with people, like me, whose youth straddled the 1960s and 70s. The massive changes this country underwent in just over a decade are underestimated. I suppose it’s true to say that about any decade. What’s perhaps most striking is this country’s experience as it transformed over those years. Britain’s empire faded into the pages of history. Technology raced ahead relentlessly. Personal freedom grew, but so did a sense of insecurity. Popular culture exploded.

As the cycle of life turns, so each generation reflects on where they have been, what they have seen and how it has affected them along the way. There are more than a few theories as to how this changes our society. Some say – not at all. Others say – profoundly. For me, I think the next decade will see a major political shift. What’s intriguing is the question – will the lessons we have learned amount to anything or not?


[1] http://www.screenonline.org.uk/film/id/1019246/index.html

[2] https://www.nam.ac.uk/explore/what-was-national-service

Light touch folly

Light touch regulation. Now, there’s a senseless folly. It’s a green light to bad actors wherever they operate. It’s like building a medieval castle’s walls half as thick as planned to save money, in the belief that enemies are too stupid to work it out. Saying that the public good is far less important than the speed of development is unwise, to say the least.

The INTERNET arrived in the UK in the late 1980s. Now, it seems strange to recount. Clunky Personal Computers (PCs) and basic e-mail were the height of sophistication as we moved from an office of typewriters and Tipp-Ex to the simple word processor[1]. Generations will marvel at the primitive nature of our former working lives: taking scissors, cutting out paper text and pasting it into a better place in a draft document; Tipp-Exing out errors and scribbling notes in the spaces between sentences. Yet that’s what we did when first certifying many of the commercial airliners in regular use across the globe (Boeing 777, Airbus A320). Desktop computers took centre stage early in the 1990s, but administrations were amid a transition. Clickable icons hit screens in 1990. Gradually and progressively, new ways of working evolved.

Microsoft Windows 95 and the INTERNET were heralded as the dawn of a new age. Not much thought was given to PCs being used for criminal or malicious purposes. No more thought than the use of a typewriter to commit crime. That doesn’t mean such considerations were ignored; it just means that they were deemed of lower importance.

In 2023, every day there’s a new warning about scammers. Even fake warnings about scammers, coming from scammers, with the aim of scamming. Identifying who’s real and who’s fake is becoming ever more difficult. Being asked to update subscriptions that were never opened in the first place is a good indicator that there’s some dirty work afoot. Notices that accounts are about to be blocked, referring to accounts that don’t exist, is another.

In 30 years the INTERNET has taken on the good and bad of the greater world. It hasn’t become a safer place. In fact, it’s become a bit like the Wild West[2].

Our digital space continues to evolve but has nowhere near reached its potential. It’s like those great western plains where wagons headed out looking for rich new lands. In the towns along the way the shop fronts are gleaming and inviting, but look around the back and there’s a desperate attempt to keep bad actors at bay.

Only a fraction of suspicious emails, texts, and messages get reported. People unconsciously pile up a digital legacy and rarely clean out the trash that accumulates. A rich messiness of personal information can lie hidden to the eye, just below the digital surface.

When politicians and technocrats talk of “light touch regulation” it’s as if none of this matters. In the race to be first in technology, public protection is given a light touch. This can’t be a good way to go.


[1] Still available – Tipp-Ex Rapid, Correction Fluid Bottle, High Quality Correction Fluid, Excellent Coverage, 20ml, Pack of 3, white.

[2] https://en.wikipedia.org/wiki/American_frontier

Pointless Brexit

Democracy’s malleable frame. I don’t recall the people of the UK being given a referendum on joining a trading bloc in the Pacific. Nice though it is to have good relations with trading nations across the globe, it seems strange that the other side of the world is seen as good and next door is seen as bad. It’s like looking through the wrong end of a telescope.

Back on 23rd June 2016, voters in the UK were asked if Britain should leave the EU. No one really knew what “leave” meant, as all sorts of what now turn out to be blatant lies were told to the public. The words “customs union” were not spoken in 2016. If they were, it was in a tone of – don’t worry about all that, we hold all the cards, nothing will change.

Today, UK sectors from fishing to aviation, farming to science report being bogged down in ever more red tape, struggling to recruit staff, and racking up losses. Sure, Brexit is not the only trouble in the world, but it was avoidable unlike the pandemic and Putin’s war.

We (the UK) became a country that imposed sanctions on itself. A unique situation in Europe. If some people are surprised we have significant problems, they really ought to examine what happened in 2016. It’s a textbook example of how not to do things. The events will probably be taught in schools and universities for generations to come as a case of national self-harm.

Democracy is invaluable, but when a government dilutes a massive question into a simple YES or NO, they dilute democracy too. It’s the territory that demagogues thrive in, mainly because this approach encourages the polarisation that then drives ever more outlandish claims about opponents. The truth gets buried under a hail of campaign propaganda, prejudice, and misinformation.

What Brexit has stimulated – a growth sector, I might say – is the blame game. Now, when things go wrong, UK politicians can always blame those across the other side of the Channel. Standing on the cliffs at Dover, it’s easy to survey the mess and point a finger out to sea.

If some people’s motivation for voting for Brexit was to control borders and stop immigration, the failures are so obvious that they hardly need to be pointed out. Yet politicians persist with the myth that a solution is just around the corner, if only UK laws were made ever more draconian. A heavier hand, criminalisation and the blame game are not solutions. These acts will merely continue the round of calamities and failures.

Brexit has unlocked a grand scale of idiocy. The solution is to consign this dogma to the past.

Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs up reactions that are both passionate and pragmatic. Maybe, it’s a Marmite issue[1]. Mention of the subject triggers an instant judgement.

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early in my career, was to lead the integration of a cockpit display system into a large new helicopter[2] – new at the time. The design team I was part of comprised people with two different professional backgrounds: one group had an engineering background, like me, and the other had qualifications in psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved on from purely mechanical interactions with aircraft, the balance of flight control came to be shared between pilot and aircraft systems. There’s no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There’s a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that’s a brave statement. We are generations away from a world where sentient machines produce ever better machines. It’s the stuff of sci-fi.

As we have tried to make ever more capable machines, failures are a normal part of evolution. Those cycles of attempt and failure will need to run into the billions before human capabilities are fully matched. Yes, I know that’s an assertion, but it has taken humans more than a million years to get to have this discussion. That’s with our incredible brains.

What AI can do well is enhance human capabilities[3]. Let’s say that, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained for, and not easily understood. This is where the benefits and speed of AI can add a lot. Aircraft systems using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to a flight crew. In time-critical events AI can help.

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.


[1] https://www.macmillandictionary.com/dictionary/british/marmite_2

[2] https://en.wikipedia.org/wiki/AgustaWestland_AW101

[3] https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

First Encounter

My first encounter with what could be classed as early Artificial Intelligence (AI) was a Dutch research project. It was around 2007. Let’s first note, a mathematical model isn’t pure AI, but it’s an example of a system that is trained on data.

It almost goes without saying that learning from accidents and incidents is a core part of the process to improve aviation safety. A key industry and regulatory goal is to understand what happened when things go wrong and to prevent a repetition of events.

Civil aviation is an extremely safe mode of transport. That said, because of the size of the global industry there are enough accidents and incidents worldwide to provide useful data on the historic safety record. Despite significant pre-COVID pandemic growth of civil aviation, the number of accidents is so low that further reduction is proving hard to win.

What if a system was developed that could look at all the historic aviation safety data and make a prediction as to what accidents might happen next?

The first challenge is the word “all” in that compiling such a comprehensive record of global aviation safety is a demanding task. It’s true that comprehensive databases do exist but even within these extremely valuable records there are errors, omissions, and summary information. 

There’s also the pushback that is often associated with record keeping. A system that demands detailed record keeping of even the most minor incident can be burdensome. Yes, such record keeping has admirable objectives, but the “red tape” wrapped around those objectives can have negative effects.

Looking at past events has only one aim: to do things now that prevent aviation accidents in the future. Once a significant, comprehensive database exists, analysis can provide simple indicators that give clues as to what might happen next. Even basic mathematics can give us a trend line drawn through a set of key data points[1]. It’s effective but crude.
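To show just how basic that basic mathematics is, here’s a sketch in Python. The annual accident counts are invented for the example; real figures would come from records like those cited below.

```python
# Invented annual accident counts, purely for illustration.
years = list(range(2010, 2019))
accidents = [32, 30, 31, 27, 26, 27, 24, 23, 22]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(accidents) / n

# Ordinary least-squares fit of a straight trend line y = slope*x + intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, accidents)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

print(f"Trend: {slope:+.2f} accidents per year")
```

A line like this points in a direction, but it says nothing about why the numbers move – which is exactly the gap prognostics tries to fill.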

What if a prediction could take on board all the global aviation safety data available, with the knowledge of how civil aviation works, and mix it in such a way as to provide reliable predictions? This is prognostics. It’s a bit like the Delphi oracle[2]. The aviation “oracle” could be consulted about the state of affairs in respect of aviation safety. Dream? – maybe not.

The acronym CAT normally refers to large commercial air transport (CAT) aeroplanes. What this article is about is a Causal model for Air Transport Safety (CATS)[3]. This research project could be called an early use of “Big Data” in aviation safety work. However, as I understand it, the original aim was to make prognostics a reality.

Using Bayesian network-based causal models, it was theorised that a map of aviation safety could be produced. Then it could be possible to predict the direction of travel for the future.
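The flavour of such a model can be sketched with a toy causal chain – not the actual CATS network, and with probabilities invented for the example – in which crew fatigue raises the chance of crew error, which in turn raises the chance of an accident:

```python
# Toy causal chain: fatigue -> error -> accident.
# All probabilities below are invented for illustration only.
p_fatigue = 0.10                         # P(fatigue)
p_error = {True: 0.020, False: 0.005}    # P(error | fatigue state)
p_accident = {True: 0.10, False: 0.001}  # P(accident | error state)

# Marginalise over the unobserved parent states: the basic
# Bayesian-network computation of a downstream probability.
p_err_total = (p_fatigue * p_error[True]
               + (1 - p_fatigue) * p_error[False])
p_acc_total = (p_err_total * p_accident[True]
               + (1 - p_err_total) * p_accident[False])

print(f"P(error) = {p_err_total:.4f}")
print(f"P(accident) = {p_acc_total:.6f}")
```

A full model chains together a great many such conditional probabilities; the difficulty lies in estimating them from the patchy historic record.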

This type of quantification has a lot of merit. It has weaknesses, in that the Human Factor (HF) often defies prediction. However, as AI advances, maybe causal modelling ought to be revisited. New off-the-shelf tools could be used to look again at the craft of prediction.


[1] https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Air_safety_statistics_in_the_EU

[2] https://www.history.com/topics/ancient-greece/delphi

[3] https://open.overheid.nl/documenten/ronl-archief-d5cd2dc7-c53f-4105-83c8-c1785dcb98c0/pdf

Pause

An open letter has been published[1]. Not for the first time. It asks those working on Artificial Intelligence (AI) to take a deep breath and pause their work. It’s signed by AI experts and interested parties, like Elon Musk. This is a reaction to the competitive race to launch ever more powerful AI[2]. For each new technology launch, it’s taking fewer and fewer years to reach a billion users.

If the subject were genetic manipulation, the case for a cautious step-by-step approach would be easily understood. However, the digital world, and its impact on our society’s organisation, isn’t viewed as being as important as genetics. Genetically Modified (GM) crops got people excited and anxious. An artificially modified social and political landscape doesn’t seem to concern people quite so much. It may be that the basis for this ambivalence is a false view that we are more in control of one than of the other. It’s more likely that it stems from a lack of knowledge.

One response to the open letter[3] I saw was this: “A lot of fearmongering luddites here! People were making similar comments about the pocket calculator at one time!” This is to totally misunderstand what is going on with the rapid advance of AI. I think the impact on society of the proliferation of AI will be greater than that of the invention of the internet. It will change the way we work, rest and play. It will do it at remarkable speed. We face an unprecedented challenge.

I’m not for one moment advocating a regulatory regime driven by societal puritans. The open letter is not proposing a ban. What’s needed is a regulatory regime that can moderate aggressive advances so that knowledge can be acquired about the impacts of AI.

Yesterday, a government policy was launched in the UK. The problem with saying that there will be no new regulators, and that regulators will need to act within existing powers, is obvious. It’s a diversion of resources away from existing priorities to address challenging new priorities. That, in and of itself, is not an original regulatory dilemma. It could be said that’s why we have sewage pouring into rivers up and down the UK.

In an interview, Conservative Minister Paul Scully MP mentioned sandboxing as a means of complying with policy. This is the creation of a “safe space” in which to try out a new AI system before launching it on the world. It’s a method of testing and trials that is useful for gaining an understanding of conventional complex systems. The reason it is not easily workable for AI is that it’s not possible to build enough confidence that AI will be safe, secure and perform its intended function without running it live.

For useful AI systems, even the slightest change in the start-up conditions or training can produce drastically different outcomes. A live AI system can be like shifting sand. It will build up a structure to solve problems, and do it well, but the characteristics of its internal workings will vary significantly from one similar system to another. Thus, an AI system’s workings, as they are run through a sandbox exercise, may be unlike the same system’s workings running live. Which leads to the question – what confidence can a regulator, with an approval authority, have in a sandbox version of an AI system?

Pause. Count to ten and work out what impacts we must avoid. And how to do it.

Policy & AI

Today, the UK Government published an approach to Artificial Intelligence (AI)[1]. It’s in the form of a white paper. That’s a policy document created by the Government that sets out its proposals for future legislation.

This is a big step. Artificial Intelligence (AI) attracts both optimism and pessimism. Utopia and dystopia. There are a lot more people who sit in these opposing camps than there are who sit in the middle. It’s big. Unlike any technology that has been introduced to the whole populace.

On Friday last, I caught the film I, Robot (2004)[2] showing early evening on Film 4. It’s difficult to believe this science fiction is nearly 20 years old, and that the Isaac Asimov stories on which it’s based are from the 1950s. AI is fertile ground for the imagination to range over.

Fictional speculation about AI has veered towards the dystopian end of the scale, although that’s not the whole story by far. One example of good AI is the sentient android in the Star Trek universe. The android “Data”, based aboard the USS Enterprise, strives to help humanity and to be more like us. His attempts to understand human emotions are often significant plot points. He’s a useful counterpoint to evil alien intelligent machines that predictably aim to destroy us all.

Where fiction helps is to give an airing to lots of potential scenarios for the future. That’s not trivial. Policy on this rapidly advancing subject should not be narrowly based or dogmatic.

Where there isn’t a great debate is over the high-level objectives that society should endeavour to achieve. We want technology to do no harm. We want technology to be trustworthy. We want technology to be understandable.

Yet, we know from experience, that meeting these objectives is much harder than asserting them. Politicians love to assert. In the practical world, it’s public regulators who will have to wrestle with the ambitions of industry, unforeseen outcomes, and negative public reactions.

Using the words “world leading” repeatedly is no substitute for resourcing regulators to beef up their capabilities when faced with rapid change. Vague and superficial speeches are fine in context. After all, there’s a job to be done maintaining public confidence in this revolutionary technology.

What’s evident is that we should not delude ourselves. This technical transformation is unlike any we have so far encountered. Its radical nature and speed mean that even when Government and industry work together, they are still going to be behind the curve.

As a fictional speculation an intelligent android who serves as a senior officer aboard a star ship is old school. Now, I wonder what we would make of an intelligent android standing for election and becoming a Member of Parliament?


[1] The UK’s AI Regulation white paper will be published on Wednesday, 29 March 2023. Organisations and individuals involved in the AI sector will be encouraged to provide feedback on the white paper through a consultation which launches today and will run until Tuesday, 21 June 2023.

[2] https://en.wikipedia.org/wiki/I,_Robot_(film)

Progress?

For 99p in a well-known charity book shop, I picked up a tidy little paperback book. It’s wonderfully illustrated, mixing humour with one or two earnest thoughts. Originally, it would have been about 3 shillings[1] (15 new pence) to buy. So, I may have paid over the odds.

Was C. Northcote Parkinson[2] right? Certainly, when I listen to the epic tale of HS2[3], it does get me wondering whether Parkinson’s Law works as well in the 2020s as it did in the late 1950s. Progress is slow, as work expands. The more there is to do, the more there is to do.

The UK’s number one railway project, High Speed Two (HS2), is a massive undertaking. Its image of yellow-jacketed workers stomping across chewed-up fields is a long way from the reality. In the back rooms and offices are thousands of planners, managers, and administrators toiling intensely. Politicians posture over reams of reports and change their minds at every juncture. There’s a hitch every week.

Given my experiences, I should be able to make some judgements about Parkinson’s Law. That is to say: work expands so as to fill the time available for its completion. It’s generally associated with Government administration and the operation of a civil service. My observation is that large-scale industry is just as guilty of this characteristic.

It used to be said that a large aircraft could not be certified until the pile of paper needed to do so weighed as much as the finished product. This tongue-in-cheek saying stems from the frustration that builds up when progress is slower than people would like it to be. What a “pile of paper” means in the digital world is more difficult to ascertain, but it’s a lot of stuff.

Whatever the merit of Parkinson’s Law, the arguments made for it have been undermined as employment practices have changed dramatically since the 1950s. Internal structures of bureaucratic and deeply hierarchical organisation are no longer the fashion. The phenomenon of Buggins’ turn[4] still exists but is in abeyance. Much of industry may have shaken it off, but the political world still clings on and offers jobs on seniority rather than merit. Hierarchical organisations that feed on a certainty of their continued existence remain plentiful, but they are now subject to more disruption.

Parkinson does mock the large organisations of his time. Some of his anecdotes resonate perfectly with the world of the 2020s. These are observations of human behaviour.

One that rings a bell with me is the description of a board meeting where agenda items are methodically addressed in order. Let’s say the subject of item 9 on an agenda is a major investment expenditure and the next item, item 10, addresses staff car parking spaces. No prizes for guessing which one gets the most discussion time. When faced with complex financial arguments and detailed pages of figures, there’s a tendency to defer to those who know about that sort of stuff. When faced with a subject that everyone understands and that impacts everyone in an obvious way, the temptation to engage in discussion about the latter is overwhelming.

Let’s conclude that progress that doesn’t take account of the human factor is going to hit the rails or maybe worse.


[1] https://www.royalmintmuseum.org.uk/journal/curators-corner/shilling/

[2] Parkinson’s Law or The Pursuit of Progress, John Murry Paperbacks, 1958.

[3] https://www.hs2.org.uk/

[4] https://english.stackexchange.com/questions/171256/who-was-buggins-of-buggins-turn

Good enough

It’s not a universal rule. What is? There are a million and one ways that both good and bad things can happen in life. A million is way under any genuine calculation. Slight changes in decisions that are made can head us off in a completely different direction. So much fiction is based on this reality.

Yes, I have watched “Everything Everywhere All at Once”[1]. I’m in two minds about my reaction. There’s no doubt that it has an original take on the theory of multiple universes and how they might interact. It surprised me just how much comedy formed the core of the film. There are moments when the pace of the story left me wondering where on earth it was going. Overall, it is an enjoyable movie and it’s great to see such originality and imagination.

This strange notion of multiple universes, numbered beyond count, has an appeal, but it’s more than a headful. What I mean is that trying to imagine what it looks like, if such a thing is possible, is almost hopeless. What I liked about the movie is that small differences are more probable and large differences are far less probable. So, to get to worlds that are radically different from where you are, it’s necessary to do something extremely improbable.

Anyway, that’s not what I’m writing about this morning. I’ve just been reading a bit about Sir Robert Alexander Watson Watt. The man credited with giving us radar technology.

Perfect is the enemy of good is a dictum that has several attributions. It keeps coming up. Some people celebrate those who strive for perfection. However, in human affairs, perfection is an extremely improbable outcome in most situations. There’s a lot of talent and perspiration needed to jump from average to perfect in any walk of life.

What the dictum shorthands is that throwing massive amounts of effort at a problem can prevent a good outcome. Striving for perfection, given our human condition, can be a negative.

That fits well with me. My experience of research, design and development suggested the value of incremental improvement and not waiting for perfect answers to arise from ever more work. It’s the problem with research funding. Every paper calls for more research to be done.

In aviation safety work the Pareto principle is invaluable. It can be explained by a ghastly Americanism: let’s address the “low hanging fruit” first. In other words, let’s make the easiest improvements, the ones that produce the biggest differences, first.
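A Pareto analysis of that kind can be sketched in a few lines of Python. The contributing-factor counts here are hypothetical, purely to show how the “vital few” categories get picked out:

```python
# Hypothetical contributing-factor counts from a set of incident reports.
factors = {
    "unstable approach": 48,
    "checklist deviation": 23,
    "fatigue": 11,
    "weather": 8,
    "maintenance error": 6,
    "other": 4,
}

total = sum(factors.values())
running = 0
vital_few = []
# Take factors in descending order until ~80% of occurrences are covered.
for name, count in sorted(factors.items(), key=lambda kv: -kv[1]):
    running += count
    vital_few.append(name)
    if running / total >= 0.8:
        break

print("Address first:", ", ".join(vital_few))
```

In this invented example, three of the six categories account for over 80% of occurrences – the low hanging fruit, in the ghastly phrase.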

I’m right on board with Robert Watson-Watt and his “cult of the imperfect”. He’s quoted as saying: “Give them the third best to go on with; the second best comes too late, the best never comes”. It’s to say: do enough of what works now, without agonising over all the other possible better ways. Don’t procrastinate (too much).


[1] https://www.imdb.com/title/tt6710474/