Light touch folly

Light touch regulation. Now, there’s a senseless folly. It’s a green light to bad actors wherever they operate. It’s like building a medieval castle’s walls half as thick as planned to save money, in the belief that enemies are too stupid to work it out. Saying that the public good is far less important than the speed of development is unwise, to say the least.

The internet arrived in the UK in the late 1980s. Now, it seems strange to recount. Clunky Personal Computers (PCs) and basic e-mail were the height of sophistication as we moved from an office of typewriters and Tipp-Ex to the simple word processor[1]. Generations will marvel at the primitive nature of our former working lives. Getting scissors and cutting out paper text and pasting it into a better place in a draft document. Tippexing out errors and scribbling notes in the spaces between sentences. Yet, that’s what we did when first certifying many of the commercial airliners in regular use across the globe (Boeing 777, Airbus A320). Desktop computers took centre stage early in the 1990s, but administrations were amid a transition. Clickable icons hit screens in 1990. Gradually and progressively, new ways of working evolved.

Microsoft Windows 95 and the internet were heralded as the dawn of a new age. Not much thought was given to PCs being used for criminal or malicious purposes. No more thought than the use of a typewriter to commit crime. That doesn’t mean such considerations were ignored; it just means they were deemed of lower importance.

In 2023, every day there’s a new warning about scammers. Even fake warnings about scammers coming from scammers with the aim of scamming. Identifying who’s real and who’s fake is becoming ever more difficult. Being asked to update subscriptions that were never opened in the first place is a good indicator that there’s some dirty work afoot. Notices that accounts are about to be blocked, referring to accounts that don’t exist, is another.

In 30 years the internet has taken on the good and bad of the greater world. It hasn’t become a safer place. In fact, it’s become a bit like the Wild West[2].

Our digital space continues to evolve but has nowhere near reached its potential. It’s like those great western plains where wagons headed out looking for rich new lands. In the towns along the way the shop fronts are gleaming and inviting, but if you look around the back there’s a desperate attempt to keep bad actors at bay.

Only a fraction of the suspicious emails, texts, and messages get reported. People unconsciously pile up a digital legacy and rarely clean out the trash that accumulates. A rich messiness of personal information can lie hidden to the eye, just below the digital surface.

When politicians and technocrats talk of “light touch regulation” it’s as if none of this matters. In the race to be first in technology, public protection is given a light touch. This can’t be a good way to go.

[1] Still available – Tipp-Ex Rapid, Correction Fluid Bottle, High Quality Correction Fluid, Excellent Coverage, 20ml, Pack of 3, white.


Pointless Brexit

Democracy’s malleable frame. I don’t recall the people of the UK being given a referendum on joining a trade bloc in the Pacific. Nice though it is to have good relations with trading nations across the globe, it seems strange that the other side of the world is seen as good and next door is seen as bad. It’s like a person looking through the wrong end of a telescope.

Back on 23rd June 2016, voters in the UK were asked if Britain should leave the EU. No one really knew what “leave” meant, as all sorts of what now turn out to be blatant lies were told to the public. The words “customs union” were not spoken in 2016. If they were, it was in a tone of: don’t worry about all that, we hold all the cards, nothing will change.

Today, UK sectors from fishing to aviation, farming to science report being bogged down in ever more red tape, struggling to recruit staff, and racking up losses. Sure, Brexit is not the only trouble in the world, but it was avoidable unlike the pandemic and Putin’s war.

We (the UK) became a country that imposed sanctions on itself. A unique situation in Europe. If some people are surprised that we have significant problems, they really ought to examine what happened in 2016. It’s a textbook example of how not to do things. The events will probably be taught in schools and universities for generations to come as a case of national self-harm.

Democracy is invaluable, but when a government dilutes a massive question into a simple YES or NO, it dilutes democracy too. It’s the territory that demagogues thrive in. Mainly because this approach encourages the polarisation that then drives ever more outlandish claims about opponents. The truth gets buried under a hail of campaign propaganda, prejudice, and misinformation.

What Brexit has stimulated (a growth sector, I might say) is the blame game. Now, when things go wrong, UK politicians can always blame those on the other side of the Channel. Standing on the cliffs at Dover, it’s easy to survey the mess and point a finger out to sea.

If some people’s motivation for voting for Brexit was to control borders and stop immigration, the failures are so obvious that they hardly need to be pointed out. Yet, politicians persist with the myth that a solution is just around the corner if only UK laws were made ever more draconian. A heavier hand, criminalisation, and the blame game are not solutions. These acts will merely continue the round of calamities and failures.

Brexit has unlocked a grand scale of idiocy. The solution is to consign this dogma to the past.

Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs up reactions that are both passionate and pragmatic. Maybe it’s a Marmite issue[1]. Mention of the subject triggers an instant judgement.

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early on in my career, was to lead the integration of a cockpit display system into a large new helicopter[2]. New, at the time. The design team I was part of comprised people with two different professional backgrounds. One group had an engineering background, like me, and the other had qualifications associated with psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved away from purely mechanical interactions with aircraft, the balance of flight control came to be shared between pilot and aircraft systems. There’s no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There’s a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that’s a brave statement. We are generations away from a world where sentient machines produce ever better machines. It’s the stuff of sci-fi.

As we have tried to make ever more capable machines, failures have been a normal part of evolution. Those cycles of attempts and failures will need to run into the billions and billions before human capabilities are fully matched. Yes, I know that’s an assertion, but it has taken humans more than a million years to get to have this discussion. That’s with our incredible brains.

What AI can do well is enhance human capabilities[3]. Let’s say that, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained for, and not easily understood. This is where the benefits and speed of AI can add a lot. Aircraft systems using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to a flight crew. In time-critical events AI can help.

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.




First Encounter

My first encounter with what could be classed as early Artificial Intelligence (AI) was a Dutch research project. It was around 2007. Let’s first note, a mathematical model isn’t pure AI, but it’s an example of a system that is trained on data.

It almost goes without saying that learning from accidents and incidents is a core part of the process to improve aviation safety. A key industry and regulatory goal is to understand what happened when things go wrong and to prevent a repetition of events.

Civil aviation is an extremely safe mode of transport. That said, because of the size of the global industry there are enough accidents and incidents worldwide to provide useful data on the historic safety record. Despite significant pre-COVID pandemic growth of civil aviation, the number of accidents is so low that further reduction in numbers is proving hard to win.

What if a system was developed that could look at all the historic aviation safety data and make a prediction as to what accidents might happen next?

The first challenge is the word “all” in that compiling such a comprehensive record of global aviation safety is a demanding task. It’s true that comprehensive databases do exist but even within these extremely valuable records there are errors, omissions, and summary information. 

There’s also the push-back that is often associated with record keeping. A system that demands detailed record keeping of even the most minor incident can be burdensome. Yes, such record keeping has admirable objectives, but the “red tape” wrapped around those objectives can have negative effects.

Looking at past events has only one aim: to do things now that prevent aviation accidents in the future. Once a significant, comprehensive database exists, analysis can provide simple indicators that offer clues as to what might happen next. Even basic mathematics can give us a trend line drawn through a set of key data points[1]. It’s effective but crude.
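To make that concrete, here is a minimal sketch of fitting a least-squares trend line through yearly accident counts. The figures are invented for illustration, not real safety data.

```python
# A least-squares trend line through hypothetical yearly accident counts.

def trend_line(xs, ys):
    """Return slope m and intercept c of the least-squares line y = m*x + c."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Classic closed-form solution for simple linear regression.
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    c = mean_y - m * mean_x
    return m, c

years = [2015, 2016, 2017, 2018, 2019]   # hypothetical
accidents = [92, 88, 85, 86, 80]         # hypothetical counts
m, c = trend_line(years, accidents)
print(f"trend: {m:.2f} accidents per year")  # a negative slope = improving record
```

It really is crude: a straight line through five points says nothing about why the numbers move, which is exactly the gap a causal model tries to fill.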

What if a prediction could take on board all the global aviation safety data available, together with knowledge of how civil aviation works, and mix it in such a way as to provide reliable predictions? This is prognostics. It’s a bit like the Delphi oracle[2]. The aviation “oracle” could be consulted about the state of affairs in respect of aviation safety. A dream? Maybe not.

The acronym CAT normally refers to large commercial air transport (CAT) aeroplanes. What this article is about is a Causal model for Air Transport Safety (CATS)[3]. This research project could be called an early use of “Big Data” in aviation safety work. However, as I understand it, the original aim was to make prognostics a reality.

Using Bayesian network-based causal models, it was theorised that a map of aviation safety could be produced. Then it could be possible to predict the direction of travel for the future.
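As a toy illustration of the idea (this is not the CATS model itself, and every probability here is invented), a Bayesian network chains conditional probabilities along a causal path, and a marginal risk figure falls out by enumerating the combinations:

```python
# Toy causal chain: bad weather -> unstable approach -> runway excursion.
# All probabilities are invented for the example.

P_BAD_WEATHER = 0.2
P_UNSTABLE = {True: 0.15, False: 0.03}    # P(unstable approach | bad weather?)
P_EXCURSION = {True: 0.02, False: 0.001}  # P(runway excursion | unstable?)

def p_excursion():
    """Marginal probability of a runway excursion, by full enumeration."""
    total = 0.0
    for weather in (True, False):
        pw = P_BAD_WEATHER if weather else 1 - P_BAD_WEATHER
        for unstable in (True, False):
            pu = P_UNSTABLE[weather] if unstable else 1 - P_UNSTABLE[weather]
            total += pw * pu * P_EXCURSION[unstable]
    return total

print(f"P(runway excursion) = {p_excursion():.5f}")
```

A real model like CATS has hundreds of such nodes and conditional tables derived from accident data and expert judgement; the mechanics of chaining conditionals, though, are exactly as above.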

This type of quantification has a lot of merit. It has weaknesses, in that the Human Factor (HF) often defies prediction. However, as AI advances, maybe causal modelling ought to be revisited. New off-the-shelf tools could be used to look again at the craft of prediction.





An open letter has been published[1]. Not for the first time. It asks those working on Artificial Intelligence (AI) to take a deep breath and pause their work. It’s signed by AI experts and interested parties, like Elon Musk. This is a reaction to the competitive race to launch ever more powerful AI[2]. For each new technology launch, it’s taking fewer and fewer years to reach a billion users.

If the subject was genetic manipulation, the case for a cautious step-by-step approach would be easily understood. However, the digital world, and its impact on our society’s organisation, isn’t viewed as being as important as genetics. Genetically Modified (GM) crops got people excited and anxious. An artificially modified social and political landscape doesn’t seem to concern people quite so much. It may be that the basis for this ambivalence is a false view that we are more in control of one than the other. It’s more likely this ambivalence stems from a lack of knowledge.

One response to the open letter[3] I saw was this: “A lot of fearmongering luddites here! People were making similar comments about the pocket calculator at one time!” This is to totally misunderstand what is going on with the rapid advance of AI. I think the impact on society of the proliferation of AI will be greater than that of the invention of the internet. It will change the way we work, rest and play. It will do it at remarkable speed. We face an unprecedented challenge.

I’m not for one moment advocating a regulatory regime that is driven by societal puritans. The open letter is not proposing a ban. What’s needed is a regulatory regime that can moderate aggressive advances so that knowledge can be acquired about the impacts of AI.

Yesterday, a government policy was launched in the UK. The problem with saying that there will be no new regulators, and that regulators will need to act within existing powers, is obvious. It’s a diversion of resources away from existing priorities to address challenging new priorities.
That, in and of itself, is not an original regulatory dilemma. It could be said that’s why we have sewage pouring into rivers up and down the UK.

In an interview, Conservative Minister Paul Scully MP mentioned sandboxing as a means of complying with policy. This is to create a “safe space” to try out a new AI system before launching it on the world. It’s a method of testing and trials that is useful for gaining an understanding of conventional complex systems.

The reason this is not easily workable for AI is that it’s not possible to build enough confidence that AI will be safe, secure, and perform its intended function without running it live. For useful AI systems, even the slightest change in the start-up conditions or training can produce drastically different outcomes. A live AI system can be like shifting sand. It will build up a structure to solve problems, and do it well, but the characteristics of its internal workings will vary significantly from one similar system to another. Thus, the AI system’s workings, as they are run through a sandbox exercise, may be unlike the same system’s workings running live.

Which leads to the question: what confidence can a regulator, with an approval authority, have in a sandbox version of an AI system? Pause. Count to ten and work out what impacts we must avoid. And how to do it.
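The “shifting sand” point can be sketched with a toy model (everything here is invented for illustration, and a real AI system is vastly more complex): the same training procedure, started from two different random initialisations, ends with different internal weights even when the outward behaviour looks much the same.

```python
# Train the same tiny logistic model twice, from two different random
# starting points, on the same data. Outward behaviour converges; the
# internal weights do not match.
import math
import random

def train(seed, epochs=2000, lr=0.5):
    """Tiny logistic regression on an OR-style dataset, seeded init."""
    rng = random.Random(seed)
    w = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
    b = rng.uniform(-5, 5)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y                      # gradient of log-loss
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

(w_a, b_a), (w_b, b_b) = train(seed=1), train(seed=2)
print("seed 1 parameters:", [round(v, 3) for v in w_a + [b_a]])
print("seed 2 parameters:", [round(v, 3) for v in w_b + [b_b]])
```

Both runs classify the data correctly, yet the parameters differ; for large non-convex networks the internal divergence between two “identical” training runs is far greater, which is the nub of the sandbox problem.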

Policy & AI

Today, the UK Government published an approach to Artificial Intelligence (AI)[1]. It’s in the form of a white paper. That’s a policy document created by the Government that sets out its proposals for future legislation.

This is a big step. Artificial Intelligence (AI) attracts both optimism and pessimism. Utopia and dystopia. There are a lot more people who sit in these opposing camps than there are who sit in the middle. It’s big. Unlike any technology that has been introduced to the whole populace.

On Friday last, I caught the film I, Robot (2004)[2] showing early evening on Film 4. It’s difficult to believe this science fiction is nearly 20 years old, and that the Isaac Asimov stories on which it’s based are from the 1950s. AI is a fertile space for the imagination to range over.

Fictional speculation about AI has veered towards the dystopian end of the scale. Although that’s not the whole story by far. One example of good AI is the sentient android in the Star Trek universe. The android “Data”, based on the USS Enterprise, strives to help humanity and to be more like us. His attempts to understand human emotions are often significant plot points. He’s a useful counterpoint to evil alien intelligent machines that predictably aim to destroy us all.

Where fiction helps is to give an airing to lots of potential scenarios for the future. That’s not trivial. Policy on this rapidly advancing subject should not be narrowly based or dogmatic.

Where there isn’t great debate is over the high-level objectives that society should endeavour to achieve. We want technology to do no harm. We want technology to be trustworthy. We want technology to be understandable.

Yet, we know from experience, that meeting these objectives is much harder than asserting them. Politicians love to assert. In the practical world, it’s public regulators who will have to wrestle with the ambitions of industry, unforeseen outcomes, and negative public reactions.

Repeating the words “world leading” is no substitute for resourcing regulators to beef up their capabilities when faced with rapid change. Vague and superficial speeches are fine in context. After all, there’s a job to be done maintaining public confidence in this revolutionary technology.

What’s evident is that we should not delude ourselves. This technical transformation is unlike any we have so far encountered. Its radical nature and speed mean that even when Government and industry work together they are still going to be behind the curve.

As a fictional speculation an intelligent android who serves as a senior officer aboard a star ship is old school. Now, I wonder what we would make of an intelligent android standing for election and becoming a Member of Parliament?

[1] The UK’s AI Regulation white paper will be published on Wednesday, 29 March 2023. Organisations and individuals involved in the AI sector will be encouraged to provide feedback on the white paper through a consultation which launches today and will run until Tuesday, 21 June 2023.



For 99p in a well-known charity book shop, I picked up a tidy little paperback book. It’s wonderfully illustrated, mixing humour with one or two earnest thoughts. Originally, it would have been about 3 shillings[1] (15 new pence) to buy. So, I may have paid over the odds.

Was C. Northcote Parkinson[2] right? Certainly, when I listen to the epic tale of HS2[3], it does get me wondering if Parkinson’s Law works as well in the 2020s as it did in the late 1950s. Progress is slow, as work expands. The more there is to do, the more there is to do.

The UK’s number one railway project, High Speed Two (HS2), is a massive project. Its image of yellow-jacketed workers stomping across chewed-up fields is a long way from the reality. In the back rooms and offices are thousands of planners, managers, and administrators toiling intensely. Politicians posture over reams of reports and change their minds at every juncture. There’s a hitch every week.

Given my experiences, I should be able to make some judgements about Parkinson’s Law. That is to say: work expands so as to fill the time available for its completion. It’s generally associated with Government administration and the operation of a civil service. My observation is that large-scale industry is just as guilty of this characteristic.

It used to be said that a large aircraft could not be certified until the pile of paper needed to do so weighed as much as the finished product. This tongue-in-cheek saying stems from the frustration that builds up when progress is slower than people would like it to be. What a “pile of paper” means in the digital world is more difficult to ascertain, but it’s a lot of stuff.

Whatever the merit of Parkinson’s Law, the arguments made for it have been undermined as employment practices have changed dramatically since the 1950s. Internal structures of bureaucratic and deeply hierarchical organisation are no longer the fashion. The phenomenon of Buggins’ turn[4] still exists but is in abeyance. Much of industry may have shaken it off, but the political world still clings on and offers jobs on seniority rather than merit. Hierarchical organisations that feed on a certainty of their continued existence remain plentiful, but they are now subject to more disruption.

Parkinson does mock the large organisations of his time. Some of his anecdotes resonate perfectly with the world of the 2020s. These are observations of human behaviour.

One that rings a bell with me is the description of a board meeting where agenda items are methodically addressed in order. Let’s say the subject of item 9 on an agenda is a major investment expenditure, and the next item, item 10, addresses staff car parking spaces. No prizes for guessing which one gets the most discussion time. When faced with complex financial arguments and detailed pages of figures, there’s a tendency to defer to those who know about that sort of stuff. When faced with a subject that everyone understands, and that impacts everyone in an obvious way, the temptation to engage in discussion about the latter is overwhelming.

Let’s conclude that progress that doesn’t take account of the human factor is going to hit the rails or maybe worse.


[2] Parkinson’s Law or The Pursuit of Progress, John Murray Paperbacks, 1958.



Good enough

It’s not a universal rule. What is? There are a million and one ways that both good and bad things can happen in life. A million is way under any genuine calculation. Slight changes in decisions that are made can head us off in a completely different direction. So much fiction is based on this reality.

Yes, I have watched “Everything Everywhere All at Once”[1]. I’m in two minds about my reaction. There’s no doubt that it has an original take on the theory of multiple universes and how they might interact. It surprised me just how much comedy formed the core of the film. There are moments when the pace of the story left me wondering where on earth it was going. Overall, it is an enjoyable movie and it’s great to see such originality and imagination.

This strange notion of multiple universes, numbered beyond count, has an appeal, but it’s more than a headful. What I mean is that trying to imagine what it looks like, if such a thing is possible, is almost hopeless. What I liked about the movie is that small differences are more probable and large differences are far less probable. So, to get to the worlds that are radically different from where you are, it’s necessary to do something extremely improbable.

Anyway, that’s not what I’m writing about this morning. I’ve just been reading a bit about Sir Robert Alexander Watson Watt. The man credited with giving us radar technology.

Perfect is the enemy of good is a dictum that has several attributions. It keeps coming up. Some people celebrate those who strive for perfection. However, in human affairs, perfection is an extremely improbable outcome in most situations. There’s a lot of talent and perspiration needed to jump from average to perfect in any walk of life.

What the dictum above captures is that throwing massive amounts of effort at a problem can prevent a good outcome. Striving for perfection, faced with our human condition, can be a negative.

That fits well with me. My experience of research, design and development suggested the value of incremental improvement and not waiting for perfect answers to arise from ever more work. It’s the problem with research funding. Every paper calls for more research to be done.

In aviation safety work the Pareto principle is invaluable. It can be explained by a ghastly Americanism. Namely: let’s address the “low hanging fruit” first. In other words, let’s make the easiest improvements, the ones that produce the biggest differences, first.
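As a small sketch of the idea (the incident categories and counts below are invented), the Pareto principle amounts to ranking contributors by size and keeping the few that cover most of the total:

```python
# Find the handful of categories that account for ~80% of occurrences.

def pareto_cut(counts, threshold=0.8):
    """Return categories that together cover `threshold` of the total."""
    total = sum(counts.values())
    chosen, covered = [], 0
    for name, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        chosen.append(name)
        covered += n
        if covered / total >= threshold:
            break
    return chosen

# Hypothetical incident tallies, purely for illustration.
incidents = {"unstable approach": 40, "runway incursion": 25,
             "bird strike": 15, "fuel planning": 12, "other": 8}
print(pareto_cut(incidents))
# -> ['unstable approach', 'runway incursion', 'bird strike']
```

The three largest categories cover 80% of the invented total, which is the “low hanging fruit” worth tackling first.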

I’m right on board with Robert Watson-Watt and his “cult of the imperfect”. He’s quoted as saying: “Give them the third best to go on with; the second best comes too late, the best never comes”. It’s to say: do enough of what works now, without agonising over all the other possible better ways. Don’t procrastinate (too much).


Radio on the hill

We take radio for granted. I’m listening to it, now. That magic of information transferred through the “ether[1]” at the speed of light and without wires. This mystery was unravelled first in the 19th century. Experimentation and mathematics provided insights into electromagnetics.

The practical applications of radio waves were soon recognised. The possibility of fast information transfer between A and B had implications for communications and the battlefield.

It’s unfortunate to say that warfare often causes science to advance rapidly. The urgency to understand more is driven by strong needs. That phrase “needs must” comes to mind. We experienced this during the COVID pandemic. Science accelerated to meet the challenge.

It wasn’t until after he failed as an artist that Samuel Morse transformed communications by inventing the telegraph, with his dots and dashes. There’s a telegraph gallery with reproductions of Morse’s early equipment at the Locust Grove Estate[2] in Poughkeepsie. I’d recommend it.

The electromagnetic telegraph used wires to connect A and B. Clearly, that’s not useful if the aim is to connect an aircraft with the ground.

The imperative to make air-ground communication possible came from the first world war. Aviation’s role in warfare came to the fore. Not just in surveillance of the enemy but offensive actions too. Experimentation with airborne radio involved heavy batteries and early spark transmitters. Making such crude equipment usable was an immense challenge. 

Why am I writing about this subject? This week, on a whim, I visited the museum at Biggin Hill. The Biggin Hill Museum[3] tells the story of the pivotal role played by the fighter station in the second world war. The lesser-known story is the origins of the station.

It’s one of Britain’s oldest aerodromes and sits high up on the hills south of London. Biggin Hill is one of the highest points in that area, rising to over 210 metres (690 ft) above sea level. 

Its transformation from agricultural fields to a research station (south camp) took place in 1916 and 1917. Its purpose was to explore the scientific and technical innovations of that time. Wireless in particular. 141 Squadron of the Royal Flying Corps (RFC) was based at Biggin Hill and equipped with Bristol Fighters. The RFC was the first to make use of wireless telegraphy to assist with artillery targeting.

These were the years before the Royal Air Force (RAF) was formed.

100 years later, in early 2019, the Biggin Hill Museum opened its doors to the public. It’s a small museum but well worth a visit. I found the stories of the early development of airborne radio communications fascinating. So much we take for granted had to be invented, tested, and developed from the most elemental components.

POST 1: Now, I wish I’d been able to attend this lecture – Isle of Wight Branch: The Development of Airborne Wireless for the R.F.C.

POST 2: The bigger story




Alexander Boris de Pfeffel Johnson

There’s a good argument for boring politics. Yes, it’s reasonable to get aerated about big choices and fundamental differences in belief. However, a lot of politics is implementing policy and taking corrective action when something goes wrong. For the bigger part of practical politics, the qualities of attention to detail and diplomacy are of paramount importance. One thing we know for certain is that we got the exact opposite from former British Prime Minister (PM) Boris Johnson[1]. Gesticulation and flowery language took the place of thoughtfulness, care, and compassion.

Johnson denies lying to the UK Parliament. He once revelled in his performances at the dispatch box in the House of Commons (HoC). His period as UK PM was turbulent and full to the brim with bullish rhetoric. There’s no doubt that there’s an audience who laps up those political theatrics.

In the promotion world, adverts are supposed to be “legal, decent, honest and truthful.” In the political world, it would be asking a lot for all four of those to be observed all the time.

One place where there’s an extremely high expectation that a PM will be honest and truthful is while they are standing at the dispatch box[2] in the HoC. Now, that doesn’t preclude them from failing to say all there is to say about a given subject, but what they do say should be correct. Better said: it must be correct. In a lot of ways this is one of the primary responsibilities of a UK PM.

A PM, or Government Minister found lying to Parliament is committing a significant offence and carries the likelihood of suspension. It’s not a trivial matter, neither should it be.

In public, as a campaigning conservative politician there are lots of cases where Boris Johnson has been casual with the truth. Britain’s exit from the European Union (EU) was driven by a cacophony of factual falsifications and gross distortions of the truth. Boris and Brexit are synonymous.

A HoC committee will decide on the facts surrounding the downfall of former British PM Boris Johnson. His peers, as members of a privileges committee, will make a statement on his behaviour in the coming weeks. With all the evidence now in the public domain, it seems probable that the committee will find that Johnson was at least reckless, if not that he intentionally lied to the HoC chamber, to fellow Members of Parliament, and to the country.

Although it would be unwise to discount a Johnson political comeback, one day there may be a chance that his style of politics will be shown to be as damaging as we know it to be. This should be a turning point where accountability wins out over bluster and fibs. Let’s hope it is.

[1] Alexander Boris de Pfeffel Johnson is the politician, writer and journalist who was Prime Minister of the UK and Leader of the Conservative Party from 2019 to 2022.