Protest

Any study about “change” will tell you that it’s not easy. Take a few of the big social transformations that have occurred over the last six decades. I can’t point to one that just happened without a campaign or fight. Social and political change comes when momentum has built up. Pressure is needed. Often that pressure comes in the form of protest and extensive public campaigning.

As ever faster digital connections become universal, it’s still possible to buy physical digital media. Charity shops have piles and piles of CDs and DVDs as people off-load the stuff that clutters their shelves. It’s remarkable that yesterday’s whizzy new thing has become a historic artefact so quickly[1]. In 40 years, the optical disc has risen and then faded into the background.

I picked up a little bit of social history in a Red Cross charity shop. It’s a series of three DVDs that captures a slice of the career of the well-known journalist and broadcaster Alan Whicker. Stretching over six decades of travelling around the globe, it’s a great watch. The series is called “Journey of a Lifetime” and was published in 2009[2]. Although there’s plenty that dates Whicker’s documentary style, there’s no doubt that his ability to quickly sum up big changes is a masterclass.

That straightforward diction and incongruous club jacket became a trademark. It gave him a neutral camouflage, so he could talk eye-to-eye with hippies, dictators, evangelists, social campaigners, film stars and dubious gurus. That’s what created so many revealing conversations, now time-stamped as emblematic of an era. I recommend viewing Whicker’s reflections on six decades of social history. It’s a great reminder of where we have been and of how difficult it is to learn the lessons of the past.

Back to my initial subject – change. It’s easy to say that it’s inevitable and unrelenting but its nature is less easy to discern. Change undulates. We go forwards then we go backwards in differing amounts.

I have a theory that our social progression can be plotted like an inclined wood saw. Yes, I know. It’s the engineer in me. Look at the shape of the saw’s teeth. They go forwards, and then quickly go backwards, but they always go backwards less than they go forwards. That’s how a saw’s teeth cut.
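Since it’s the engineer in me talking, the saw-tooth idea can be sketched in a few lines of Python. The stroke sizes are made-up numbers for illustration only, not measurements of anything.

```python
# A toy rendering of the saw-tooth theory of social progress: each cycle
# moves forwards, then partly back, but the backward stroke is always
# smaller than the forward one. The figures are illustrative only.

def net_progress(cycles, forward=3.0, backward=2.0):
    """Cumulative position after repeated forward/backward strokes."""
    position = 0.0
    for _ in range(cycles):
        position += forward   # the tooth's long rising edge
        position -= backward  # the shorter falling edge
    return position

print(net_progress(10))  # ten +3/-2 cycles leave us 10.0 units ahead
```

However much each cycle claws back, as long as the backward stroke is smaller, the trend line still climbs.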

This is one of my abstract reasons why the UK Government’s most recent laws to suppress public protest are as stupid as political debate can get. Resisting change is nothing new. After all, the word “conservative” has a simple commonplace meaning. When all else fails, the basic political instinct to push out laws that comfort supporters is built in. As a direction for a whole country to take, this way of working is foolish and naive.

Locking up climate change protestors is not going to fix climate change. Locking up protestors against sewage on beaches and in rivers is not going to fix greedy water companies. Locking up republican protestors is not going to fix the decline in public support for the monarchy.

Using the pretext that “this is what the public want” as a cover for these policies shows the vacuum that conservative political thinking is thrashing around in. Sadly, as I’ve said, reflection on the last six decades of conservative thinking shows regressive tendencies in abundance.


[1] https://www.bbc.co.uk/archive/optical-storage-technology/zv7bpg8

[2] https://en.wikipedia.org/wiki/Whicker%27s_World

Experts

The rate of increase in the power of artificial intelligence (AI) is matched by the rate of increase in the number of “experts” in the field. I’ve heard that said jokingly. Five minutes on Twitter and it’s immediately apparent that off-the-shelf opinions run from “what’s all the fuss about?” to “Armageddon is just around the corner”.

Being a bit of a stoic[1], I take the view that opinions are fine, but the question is: what’s the reality? That doesn’t mean ignoring honest speculation, but that speculation should have some foundation in what’s known to be true. There are plenty of emotive opinions that are wonderfully imaginative. The problem is that they don’t help us take the best steps forward when faced with monumental changes.

Today’s report is of Dr Geoffrey Hinton’s retirement from Google. Now, there’s a body of experience in working with AI. He warns that the technology is heading towards a state where it’s far more “intelligent” than humans. He has raised the issue of “bad actors” using AI to the detriment of us all. These seem to me valid concerns from an experienced practitioner.

For decades, the prospect of a hive mind has peppered science fiction stories with tales of catastrophe. With good reason, given that mind-to-mind interconnection is something that humans haven’t mastered. This is likely where both the highest risk and the greatest potential benefit lie. If machine learning can gain knowledge at phenomenal speeds from a vast diversity of sources, it becomes difficult to challenge. It’s not that AI will exhibit wisdom. It’s that its acquired information will give it the capability to develop, promote and sustain almost any opinion.

Let’s say the “bad actor” is a colourful politician of limited competence with a massive ego and ambition beyond reason. Sit alongside them an AI that can conjure up brilliant speeches and strategies for beating opponents, and that character becomes dangerous.

So, to talk about AI as the most important inflection point in generations is not hype. In that respect the rapid progress of AI is like the invention of dynamite[2]. It changed the world in both positive and negative ways. Around the world, countries have explosives laws and require licences to manufacture, distribute, store, use, and possess explosives or their ingredients.

So far, mention of the regulation of AI makes people in power shudder. Some lawmakers are bigging-up a “light-touch” approach. Others are hunched over a table trying to put together threads of a regulatory regime[3] that will accentuate the positive and eliminate the negative[4].


[1] https://dailystoic.com/what-is-stoicism-a-definition-3-stoic-exercises-to-get-you-started/

[2] https://en.wikipedia.org/wiki/Dynamite

[3] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[4] https://youtu.be/JS_QoRdRD7k

Light touch folly

Light touch regulation. Now, there’s a senseless folly. It’s a green light to bad actors wherever they operate. It’s like building a medieval castle’s walls half as thick as planned to save money, in the belief that enemies are too stupid to work it out. Saying that the public good is far less important than the speed of development is unwise to say the least.

The INTERNET arrived in the UK in the late 1980s. Now, it seems strange to recount. Clunky Personal Computers (PCs) and basic e-mail were the height of sophistication as we moved from an office of typewriters and Tipp-Ex to the simple word processor[1]. Generations will marvel at the primitive nature of our former working lives. Getting scissors and cutting out paper text and pasting it into a better place in a draft document. Tippexing out errors and scribbling notes in the spaces between sentences. Yet, that’s what we did when first certifying many of the commercial airliners in regular use across the globe (Boeing 777, Airbus A320). Desktop computers took centre stage early in the 1990s, but administrations were amid a transition. Clickable icons hit screens in 1990. Gradually and progressively, new ways of working evolved.

Microsoft Windows 95 and the INTERNET were heralded as the dawn of a new age. Not much thought was given to PCs being used for criminal or malicious purposes. No more thought than was given to the use of a typewriter to commit crime. That doesn’t mean such considerations were ignored; it just means they were deemed of lower importance.

In 2023, every day there’s a new warning about scammers. Even fake warnings about scammers, coming from scammers, with the aim of scamming. Identifying who’s real and who’s fake is becoming ever more difficult. Being asked to update subscriptions that were never opened in the first place is a good indicator that there’s some dirty work afoot. Notices that accounts are about to be blocked, referring to accounts that don’t exist, is another.

In 30 years the INTERNET has taken on the good and bad of the greater world. It hasn’t become a safer place. In fact, it’s become a bit like the Wild West[2].

Our digital space continues to evolve but has nowhere near reached its potential. It’s like those great western plains where waggons headed out looking for rich new lands. In any town along the way the shop fronts are gleaming and inviting, but if you look around the back there’s a desperate attempt to keep bad actors at bay.

Only a fraction of the suspicious emails, texts, and messages get reported. People unconsciously pile up a digital legacy and rarely clean out the trash that accumulates. A rich messiness of personal information can lie hidden to the eyes but just below the digital surface.

When politicians and technocrats talk of “light touch regulation” it’s as if none of this matters. In the race to be first in technology, public protection is given a light touch. This can’t be a good way to go.


[1] Still available – Tipp-Ex Rapid correction fluid, 20ml bottles, sold in packs of three.

[2] https://en.wikipedia.org/wiki/American_frontier

Pointless Brexit

Democracy’s malleable frame. I don’t recall the people of the UK being given a referendum on joining a trading bloc in the Pacific. Nice though it is to have good relations with trading nations across the globe, it seems strange that the other side of the world is seen as good and next door is seen as bad. It’s like looking through the wrong end of a telescope.

Back on 23rd June 2016, voters in the UK were asked if Britain should leave the EU. No one really knew what “leave” meant, as all sorts of what now turn out to be blatant lies were told to the public. The words “customs union” were not spoken in 2016. If they were, it was in a tone of: don’t worry about all that, we hold all the cards, nothing will change.

Today, UK sectors from fishing to aviation, farming to science report being bogged down in ever more red tape, struggling to recruit staff, and racking up losses. Sure, Brexit is not the only trouble in the world, but it was avoidable unlike the pandemic and Putin’s war.

We (the UK) became a country that imposed sanctions on itself. A unique situation in Europe. If some people are surprised that we have significant problems, they really ought to examine what happened in 2016. It’s a textbook example of how not to do things. The events will probably be taught in schools and universities for generations to come as a case of national self-harm.

Democracy is invaluable, but when a government dilutes a massive question into a simple YES or NO, they dilute democracy too. It’s the territory that demagogues thrive in. Mainly because this approach encourages the polarisation that then drives ever more outlandish claims about opponents. The truth gets buried under a hail of campaign propaganda, prejudice, and misinformation.

What Brexit has stimulated – a growth sector, I might say – is the blame game. Now, when things go wrong, UK politicians can always blame those across the other side of the Channel. Standing on the cliffs at Dover, it’s easy to survey the mess and point a finger out to sea.

If some people’s motivation for voting for Brexit was to control borders and stop immigration, the failures are so obvious that they hardly need to be pointed out. Yet, politicians persist with the myth that a solution is just around the corner if only UK laws were made ever more draconian. A heavier hand, criminalisation and the blame game are not solutions. These acts will merely continue the round of calamities and failures.

Brexit has unlocked a grand scale of idiocy. The solution is to consign this dogma to the past.

Who’s in control?

The subject of artificial intelligence (AI) in an aircraft cockpit stirs up reactions that are both passionate and pragmatic. Maybe it’s a Marmite issue[1]. Mention of the subject triggers an instant judgement.

Large passenger transport civil aircraft are flown by two human operators. Decisions are made by those two human operators. They are trained and acquire experience doing the job of flying. A word that has its origins in the marine world is used to describe their role – pilot.

One of my roles, early in my career, was to lead the integration of a cockpit display system into a large new helicopter[2]. New, at the time. The design team I was part of comprised people with two different professional backgrounds. One group had an engineering background, like me, and the other had qualifications in psychology. The recognition that an aircraft cockpit is where the human and machine meet is not new. A lot of work was done in simulation with flight crews.

The first generation of jet aircraft put the pilot in full-time command. As we moved away from purely mechanical interactions with aircraft, the balance of flight control came to be shared between pilot and aircraft systems. There’s no doubt, in the numbers, that this has improved aviation safety.

Nobody is calling for the removal of aircraft autopilot systems. Much of the role of the formerly required flight engineer has been integrated into the aircraft systems. Information is compressed and summarised on flat screen displays in the aircraft cockpit.

Today, AI is not just one thing. There’s a myriad of different types and configurations, some of which are frozen and some of which are constantly changing as they learn and grow. That said, a flawless machine is a myth. Now, that’s a brave statement. We are generations away from a world where sentient machines produce ever better machines. It’s the stuff of sci-fi.

As we have tried to make ever more capable machines, failures have been a normal part of evolution. Those cycles of attempt and failure will need to run into the billions before human capabilities are fully matched. Yes, I know that’s an assertion, but it has taken humans more than a million years to get to have this discussion. That’s with our incredible brains.

What AI can do well is enhance human capabilities[3]. Let’s say that, of all the billions of combinations and permutations an aircraft in flight can experience, a failure occurs that is not expected, not trained for, and not easily understood. This is where the benefits and speed of AI can add a lot. An aircraft system using AI should be able to consider a massive number of potential scenarios and provide a selection of viable options to a flight crew. In time-critical events AI can help.
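As a thought experiment only, a decision-support aid of this kind might rank known failure scenarios by how well they match what the crew is seeing, and offer the most plausible options first. The scenario names, symptoms and options below are invented for illustration and bear no relation to any real aircraft system or checklist.

```python
# Hypothetical sketch: rank candidate failure scenarios by symptom overlap
# with what the crew observes, then surface the best-matching options.
# All scenario data here is made up for illustration.

def rank_options(observed, scenarios, top_k=3):
    """Score each known scenario by how well its symptoms match the
    observed ones, and return the best options, most plausible first."""
    scored = []
    for scenario in scenarios:
        overlap = len(observed & scenario["symptoms"])       # matching symptoms
        penalty = len(scenario["symptoms"] - observed)       # expected but absent
        scored.append((overlap - 0.5 * penalty, scenario))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [(s["name"], s["option"]) for score, s in scored[:top_k] if score > 0]

scenarios = [
    {"name": "hydraulic leak", "symptoms": {"low hyd pressure", "fluid qty drop"},
     "option": "isolate affected circuit"},
    {"name": "gauge fault", "symptoms": {"low hyd pressure"},
     "option": "cross-check second gauge"},
    {"name": "engine surge", "symptoms": {"egt spike", "vibration"},
     "option": "reduce thrust on affected engine"},
]

observed = {"low hyd pressure", "fluid qty drop"}
for name, option in rank_options(observed, scenarios):
    print(name, "->", option)
```

The point of the sketch is the division of labour: the machine sifts the huge space of possibilities at speed, while the final judgement stays with the flight crew.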

The road where AI replaces a pilot in the cockpit is a dead end. The road where AI helps a pilot in managing a flight is well worth pursuing. Don’t set the goal at replacing humans. Set the goal at maximising the unique qualities of human capabilities.


[1] https://www.macmillandictionary.com/dictionary/british/marmite_2

[2] https://en.wikipedia.org/wiki/AgustaWestland_AW101

[3] https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

Pause

An open letter has been published[1]. Not for the first time. It asks those working on Artificial Intelligence (AI) to take a deep breath and pause their work. It’s signed by AI experts and interested parties, like Elon Musk. This is a reaction to the competitive race to launch ever more powerful AI[2]. For technology launches, it’s taking fewer and fewer years to reach a billion users.

If the subject was genetic manipulation, the case for a cautious step-by-step approach would be easily understood. However, the digital world, and its impact on our society’s organisation, isn’t viewed as being as important as genetics. Genetically Modified (GM) crops got people excited and anxious. An artificially modified social and political landscape doesn’t seem to concern people quite so much. It may be that the basis for this ambivalence is a false view that we are more in control of one than the other. It’s more likely this ambivalence stems from a lack of knowledge.

One response to the open letter[3] I saw was this: “A lot of fearmongering luddites here! People were making similar comments about the pocket calculator at one time!” This is to totally misunderstand what is going on with the rapid advance of AI. I think the impact on society of the proliferation of AI will be greater than that of the invention of the internet. It will change the way we work, rest and play. It will do it at remarkable speed. We face an unprecedented challenge.

I’m not for one moment advocating a regulatory regime driven by societal puritans. The open letter is not proposing a ban. What’s needed is a regulatory regime that can moderate aggressive advances so that knowledge can be acquired about the impacts of AI.

Yesterday, a government policy was launched in the UK. The problem with saying that there will be no new regulators, and that regulators will need to act within existing powers, is obvious. It’s a diversion of resources away from existing priorities to address challenging new priorities.
That, in and of itself, is not an original regulatory dilemma. It could be said that’s why we have sewage pouring into rivers up and down the UK.

In an interview, Conservative Minister Paul Scully MP mentioned sandboxing as a means of complying with policy. This is to create a “safe space” to try out a new AI system before launching it on the world. It’s a method of testing and trials that is useful for gaining an understanding of conventional complex systems. The reason this is not easily workable for AI is that it’s not possible to build enough confidence that AI will be safe, secure and perform its intended function without running it live. For useful AI systems, even the slightest change in the start-up conditions or training can produce drastically different outcomes. A live AI system can be like shifting sand. It will build up a structure to solve problems, and do it well, but the characteristics of its internal workings will vary significantly from one similar system to another. Thus, the AI system’s workings as they are run through a sandbox exercise may be unlike the same system’s workings running live. Which leads to the question: what confidence can a regulator, with an approval authority, have in a sandbox version of an AI system?

Pause. Count to ten and work out what impacts we must avoid. And how to do it.
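The shifting-sand point shows up in even a toy learner. In this hedged Python sketch (a single perceptron, nothing like a production AI system), two training runs that differ only in their random seed both solve the same problem, yet end up with different internal weights – the very thing a sandbox exercise would struggle to pin down.

```python
import random

def train_perceptron(seed, data, epochs=200, lr=0.1):
    """Train a single-neuron classifier; its start-up state and the order
    in which it sees the training data both depend on the seed."""
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)]
    b = rng.uniform(-1.0, 1.0)
    samples = list(data)
    for _ in range(epochs):
        rng.shuffle(samples)  # training order also varies with the seed
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(model, data):
    w, b = model
    hits = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label
               for (x1, x2), label in data)
    return hits / len(data)

# A simple, separable toy problem: label is 1 when x1 + x2 exceeds 1.
data = [((x1 / 10, x2 / 10), 1 if x1 + x2 > 10 else 0)
        for x1 in range(11) for x2 in range(11)]

m1 = train_perceptron(seed=1, data=data)
m2 = train_perceptron(seed=2, data=data)

# Both models solve the task, yet their internal weights differ.
print(accuracy(m1, data), accuracy(m2, data))
print(m1, m2)
```

If two copies of a trivially small learner can diverge internally like this, the sandboxed version of a large, continuously trained AI system is an even weaker guide to what the live system will actually do.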

Policy & AI

Today, the UK Government published an approach to Artificial Intelligence (AI)[1]. It’s in the form of a white paper. That’s a policy document created by the Government that sets out its proposals for future legislation.

This is a big step. Artificial Intelligence (AI) attracts both optimism and pessimism. Utopia and dystopia. There are a lot more people who sit in these opposing camps than there are who sit in the middle. It’s big. Unlike any technology that has been introduced to the whole populace.

On Friday last, I caught the film I, Robot (2004)[2] showing early evening on Film 4. It’s difficult to believe this science fiction is nearly 20 years old, and that the Isaac Asimov stories on which it’s based are from the 1950s. AI is a fertile space for the imagination to range over.

Fictional speculation about AI has veered towards the dystopian end of the scale. Although that’s not the whole story by far. One example of good AI is the sentient android in the Star Trek universe. The android “Data”, serving aboard the USS Enterprise, strives to help humanity and be more like us. His attempts to understand human emotions are often significant plot points. He’s a useful counterpoint to evil alien intelligent machines that predictably aim to destroy us all.

Where fiction helps is to give an airing to lots of potential scenarios for the future. That’s not trivial. Policy on this rapidly advancing subject should not be narrowly based or dogmatic.

Where there isn’t great debate is over the high-level objectives that society should endeavour to achieve. We want technology to do no harm. We want technology to be trustworthy. We want technology to be understandable.

Yet, we know from experience, that meeting these objectives is much harder than asserting them. Politicians love to assert. In the practical world, it’s public regulators who will have to wrestle with the ambitions of industry, unforeseen outcomes, and negative public reactions.

Using the words “world leading” repeatedly is no substitute for resourcing regulators to beef up their capabilities when faced with rapid change. Vague and superficial speeches are fine in context. After all, there’s a job to be done maintaining public confidence in this revolutionary technology.

What’s evident is that we should not delude ourselves. This technical transformation is unlike any we have so far encountered. Its radical nature and speed mean that even when Government and industry work together, they are still going to be behind the curve.

As a fictional speculation an intelligent android who serves as a senior officer aboard a star ship is old school. Now, I wonder what we would make of an intelligent android standing for election and becoming a Member of Parliament?


[1] The UK’s AI Regulation white paper was published on Wednesday, 29 March 2023. Organisations and individuals involved in the AI sector are encouraged to provide feedback on the white paper through a consultation running until Tuesday, 21 June 2023.

[2] https://en.wikipedia.org/wiki/I,_Robot_(film)

Digital toxicity

There’s a tendency to downplay the negative aspects of the digital transition that’s happening at pace. Perhaps it’s acceptance of the inevitability of change that leaves only hushed voices of objection.

A couple of simple changes struck me this week. One was my bank automatically moving me to an on-line statement and the other was a news story about local authorities removing pay machines from car parks on the assumption everyone has a mobile phone.

With these changes there’s a high likelihood that difficulties are going to be caused for a few people. Clearly, the calculation of the banks and local authorities is that the majority rules. Exclusion isn’t their greatest concern but saving money is high on their list of priorities.

The above aside, my intention was to write about more general toxic impacts of the fast-moving digital transition. Now, please don’t get me wrong. In most situations such a transition has widespread benefits. What’s of concern is the few mitigations for any downsides.

Let’s list a few negatives that may need more attention.

Addiction. With social media this is unquestionable[1]. After all, digital algorithms are developed to get people engaged and keep them engaged for as long as possible. It’s the business model that brings in advertising revenues. There’s FOMO too. That’s the fear of missing out on something new or novel that others might see but you might not.

Attention. Rapidly stroking a touch screen to move from image to image, or video to video, encourages less attention to be given to any one piece of information. What research there is shows a general decline in attention span[2] as a characteristic of being subject to increasing amounts of easily available information.

Adoration. Given that so many digital functions are provided with astonishing accuracy, availability, and speed there’s a natural inclination to trust their output. When that trust is justifiable for a high percentage of the time, the few times information is in error can easily be ignored or missed. This can lead to people defending or supporting information that is wrong[3] or misleading.

It’s reasonable to say there are downsides with any use of technology. That said, it’s as well to try to mitigate those that are known about and understood. The big problem is the cumulative effect of the downsides. This can increase fragility and vulnerability of the systems that we all depend upon.

If digital algorithms were medicines or drugs, there would be a whole array of tests conducted before their public release. Some would be strongly regulated. I’m not saying that’s the way to go but it’s a sobering thought.


[1] https://www.theguardian.com/global/2021/aug/22/how-digital-media-turned-us-all-into-dopamine-addicts-and-what-we-can-do-to-break-the-cycle

[2] https://www.kcl.ac.uk/news/are-attention-spans-really-collapsing-data-shows-uk-public-are-worried-but-also-see-benefits-from-technology

[3] https://www.bbc.co.uk/news/business-56718036

Comms

The long history of data communications between air and ground has had numerous stops and starts. It’s not new to use digital communications while flying around the globe. That said, it has not been cheap, and traditional systems have evolved only slowly. If we think Controller Pilot Data Link Communications (CPDLC)[1] is quite whizzy, it’s not. It belongs to the Windows 95 generation. Clunky messages and limited applications.

The sluggishness of adoption of digital communications in commercial aviation has been for several reasons. For one, standardised, certified, and maintainable systems and equipment have been expensive. It’s not just the purchase and installation but the connection charges that mount-up.

Unsurprisingly, aircraft operators have moved cautiously unless they can identify an income stream to be developed from airborne communication. That’s one reason why the passengers accessing the internet from their seats can have better connections than the two-crew in the cockpit.

Larger nations’ military flyers don’t have a problem spending money on airborne networking. For them it’s an integral part of being able to operate effectively. In the civil world, each part of the aviation system must make an economic contribution or be essential to safety to make the cut.

The regulatory material applicable to Airborne Communications, Navigation and Surveillance (CS-ACNS)[2] can be found in publications coming from the aviation authorities. This material has the purpose of ensuring a high level of safety and aircraft interoperability. Much of this generally applicable material has evolved slowly over the last 30-years.

Now, it’s good to ask: is this collection of legacy aviation systems going to be changed by the new technologies rapidly coming on-stream this year? Or are the current mandatory equipage requirements likely to stay the same, but be greatly enhanced by cheaper, faster, and lower-latency digital connections?

This year, Starlink[3] is offering high-speed, in-flight internet connections with global connectivity. This company is not the only one developing Low Earth Orbit (LEO)[4] satellite communications. There are technical questions to be asked in respect of safety, performance, and interoperability, but it’s a good bet that these new services will be very capable and, what’s more, not so expensive[5].

It’s time for airborne communications to step into the internet age.

NOTE: The author was a part of the EUROCAE/RTCA Special Committee 169 that created Minimum Operational Performance Standards for ATC Two-Way Data Link Communications back in the 1990s.

POST 1: Elon Musk’s Starlink Internet Service Coming to US Airlines; Free WiFi (businessinsider.com)

POST 2: With the mandate of VDLM2 we evolve at the pace of a snail. Internet Protocol (IP) Data Link may not be suitable for all uses but there’s a lot more that can be done.


[1] https://skybrary.aero/articles/controller-pilot-data-link-communications-cpdlc

[2] https://www.easa.europa.eu/en/document-library/easy-access-rules/easy-access-rules-airborne-communications-navigation-and

[3] https://www.starlink.com/

[4] https://www.esa.int/ESA_Multimedia/Images/2020/03/Low_Earth_orbit

[5] https://arstechnica.com/information-technology/2022/10/starlink-unveils-airplane-service-musk-says-its-like-using-internet-at-home/

Small Boats

Are there really a hundred million people coming to Britain? Or is this a desperate scare tactic adopted by a Conservative Minister who has run out of workable ideas? It’s certainly the sort of tabloid headline that a lot of conservative supporters like to read. As we saw in the US with former President Trump’s rhetoric on building a wall, these themes stir up negative emotions and prejudice. It’s a way of dividing people.

Xenophobia is defined as a fear and hatred of strangers or foreigners or of anything that is strange or foreign. With nearly 8 billion people on Earth[1] the potential for this destructive fear to be exploited has never been greater. Here, the Conservative Party is increasingly dominated by xenophobia and demagoguery, whatever a change of leadership may be trying to cover-up.

Will Parliamentary debate save us from the worst instincts highlighted in the Government’s latest proposals on small boat crossings? That’s a big question when the ruling political party has such a large parliamentary majority. Debate is likely to be heated and lacking objectivity.

Pushing the boundaries of international law can cause reputational damage, even if these rum proposals are defeated. However, what concerns most commentators is the high likelihood that the proposed measures will not work. They are merely a more extreme version of past failed policies.

One of the poorest political arguments is to criticise an opponent for reasoned opposition. It goes like this: here’s my policy, and by opposing it without providing your own, you automatically make my policy a good one. It’s like planning to build a dangerously rickety bridge, likely to fail, and pointing to those who criticise the project as a reason why it’s a good project.

When spelt out like this, it’s clear how curiously subversive this shoddy bombast can be. However, one of the basic party-political instincts, to seek headlines and publicity, has overridden common sense in this case. For the Government, legislating regardless of the consequences is an act of political desperation. Sadly, that’s where we are in this pre-election period.

NOTE: In June 2022, the UK had a prison population of roughly 89,520 people. The detention facilities needed to enable the Government’s small boats policy would need to be in the region of 40,000 people. Yet, there’s no published plan for a significant expansion of detention facilities. 


[1] https://www.census.gov/popclock/world