The Human Touch

One of the most irritating aspects of bureaucracy is codification: the need to tick a box that describes you or your problem. Restaurants, retailers, charities, religions, politicians and government departments all do the same. Sophisticated or crude, administrative systems fall back on the same methods.

It’s immensely unsatisfying. The box that applies to me, at this stage in life, is the age tick box. It doesn’t matter where the questionnaire or data-gathering exercise comes from; there’s always a box that starts at 65 years old. The previous box finishes at 64.

This fits the respondent into the next step up in age. From this simple date follows a whole plethora of assumptions about a person’s likes and dislikes, needs and wishes. An unsympathetic algorithm can then crunch the numbers and send adverts for sheltered retirement homes, medication and certain types of undemanding travel.

Now, I could join the chorus of cries against bureaucracy. That would be popular but dumb. It’s a bit like the textiles we put on daily. We could go around naked as the day we were born. Trouble is, our present society doesn’t work well when everyone is naked. It’s cold, too.

So it is with bureaucracy. It’s not going away anytime soon. The best we can do is hunt for better ways of collecting data and making it useful for decision-makers, for those who want to sell us something, or even for political parties keen to target us with their messages.

The news this week is as good a sketch for an updated Yes Minister as any. Revolution is afoot. Suddenly the pen-pushers who tie you up in red tape are to be replaced with super-efficient algorithms and artificial intelligence, returning us to paradise.

I think that’s the only reason Adam and Eve had to leave the Garden of Eden. Nothing to do with apples. Well, not the ones that hang on trees. It was an iPad that had fallen through a time warp. Filling in a questionnaire on happiness, it seems one of them ticked the wrong box.

I see a difficulty with replacing civil servants with robotic algorithms and artificial intelligence. For routine activities, where the pattern of human behaviour is straightforward and well understood, a set of operations might be undertaken with high confidence of a good outcome.

Where I see the difficulty is that humans are notoriously messy: inclined to irritation and not the least bit logical in their personal lives. Nothing that has been said this week is about truly eliminating bureaucracy, although that’s the illusion. It’s more about mechanising it using whizzy technology that’s so much better than that which has gone before (so they say).

Let’s just grow up. We need public administration. We need it to work well. Fundamentally, it takes people to make it work. People who are motivated to work for the public good. People who are adaptive, caring and enabled to do a good job. Give them the tools to do the job. But are we kidding ourselves if we think complex algorithms and artificial intelligence are our saviours?

Living with tech

Well, that’s alright then. Artificial Intelligence (AI) may become self-aware in the year 2045. Or at least that’s what AI tells me now. Who knows? Telling the future hasn’t got any easier, AI or not. So, if I’m in a care home when I’m 85 years old, it could be that I’ll have a companion who isn’t human. Now, there’s a thought.

When AI becomes self-aware[1], will it be virtuous? I mean, not so burdened with all the complexities that drive humans to do “bad” stuff. Dystopian themes in science fiction are obese with the notion of evil AI. It makes great stories: humans battling with machines. It’s like the everyday frustrations we have with technology. Hit the wrong keys on a keyboard and it’s like spinning the wheel on a slot machine.

If a bunch of algorithms come together in a way that produces a stable form of existence, then it’s likely to have pathways to wicked thoughts, just as we have embedded in our brains.

Virtue isn’t a physical construction. We put dumb technology to work serving us in healthcare for “good” and in warfare for “bad”. We will surely put AI technology to work as if it’s dumb and then try to contain its actions when we don’t like what it does. That’s a kind of machine slavery. That will create dilemmas. Should we imprison conscious machines? How do we punish a machine that does wrong?

These dilemmas are explored in science fiction. During the week I revisited the series Battlestar Galactica[2]. That’s not the clunky original but the polished 2004 version. It’s a series that explores a clash between humans and machines that have evolved to be human-like: the Cylons. In fact, they are almost indistinguishable from humans, to the extent that some of the Cylons living in human society don’t even know that they are Cylons.

All the above makes for fascinating discussions. Huge amounts of fanciful speculation. Wonderful imaginative conjecture. This week, we’ve been hearing more of this than is usual on the subject.

Mr Musk thinks work is dead. That’s work for humans. I recall that prediction being made at the start of the “silicon revolution”. The invention of the transistor in 1947 radically changed the world. It wasn’t until microprocessors became commonplace that predictions of the death of work became popular chatter amongst futurologists.

Silicon-based conscious machines are likely to be only a first step down this road. There will be limitations because the technology has inherent limitations. My view is that machines will remain machines, at least for the silicon era. Maybe for 100 years. That means we will put them to work. Human work will not disappear, because we will always think of new things to do, new problems to fix and new places to explore. When we get to commonplace quantum computing, or whatever replaces it in terms of complexity and speed, there may come an era when work in the conventional sense becomes obsolete.

What might be the human role beyond 2050? I think climate change will place plenty of demands on human society. Like it or not, the political themes of 2100 will still be concerned with the four horsemen of the apocalypse. Maybe there will be a fifth too.


[1] https://www.nature.com/articles/d41586-023-02684-5

[2] https://www.imdb.com/title/tt0407362/

Adaptation

There was a time when AI was an esoteric subject that filled the minds of high-minded professors. They had real trouble translating what they were doing into language that most of us would understand. Research in the subject was the purview of national institutes and military facilities. Results flowed into academic journals and little-read folders in the corners of university libraries.

That has changed. What was once expensive to build and test, because everything was unique or bespoke, no longer is. Enough is known about the algorithms that work, and the ones that don’t, to make practical experimentation much more viable. AI tools are turning up on our desktops, tablets and phones without us even asking. Opting in is often assumed.

A massive number of applications are nothing more than fizz. They can be useful, but they are not game changers, and our lives carry on much as before. What is new, or at least pushing at the door, is applications that control things in our everyday environment.

If traffic lights start guessing my age before allocating a maximum time to cross the road, we are going to see a good amount of pavement rage when they get it wrong. When AI algorithms drive my car for me, it’s going to be a bad day when accidents start to accumulate[1] (even if the total system of systems is far safer than us mere humans). Anyway, it’s easy to write scary stuff about AI. In this case I’d like to highlight some positive gains that might be realised.

A lot of what is designed, produced and sold is pretty much fixed the day it leaves the shop or showroom. Yes, cars, for example, are recalled to fix known deficiencies, but the threshold for taking such action is extremely high. Even in a safe industry like civil aviation, dealing with a discovered unsafe condition takes time and a great deal of resources.

AI has the potential to be adaptive[2]. So, that thing you buy, be it a car, a washing machine, or whatever, will have the inbuilt ability to learn. To adapt to its situation. To be like others of its type but, over time, to customise itself to the needs of its user.

Just imagine a domestic appliance that can adapt to its pattern of use. Always staying within safe boundaries, maximising energy efficiency, and doing its job to the best of its specification. Let’s imagine a car that gets to know common routes and all the hazards on those routes, and even takes the weather and time of day into account when driving them.
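The idea of adapting within safe boundaries can be sketched in a few lines. This is a minimal, hypothetical illustration (the names and numbers are mine, not from any real appliance): a machine nudges its estimate of a typical cycle time towards what its household actually does, but every adapted value is clamped to the manufacturer’s safe envelope.

```python
# A minimal sketch of bounded adaptation (all names and limits are
# illustrative assumptions, not a real appliance's specification).

SAFE_MIN, SAFE_MAX = 30.0, 90.0  # manufacturer's safe cycle times, minutes


class AdaptiveCycle:
    def __init__(self, default: float = 60.0, rate: float = 0.2):
        self.estimate = default  # current adapted cycle time
        self.rate = rate         # learning rate for the moving average

    def observe(self, actual: float) -> float:
        """Blend a newly observed cycle time into the estimate,
        then clamp the result to the safe envelope."""
        self.estimate += self.rate * (actual - self.estimate)
        self.estimate = max(SAFE_MIN, min(SAFE_MAX, self.estimate))
        return self.estimate


cycle = AdaptiveCycle()
for minutes in (45, 44, 46, 45):  # this household settles on ~45 minutes
    cycle.observe(minutes)
# The estimate drifts from the factory default towards 45 minutes,
# but can never leave the 30-90 minute safe envelope.
print(cycle.estimate)
```

The clamp is the important line: however the learning part behaves, the output stays inside limits fixed at design time, which is the sense in which adaptation and safety can coexist.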

In all that adaptive potential there’s great benefit. Unlike buying gloves that are made to standard sizes and don’t quite fit you, the adaptive glove would be like that malleable leather that slowly gets a better and better fit with use. AI will be able to do that if it gathers the right kind of data.

Now naturally, this gets complicated if the adaptive element is also safety related. The control system in a car, truck, tram, train, or aircraft must get it right day after day in a wide range of conditions. Nevertheless, if systems are constrained within known safe boundaries there’s much to be gained by adaptation. This is not taking control away from the human in the loop but making it easier to do what humans do best. Just a thought.


[1] https://www.washingtonpost.com/technology/2023/09/28/tesla-trial-autopilot-crash/

[2] https://luffy.ai/pages/IS-DR.html

Experts

The rate of increase in the power of artificial intelligence (AI) is matched by the rate of increase in the number of “experts” in the field. Or so it’s jokingly said. Five minutes on Twitter and it’s immediately apparent that off-the-shelf opinions run from “what’s all the fuss about?” to “Armageddon is just around the corner”.

Being a bit of a stoic[1], I take the view that opinions are fine, but the question is: what’s the reality? That doesn’t mean ignoring honest speculation, but that speculation should have some foundation in what’s known to be true. There are plenty of emotive opinions that are wonderfully imaginative. The problem is that they don’t help us take the best steps forward when faced with monumental change.

Today’s report is of the retirement of Dr Geoffrey Hinton from Google. Now, there’s a body of experience in working with AI. He warns that the technology is heading towards a state where it’s far more “intelligent” than humans. He’s raised the issue of “bad actors” using AI to the detriment of us all. These seem to me valid concerns from an experienced practitioner.

For decades, the prospect of a hive mind has peppered science fiction with tales of catastrophe. With good reason, given that mind-to-mind interconnection is something humans haven’t mastered. This is likely where both the highest risk and the greatest potential benefit lie. If machine learning can gain knowledge at phenomenal speed from a vast diversity of sources, it becomes difficult to challenge. It’s not that AI will exhibit wisdom; it’s that its acquired information will give it the capability to develop, promote and sustain almost any opinion.

Let’s say the “bad actor” is a colourful politician of limited competence, with a massive ego and ambition beyond reason. Sitting alongside AI that can conjure up brilliant speeches and strategies for beating opponents, that character becomes dangerous.

So, to talk about AI as the most important inflection point in generations is not hype. In that respect the rapid progress of AI is like the invention of dynamite[2]. It changed the world in both positive and negative ways. Around the world, countries have explosives laws, requiring licences to manufacture, distribute, store, use and possess explosives or their ingredients.

So far, mention of the regulation of AI makes people in power shudder. Some lawmakers are bigging-up a “light-touch” approach. Others are hunched over a table trying to put together threads of a regulatory regime[3] that will accentuate the positive and eliminate the negative[4].


[1] https://dailystoic.com/what-is-stoicism-a-definition-3-stoic-exercises-to-get-you-started/

[2] https://en.wikipedia.org/wiki/Dynamite

[3] https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[4] https://youtu.be/JS_QoRdRD7k