An open letter has been published[1]. Not for the first time. It asks those working on Artificial Intelligence (AI) to take a deep breath and pause their work. It's signed by AI experts and interested parties, like Elon Musk. This is a reaction to the competitive race to launch ever more powerful AI[2].

With each new technology launch, it takes fewer years to reach a billion users. If the subject were genetic manipulation, the case for a cautious, step-by-step approach would be easily understood. However, the digital world, and its impact on how our society is organised, isn't viewed as being as important as genetics. Genetically Modified (GM) crops got people excited and anxious. An artificially modified social and political landscape doesn't seem to concern people quite so much. It may be that this ambivalence rests on a false belief that we are more in control of one than the other. More likely, it stems from a lack of knowledge.

One response to the open letter[3] I saw ran thus: "A lot of fearmongering luddites here! People were making similar comments about the pocket calculator at one time!" This is to totally misunderstand what is going on with the rapid advance of AI. I think the impact on society of the proliferation of AI will be greater than that of the invention of the internet. It will change the way we work, rest and play. It will do so at remarkable speed. We face an unprecedented challenge.

I'm not for one moment advocating a regulatory regime driven by societal puritans. The open letter is not proposing a ban. What's needed is a regulatory regime that can moderate aggressive advances so that knowledge can be acquired about the impacts of AI.

Yesterday, a government policy was launched in the UK. The problem with saying that there will be no new regulators, and that regulators will need to act within their existing powers, is obvious. It's a diversion of resources away from existing priorities to address challenging new ones. That, in and of itself, is not an original regulatory dilemma. It could be said that's why we have sewage pouring into rivers up and down the UK.

In an interview, Conservative Minister Paul Scully MP mentioned sandboxing as a means of complying with the policy. This is to create a "safe space" in which to try out a new AI system before launching it on the world. It's a method of testing and trials that is useful for gaining an understanding of conventional complex systems.

The reason this is not easily workable for AI is that it's not possible to build enough confidence that an AI system will be safe, secure and perform its intended function without running it live. For useful AI systems, even the slightest change in the start-up conditions or training can produce drastically different outcomes. A live AI system can be like shifting sand. It will build up a structure to solve problems, and do it well, but the characteristics of its internal workings will vary significantly from one similar system to another (the toy example at the end of this piece illustrates the point). Thus, an AI system's workings as it runs through a sandbox exercise may be unlike the same system's workings running live. Which leads to the question: what confidence can a regulator, with an approval authority, have in a sandboxed version of an AI system?

Pause. Count to ten and work out what impacts we must avoid. And how to do it.
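To make the "shifting sand" point concrete, here is a minimal, purely illustrative sketch (my addition, not drawn from any cited source): two copies of the same tiny neural network, trained on identical data but initialised with different random seeds, can both solve the task while arriving at noticeably different internal weights. Every specific in it (the XOR task, the network size, the seeds, the learning rate) is an assumption chosen for brevity.

```python
# Illustrative sketch only: two identical tiny networks, same training data,
# different random seeds. Both typically learn XOR, yet their internal weights
# differ markedly -- the "shifting sand" problem for sandbox-based approval.
import numpy as np

def sig(z):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-z))

def train_xor_net(seed, epochs=10000, lr=1.0):
    """Train a 2-2-1 sigmoid network on XOR from a seed-dependent init."""
    rng = np.random.default_rng(seed)
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([[0.0], [1.0], [1.0], [0.0]])
    W1, b1 = rng.normal(size=(2, 2)), np.zeros((1, 2))   # hidden layer
    W2, b2 = rng.normal(size=(2, 1)), np.zeros((1, 1))   # output layer
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                      # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)       # backprop: output layer
        d_h = (d_out @ W2.T) * h * (1 - h)        # backprop: hidden layer
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0, keepdims=True)
    return W1, out

W1_a, pred_a = train_xor_net(seed=0)
W1_b, pred_b = train_xor_net(seed=1)
print("Predictions (seed 0):", pred_a.ravel().round(2))  # both close to 0 1 1 0
print("Predictions (seed 1):", pred_b.ravel().round(2))
print("Hidden weights (seed 0):\n", W1_a.round(2))       # but the internals
print("Hidden weights (seed 1):\n", W1_b.round(2))       # differ markedly
```

The exact numbers will vary from run to run, but the contrast is the point: near-identical outward behaviour can sit on top of divergent internal structure, so a regulator inspecting the sandboxed copy is not necessarily inspecting the system that goes live.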
Policy & AI
Today, the UK Government published an approach to Artificial Intelligence (AI)[1]. It's in the form of a white paper. That's a policy document created by the Government that sets out its proposals for future legislation.
This is a big step. Artificial Intelligence attracts both optimism and pessimism. Utopia and dystopia. There are a lot more people who sit in these opposing camps than there are who sit in the middle. It's big. Unlike any technology that has been introduced to the whole populace.
On Friday last, I caught the film I, Robot (2004)[2] showing early evening on Film 4. It's difficult to believe this science fiction is nearly 20 years old, and that the Isaac Asimov short story on which it's based is from the 1950s. AI gives the imagination a vast space to range over.
Fictional speculation about AI has veered towards the dystopian end of the scale, although that's not the whole story by far. One example of good AI is the sentient android in the Star Trek universe. The android "Data", serving aboard the USS Enterprise, strives to help humanity and to be more like us. His attempts to understand human emotions are often significant plot points. He's a useful counterpoint to the evil alien intelligent machines that predictably aim to destroy us all.
Where fiction helps is in giving an airing to many potential scenarios for the future. That's not trivial. Policy on this rapidly advancing subject should not be narrowly based or dogmatic.
Where there isn't a great debate is over the high-level objectives that society should endeavour to achieve. We want technology to do no harm. We want technology to be trustworthy. We want technology to be understandable.
Yet we know from experience that meeting these objectives is much harder than asserting them. Politicians love to assert. In the practical world, it's public regulators who will have to wrestle with the ambitions of industry, unforeseen outcomes, and negative public reactions.
Using the words "world leading" repeatedly is no substitute for resourcing regulators to beef up their capabilities when faced with rapid change. Vague and superficial speeches are fine in context. After all, there's a job to be done maintaining public confidence in this revolutionary technology.
What's evident is that we should not delude ourselves. This technical transformation is unlike any we have so far encountered. Its radical nature and speed mean that even when Government and industry work together, they are still going to be behind the curve.
As fictional speculation, an intelligent android who serves as a senior officer aboard a starship is old school. Now, I wonder what we would make of an intelligent android standing for election and becoming a Member of Parliament?
[1] The UK's AI Regulation white paper was published on Wednesday, 29 March 2023. Organisations and individuals involved in the AI sector are encouraged to provide feedback on the white paper through a consultation which launched the same day and runs until Tuesday, 21 June 2023.