An open letter has been published. Not for the first time. It asks those working on Artificial Intelligence (AI) to take a deep breath and pause their work. It’s signed by AI experts and interested parties, like Elon Musk. It’s a reaction to the competitive race to launch ever more powerful AI. With each new technology launch, it takes fewer and fewer years to reach a billion users.

If the subject were genetic manipulation, the case for a cautious, step-by-step approach would be easily understood. However, the digital world, and its impact on how our society is organised, isn’t viewed as being as important as genetics. Genetically Modified (GM) crops got people excited and anxious. An artificially modified social and political landscape doesn’t seem to concern people quite so much. Maybe the basis for this ambivalence is a false belief that we are more in control of one than of the other. More likely, it stems from a lack of knowledge.

One response to the open letter I saw ran thus: “A lot of fearmongering luddites here! People were making similar comments about the pocket calculator at one time!” This totally misunderstands what is going on with the rapid advance of AI. I think the impact on society of the proliferation of AI will be greater than that of the invention of the internet. It will change the way we work, rest and play. It will do so at remarkable speed. We face an unprecedented challenge.

I’m not for one moment advocating a regulatory regime driven by societal puritans. The open letter is not proposing a ban. What’s needed is a regulatory regime that can moderate aggressive advances so that knowledge can be acquired about the impacts of AI.

Yesterday, a government policy was launched in the UK. The problem with saying that there will be no new regulators, and that regulators will need to act within existing powers, is obvious: it diverts resources away from existing priorities to address challenging new ones.
That, in and of itself, is not an original regulatory dilemma. It could be said that’s why we have sewage pouring into rivers up and down the UK.

In an interview, Conservative Minister Paul Scully MP mentioned sandboxing as a means of complying with the policy. This is the creation of a “safe space” in which to try out a new AI system before launching it on the world. It’s a method of testing and trials that is useful for understanding conventional complex systems.

The reason this is not easily workable for AI is that it’s not possible to build enough confidence that an AI system will be safe, secure and perform its intended function without running it live. For useful AI systems, even the slightest change in start-up conditions or training can produce drastically different outcomes. A live AI system can be like shifting sand. It will build up a structure to solve problems, and do it well, but the characteristics of its internal workings will vary significantly from one similar system to another. Thus, an AI system’s workings as it is run through a sandbox exercise may be unlike the same system’s workings when running live. Which leads to the question: what confidence can a regulator, with an approval authority, have in a sandbox version of an AI system?

Pause. Count to ten and work out what impacts we must avoid. And how to do it.
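A postscript for the technically inclined. The sensitivity described above can be sketched with a deliberately tiny, hypothetical example (my own toy model, not any real AI system): two models trained from different random starting weights can end up making identical predictions while their internal parameters settle in quite different places, so inspecting one trained copy tells you little about the internals of another.

```python
import random

def train(seed, steps=5000, lr=0.01):
    """Fit y = a * b * x to the target y = 2 * x by gradient descent.

    Only the product a * b matters for predictions; how that product
    is split between a and b is an internal detail with many equally
    good solutions, so different random starts settle differently.
    """
    rng = random.Random(seed)
    a = rng.uniform(0.5, 2.0)   # random start-up conditions
    b = rng.uniform(0.5, 2.0)
    xs = [0.5, 1.0, 1.5, 2.0]   # training inputs
    for _ in range(steps):
        for x in xs:
            err = a * b * x - 2.0 * x                         # prediction minus target
            a, b = a - lr * err * b * x, b - lr * err * a * x # gradient step on both
    return a, b

for seed in (1, 2, 3):
    a, b = train(seed)
    # Every run learns to predict y = 2x (a*b converges to 2),
    # but the internal split between a and b depends on the seed.
    print(f"seed {seed}: a={a:.3f}  b={b:.3f}  a*b={a*b:.4f}")
```

All the runs behave identically from the outside, yet no two share the same internals. Scaled up to a real AI system, this is the regulator’s problem: the copy examined in the sandbox and the copy running live can both “work” while being different machines under the bonnet.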