Half empty tool box

When new technologies come along there’s often a catch-up phase. We either frighten ourselves silly with a moral panic or switch to a – so what? – mode. Last week’s flurry of articles on Artificial Intelligence (AI) probed all sorts of possibilities. What’s the enduring legacy of all that talk? Apart from stimulating our imaginations and producing some fascinating speculation, what’s going to happen next?

I’m struck by how conventional the response has been, at least from a governmental and regulatory point of view. A little bit more coordination here, a little bit more research there and maybe a new institution to keep an eye on whatever’s going on. Softly, softly as she goes. And I don’t mean the long-gone black and white British TV series of that name[1]. Although the pedestrian nature of the response would fit the series well.

Researchers and innovators are always several steps ahead of legislators and regulators. In addition, there’s the perception that the merest mention of regulation will slow progress and blunt competitiveness. Time and money spent satisfying regulators is considered a drain. Whatever some politicians think, the scales don’t always have public interest on one side and economic growth on the other.

With AI, more than with most other rapidly advancing technical subjects, we don’t know what we don’t know. That means more coordination turns into more talk and possibly more groupthink about what’s happening. Believe you me, I’ve been there in the past with technical subjects. There’s a fearful reluctance to step outside contemporary comfort zones. This is often embedded in the terms of reference of working groups and the remit of regulators.

The result of the above is a persistent gap between what’s regulated in the public interest and what’s going on in the real world. A process of catch-up becomes permanently embedded.

One view of regulation is that there are three equally important parts, at least in a temporal sense.

Reactive – investigating and fixing problems, after the event.
Pro-active – using intelligence to act now.
Prognostic – looking ahead in anticipation.
Past, present, and future.

I may be getting predictable in what I say next. The first on the list is necessary, inevitable, and often a core activity. The second is becoming more commonplace. It’s facilitated by seeking data, performing analysis, and being enabled to act. The third is difficult. Having done the first two, the task is to use the best available expertise and knowledge to make forecasts, identify future risks, and put in place measures ahead of time.

So, rather than getting a sense that all the available methods and techniques are going to be thrown at the challenge of AI, I see a vacuum emerging. Weak cooperation forums, and the fragmentation inherent when each established regulator goes its own way, amount to an almost hands-off approach. There’s a tendency to follow events rather than shape what happens next. Innovation-friendly regulation can support emerging digital technologies, but it needs to take their risks seriously.


[1] https://www.imdb.com/title/tt0129717/

Author: johnwvincent

Our man in Southern England
