So, why might artificial intelligence (AI) be so dangerous in a free society?
Democracy depends upon information being available to voters. Ideally, this would be legal, decent, and honest information. All too often the letter of the law is followed while a message is shaped to maximise its appeal to potential supporters. Is it honest to leave out chunks of embarrassing information and keep the one nugget that makes a politician look good? We each make our own judgement on that, and we usually judge on the assumption that outright lying is rare.
During key elections news travels fast, and seemingly small events can be telescoped into major debacles. I’m reminded of the remark made by Prime Minister Gordon Brown[1] in 2010, when he thought the media’s microphones were dead. An aide asked, “What did she say?” and Brown was candid in his reply. It was an occasion when the honest thoughts of a PM on the campaign trail popped into the public domain and livened up that election coverage considerably.
What’s concerning about AI[2] is that, in the hands of a “bad actor,” such events could be faked[3] extremely convincingly. Since the fast pace of election campaigning rarely leaves enough time for in-depth technical investigations, there’s a chance that fake events will sway people before they are uncovered. The gap between occurrence and discovery need only be a few days. Deepfakes are moving from amateur student pranks to the tools of propagandists.
Misinformation happens now, you might say. Well, yes it does, and we do need people to fact-check claims and counter-claims on a regular basis. However, we still depend on simple techniques, like a reporter or member of the public asking a question. It’s rather basic in format.
This leaves the door open for AI to be used to produce compelling fakes. Sometimes all it takes is to inject or eliminate a single word from a recording or live event. What is new is the accuracy and speed with which complex algorithms can provide seamless continuity. And we are a cynical lot: for all the protests of fakery a politician may make after an exposure, there will be plenty of people who will not accept any subsequent debunking.
My example is but a simple one. There is a whole plethora of possibilities when convincing fake pictures, audio, and video are only a couple of keystrokes away.
Regulatory intervention by lawmakers may not be easy, but it does need attention. For printed media, such as election leaflets, there are strict rules. The same goes for party political broadcasts.
Being realistic about the risks posed by technology does not mean shutting it down altogether. No, let’s accept that it will become part of our lives. At the same time, using that technology for corrupt purposes obviously needs to be stamped on. Regulatory intervention is a useful way of addressing heightened risks, and some of our 19th-century assumptions about democracy need a shake-up.
[1] https://www.independent.co.uk/news/uk/politics/bigotgate-gordon-brown-anniversary-gillian-duffy-transcript-full-read-1957274.html
[2] https://edition.cnn.com/2023/05/16/tech/sam-altman-openai-congress/index.html
[3] https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html