Much has been written on this subject. In fact, for the non-specialist observer it’s not easy to get to grips with the different predictions and views buzzing around.
There’s absolutely no doubt that artificial intelligence (AI) will change every corner of society. Maybe a few people living off-grid in remote areas will remain untouched, but every other human on the planet will be affected by AI. Where there’s digital data, there will be AI. Some say this brings the benefits of AI into our everyday lives; others herald an impending nightmare in which we lose control.
Neither may be totally on the money, but what’s clear is that this is no ordinary technological transition. Up until now, the software we use has been a tool: built for a purpose and shaped by those who programmed its code. AI is not like that at all. It’s a step beyond a mere tool.
Imagine wielding a hammer that changed shape to suit the job, but where the user had no control over the shape it took. How will we take to something so useful yet beyond our immediate control?
In civil aviation, AI opens up the possibility of autonomous flight, predictive maintenance, and optimal air traffic management. In its more advanced future implementations it may work alongside human operators or replace them altogether. Even the thought of this causes some professionals to recoil.
I’ve just finished reading the book[1] by Mo Gawdat, a former chief business officer at Google X, and he starts off pessimistic about the dangers of widespread general AI. As he moves through his arguments, the book points to us, not the machines, as the problem. It’s what we teach AI that matters; the threat is not intrinsic to the machine.
To me, that makes perfect sense. The notion of GIGO[2], or “Garbage In, Garbage Out”, has been around as long as the computer. It does, however, place a big responsibility on those who provide the training data for AI, and on how that data is acquired.
Today’s social media gives us a glimpse of what happens when algorithms slavishly give us what we want. Anarchic public training from millions of hand-held devices can produce some undesirable and unpleasant outcomes.
It may be that we need to move from a traditional software-centric view of how these systems work to a more data-centric one. If AI starts with poor training data, the outcome will assuredly be poor.
Gawdat dismisses the idea that general AI can be explainable. Whatever graphics or equations may be contrived, they are not going to give a useful representation of what goes on inside the machine after a period of running. An inability to explain the inner workings of the AI may be fine for non-critical applications, but it’s a problem for safety systems.
[1] Mo Gawdat, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, 2021. ISBN 978-1-5290-7765-0.