Artificial intelligence is on the cusp of transforming our world in ways many of us can barely imagine. Amid the excitement over emerging technologies, a new report by 26 of the world’s leading AI researchers warns of the dangers that could emerge over the coming decade as AI systems begin to surpass human levels of performance.
The report identifies automated hacking as one of the most imminent malicious applications of AI, especially in so-called “phishing” attacks, which trick targets into clicking on malicious messages.
“That part used to take a lot of human effort – you had to study your target, make a profile of them, craft a particular message – that’s known as phishing. We are now getting to the point where we can train computers to do the same thing. So you can model someone’s topics of interest or preferences, their writing style, the writing style of a close friend, and have a machine automatically create a message that looks a lot like something they would click on,” says report co-author Shahar Avin of the Centre for the Study of Existential Risk at Britain’s University of Cambridge.
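To make Avin’s point concrete, here is a toy sketch – not code from the report – of the statistical idea behind style mimicry: a word-level Markov chain trained on writing samples. Real attacks would use far more capable language models; the corpus and names below are purely illustrative.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the words observed to follow it in the samples."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, seed_word, length=20):
    """Sample a short message that statistically mimics the source style."""
    word = seed_word
    output = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Toy corpus standing in for a target's scraped writing samples.
corpus = (
    "thanks for the notes see you at the meeting "
    "see you at lunch thanks for the update"
)
model = build_bigram_model(corpus)
print(generate(model, "thanks"))
```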
In an era of so-called “fake news,” the implications of AI for media and journalism are also profound.
Researchers at the University of Washington last year built an AI algorithm to create a video of Barack Obama, allowing them to make the “fake” former president appear to say anything they wished. It’s just the start, says Avin.
“You create videos and audio recordings that are pixel-to-pixel indistinguishable from real videos and real audio of people. We will need new technical measures, maybe some kind of digital signature, to be able to verify sources.”
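The report does not prescribe a particular scheme, but the idea Avin sketches can be illustrated with standard public-key cryptography. Below is a minimal Python example using Ed25519 signatures via the third-party cryptography package: the publisher signs a media file at the source, and anyone holding the publisher’s public key can later check that the file is unaltered.

```python
# Minimal sketch of signing media at the source so consumers can verify it.
# Uses the third-party `cryptography` package (pip install cryptography);
# the report does not prescribe any particular scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher (e.g., a news outlet) holds the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw bytes of the published video file..."
signature = private_key.sign(video_bytes)

# Anyone with the publisher's public key can check the file is untampered.
try:
    public_key.verify(signature, video_bytes)
    print("Signature valid: file matches what the source published.")
except InvalidSignature:
    print("Signature invalid: file was altered or is not from this source.")
```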
There is much excitement over technology such as self-driving cars, with big tech companies and major automakers vying to be first to market. The systems, however, are only as secure as the environments in which they operate.
“You can have a car that is as good as or better than your average driver at navigating the world. But you put some stickers on a ‘Stop’ sign and it thinks it’s ‘Go at 55 miles per hour.’ As long as we haven’t fixed that problem, we might have systems that are very safe, but are not secure. We could have a world filled with robotic systems that are very useful and very safe, but are also open to attack by a malicious actor who knows what they are doing,” adds Avin.
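The sticker attack Avin describes exploits what researchers call adversarial examples: tiny, carefully chosen changes to an input that can flip a model’s output. The Python sketch below shows the core mechanism, the fast gradient sign method (Goodfellow et al., 2015), applied to a small untrained stand-in classifier built with PyTorch; it illustrates the technique, not the actual stop-sign attack.

```python
# Minimal sketch of the "adversarial example" mechanism behind the sticker
# attack: a tiny gradient step on the input can flip a classifier's output.
# The model here is a small untrained stand-in, purely for illustration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 2)  # toy "stop vs. go" classifier

image = torch.rand(1, 3 * 32 * 32, requires_grad=True)  # stand-in image
true_label = torch.tensor([0])  # class 0 = "stop sign"

# Fast Gradient Sign Method: nudge every pixel slightly in the direction
# that most increases the model's loss on the true label.
loss = F.cross_entropy(model(image), true_label)
loss.backward()
epsilon = 0.1  # perturbation small enough that a human would barely notice
adversarial = (image + epsilon * image.grad.sign()).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```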
The report warns that the proliferation of drones and other robotic systems could allow attackers “to deploy or re-purpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom.”
Avin says the use of AI in warfare is widely seen as one of the most disturbing possibilities, with so-called ‘killer robots’ taking decision-making out of human hands.
“You want to have an edge over your opponent by deploying lots and lots of sensors, lots and lots of small robotic systems, all of them giving you terabytes of information about what’s happening on the battlefield. And no human would be in a position to aggregate that information, so you would start having decision recommendation systems. At this point, do you still have meaningful human control?”
There is also the danger of AI being used for mass surveillance, especially by oppressive regimes.
The researchers stress the many positive applications of AI, but note that it is a dual-use technology and urge AI researchers and engineers to be proactive about the potential for its misuse.
The authors say AI itself will likely provide many of the solutions to the problems they identify.