

Weaponizing Artificial Intelligence: The Scary Prospect Of AI-Enabled Terrorism

There has been much speculation about the power and dangers of artificial intelligence (AI), but the conversation has focused primarily on what AI will do to our jobs in the very near future. Now, tech leaders, governments and journalists are discussing how artificial intelligence is making lethal autonomous weapons systems possible and what could transpire if this technology falls into the hands of a rogue state or terrorist organization. Debates on the moral and legal implications of autonomous weapons have begun, and there are no easy answers.

Autonomous weapons already developed

The United Nations recently discussed the use of autonomous weapons and the possibility of instituting an international ban on “killer robots.” This debate comes on the heels of more than 100 leaders from the artificial intelligence community, including Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, warning that these weapons could lead to a “third revolution in warfare.”

Although artificial intelligence has brought improvements and efficiencies to many sectors of our economy, from entertainment to transportation to healthcare, weaponized machines that can function without human intervention raise a host of questions.

A number of weapons systems with varying levels of human involvement are already being actively tested today.

In the UK, the Taranis drone, an unmanned combat aerial vehicle, is expected to be fully operational by 2030 and capable of replacing the human-piloted Tornado GR4 fighter planes that are part of the Royal Air Force’s Future Offensive Air System.

Other countries, including the United States and Russia, are developing robotic tanks that can operate autonomously or be remote controlled. The U.S. also has an autonomous warship that was launched in 2016. Although it’s still in development, it’s expected to have offensive capabilities including anti-submarine weaponry.

South Korea polices its border with the Samsung SGR-A1 sentry gun, which is reportedly capable of firing autonomously.

While these weapons were developed to minimize the threat to human life in military conflicts, you don’t need to be an avid sci-fi fan to imagine how terrorist organizations could use them for mass destruction.

Warnings of AI and killer robots

The United States and Chinese militaries are testing swarming drones: dozens of unmanned aircraft that can be sent in to overwhelm enemy targets and could be used for mass killings.

Alvin Wilby, vice president of research at Thales, a French defense giant that supplies reconnaissance drones to the British Army, told the House of Lords Artificial Intelligence Committee that rogue states and terrorists “will get their hands on lethal artificial intelligence in the very near future.” Echoing that sentiment is Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, who fears that “very bad copies” of such weapons could fall into the hands of terrorist groups.

Not everyone agrees that AI is all bad; in fact, its potential to benefit humanity is immense.

AI can help fight terrorism

On the other side of the AI spectrum, Facebook announced that it is using AI to find and remove terrorist content from its platform. Behind the scenes, Facebook uses image-matching technology to identify known terrorist photos and videos and keep them from reappearing on other accounts. The company also suggested it could use machine-learning algorithms to look for patterns in terrorist propaganda so it can remove such material from users’ news feeds more swiftly. These anti-terror efforts extend to other platforms Facebook owns, including WhatsApp and Instagram. Facebook has also partnered with other tech companies, including Twitter, Microsoft and YouTube, to create an industry database that documents the digital fingerprints of known terrorist content.
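Image matching of this kind generally works by reducing each upload to a compact fingerprint and comparing it against a shared list of fingerprints of known prohibited content. The short Python sketch below illustrates the general idea using a simple "average hash"; the shared_database contents, threshold and file name are hypothetical placeholders for illustration, not Facebook's actual technology.

    # Illustrative sketch only: a simple "average hash" image fingerprint and a
    # Hamming-distance lookup against a hypothetical shared database of known
    # prohibited-content hashes. Real industry systems use far more robust,
    # proprietary matching.
    from PIL import Image  # pip install Pillow

    HASH_SIZE = 8  # 8x8 grayscale thumbnail -> 64-bit fingerprint

    def average_hash(path):
        """Compute a 64-bit perceptual fingerprint of an image file."""
        img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > avg else 0)
        return bits

    def hamming_distance(a, b):
        """Count the bits that differ between two fingerprints."""
        return bin(a ^ b).count("1")

    def matches_known_content(path, known_hashes, threshold=5):
        """Flag an upload whose fingerprint is close to any known-bad hash."""
        h = average_hash(path)
        return any(hamming_distance(h, known) <= threshold for known in known_hashes)

    # Hypothetical usage: shared_database would be populated from a
    # cross-platform hash list; the value and file name are placeholders.
    shared_database = {0x8F3A5C7E9B1D2046}
    if matches_known_content("upload.jpg", shared_database):
        print("Upload matches known prohibited content; route for review or removal.")

The Hamming-distance threshold is the key tuning knob in a scheme like this: a looser threshold catches slightly edited reposts but raises the risk of false matches, which is one reason production systems pair automated matching with human review.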

Humans pushed out of life and death decisions

The overwhelming concern of groups that want to ban lethal autonomous weapons, such as the Campaign to Stop Killer Robots, is that if machines become fully autonomous, humans will no longer have a deciding role in missions that kill. This creates a moral dilemma. And what if rogue regimes turn lethal autonomous systems on their own people?

As Mr. Wilby said, the AI “genie is out of the bottle.” As with other innovations, now that AI technology has begun to impact our world, we need to do our best to find ways to properly control it. If terrorist organizations wish to use AI for evil purposes, perhaps our best defense is an AI offense.
