Putin Says Whoever Has the Best Artificial Intelligence Will Rule the World

There’s a larger debate over autonomous weapons that raises both functional and philosophical questions.

by Joseph Trevithick

For some time now there has been a steadily growing debate over the potential benefits and hazards of artificial intelligence, or AI, drawing opinions from technology pioneers, futurists, scientists, philosophers, and most recently Russian President Vladimir Putin. Especially when it comes to the military, the core issues have become almost as much existential as technical, with some suggesting that a future with “killer robots” could be one without humans.

Putin brought up the topic during an open lesson on science and technology with school children on Knowledge Day, Sept. 1, 2017. The Russian leader also discussed space, medicine, and the potential of the human brain, according to Russian media outlet RT.

“Artificial intelligence is the future, not only for Russia, but for all humankind,” he explained. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Putin’s comments followed a spat over the broad issues surrounding AI between Elon Musk, best known as the founder of high-performance electric car company Tesla and space launch firm SpaceX, and Mark Zuckerberg, the co-founder of social media powerhouse Facebook. Musk is a critic, perhaps even an alarmist, about the potential dangers of autonomous machines, while Zuckerberg, who is investing heavily in self-driving cars among other things, views the technology with great optimism.

Putin shares the stage with a number of teenagers while giving his "open lesson" on Knowledge Day on Sept. 1, 2017. Sergey Guneev/Sputnik via AP

Musk has been an increasingly vocal opponent of AI since at least 2014, when he was an investor in DeepMind, a startup working on the technology that Google eventually purchased. In July 2017, in a talk at the summer meeting of the National Governors Association, Musk raised his fears again, warning about an eventual machine takeover straight out of a Hollywood movie.

“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” he said, likely referencing, at least in part, his interactions with DeepMind. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

Elon Musk. AP Photo/Ringo H.W. Chiu

Later in July, a Facebook user asked Zuckerberg about his opinions of Musk’s perspective during a live question and answer session broadcast on the social media platform from his home in Palo Alto, California. Zuckerberg was blunt in his dismissal of any concerns about AI.

"I have pretty strong opinions on this. I am optimistic," Zuckerberg said. "And I think people [like Musk] who are naysayers and try to drum up these doomsday scenarios – I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible."

Mark Zuckerberg. AP Photo/Steven Senne

Musk then took to Twitter, saying he and Zuckerberg had spoken about AI in the past and that “his understanding of the subject is limited.” The Tesla and SpaceX CEO added that there was a “movie on the subject coming soon...,” but declined to give a title or say whether he was involved in its production.

Now, neither Musk nor Zuckerberg – nor Putin, for that matter – was specifically talking about the military applications of AI. However, the two viewpoints neatly encapsulate the multi-sided discussion about the future of the technology on the battlefield.

On the one hand, there is a series of largely technical questions. How does one go about inserting a machine capable of autonomous decision making into the process? What tasks can it handle? How responsive and accurate is it? What vulnerabilities does it expose?

From his public statements, this seems to be the realm Zuckerberg is most concerned with, and he sees those problems as surmountable. If the issue is a concern about driverless cars making a bad decision and killing pedestrians or their occupants, then the answer is to find a way to fix that problem.

"One of the top causes of death for people is car accidents still and if you can eliminate that with AI, that is going to be just a dramatic improvement," Zuckerberg had said during his online Q&A. “Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used," he added later on.

This seems to be how many in the U.S. military view the topic, as John Launchbury, the director of the Information Innovation Office at the Defense Advanced Research Projects Agency, explains in depth in the video above. For the Pentagon and the individual services, AI is just another tool that helps reduce the workload for its personnel and may help speed up and simplify a host of processes, from administrative and logistics matters to surveillance and targeting during actual combat operations, making everything more efficient and reducing costs. Perhaps most importantly, fewer humans on the battlefield inherently means fewer casualties, which has long been an argument in favor of increasing the use of drones in the air, on land, and at sea.

The U.S. military as a whole has been increasingly interested in autonomous weapons and high-end AI – which don't necessarily have to go hand-in-hand – as part of the so-called Third Offset Strategy, which seeks advanced technological solutions to keep American forces ahead of their opponents, even as their overall numbers shrink. We’ve already seen early iterations of the technology help alert maintenance personnel of potential problems, automate the management of supply inventories, and analyze data. They can even run autonomous or semi-autonomous drones, weapons, and sensors, which can spot, track, and engage targets without continuous human interaction.

In August 2017, the U.S. Air Force, as part of a joint project with U.S. Special Operations Command, asked for information on AI systems that could allow aircraft crews flying a notional light attack plane to find their targets faster by filtering out bad or conflicting information. The request said the service would be willing to consider equipment that made autonomous decisions about whether or not to launch a bomb or missile, or one that simply gave a recommendation to the human operator.
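
To make that distinction concrete, here is a minimal, purely illustrative sketch of a targeting aid that filters out low-confidence and conflicting sensor reports and produces only a recommendation for a human operator to approve. It assumes nothing about the actual Air Force or SOCOM system; every class, threshold, and field name below is hypothetical.

```python
# Toy sketch, not the system described in the request: fuse sensor reports,
# discard low-confidence or conflicting ones, and emit a *recommendation*
# that still requires a human operator's approval. All names are invented.
from dataclasses import dataclass
from statistics import median


@dataclass
class SensorReport:
    sensor_id: str
    target_lat: float
    target_lon: float
    confidence: float  # 0.0 to 1.0, as reported by the sensor


def recommend_target(reports: list[SensorReport],
                     min_confidence: float = 0.6,
                     max_offset_deg: float = 0.01) -> dict:
    """Return a fused recommendation; never an autonomous release decision."""
    credible = [r for r in reports if r.confidence >= min_confidence]
    if not credible:
        return {"recommend_engagement": False, "reason": "no credible reports"}

    # Treat reports far from the median position as conflicting information.
    med_lat = median(r.target_lat for r in credible)
    med_lon = median(r.target_lon for r in credible)
    agreeing = [r for r in credible
                if abs(r.target_lat - med_lat) <= max_offset_deg
                and abs(r.target_lon - med_lon) <= max_offset_deg]

    return {
        "recommend_engagement": len(agreeing) >= 2,
        "fused_position": (med_lat, med_lon),
        "supporting_sensors": [r.sensor_id for r in agreeing],
        "requires_human_approval": True,  # the human stays in the loop
    }
```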

Earlier in September 2017, the U.S. Army hired computer giant IBM to help manage various logistics inventories, including seeing what its intelligent cloud computing system Watson might have to offer. The year before, the service had installed sensors on 350 Stryker armored vehicles that all fed data into Watson, which in turn sifted through the information to try to predict maintenance issues before they occurred. This allowed units to speed up the maintenance process by sending entire batches of vehicles off to receive the same preventive repair at once, instead of reacting to problems as they came up or working from maintenance schedules specific to individual vehicles.
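
The batching idea itself is simple enough to show with a toy example. The sketch below has nothing to do with Watson's actual pipeline; it just groups vehicles whose sensor data predicts the same fault so one preventive repair can cover the whole batch, with the vehicle IDs and fault labels invented for illustration.

```python
# Toy sketch of the batching concept described above, not IBM's pipeline:
# group vehicles by predicted fault so a whole batch gets one preventive
# repair instead of case-by-case fixes. All IDs and labels are hypothetical.
from collections import defaultdict


def batch_by_predicted_fault(predictions: dict[str, str]) -> dict[str, list[str]]:
    """predictions maps a vehicle ID to its predicted fault label."""
    batches: dict[str, list[str]] = defaultdict(list)
    for vehicle_id, fault in predictions.items():
        batches[fault].append(vehicle_id)
    return dict(batches)


# Example: three vehicles share a predicted fault and go to the shop together.
predicted = {
    "stryker-001": "coolant_pump_wear",
    "stryker-014": "coolant_pump_wear",
    "stryker-022": "brake_sensor_drift",
    "stryker-031": "coolant_pump_wear",
}
print(batch_by_predicted_fault(predicted))
```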

Putin seems to have a similar view of AI as it applies to the military realm, having previously called for increased development of autonomous weapons. There have been similar pushes in China, Israel, and South Korea, among other countries.

There are "colossal opportunities and threats that are difficult to predict now," Putin said on Knowledge Day, promising to share AI developments with other countries. However, he posited that future wars would be almost entirely the realm of unmanned aircraft, vehicles, and ships, adding that "when one party's drones are destroyed by drones of another, it will have no other choice but to surrender."

The War Zone's own Tyler Rogoway had come to a similar conclusion, specifically about unmanned aerial combat vehicles, in June 2016, writing:

The fact is that if we don’t aggressively field this technology our potential enemies, who give far less consideration when it comes to the morality of robotic warfare, will. In fact, Russia and China both have ongoing UCAV programs, and although they remain in a fairly immature state and their sub-systems and low-observable designs are likely quite far behind American capabilities, they will steadily improve. Additionally, quantity is a quality all its own when it comes to UCAV swarms. Just because a foe can’t build the best UCAV imaginable doesn’t mean a large force of inferior types is not a potentially very deadly threat to say the least.

Right now, by all indicators the US is leading in the unmanned aircraft department and especially in the low observable (stealth) one. Yet European consortiums have begun to let the UCAV genie out of the bottle, with the highly promising BAE Taranis and the Dassault-led nEURon program rapidly evolving, the US has a grand but limited window of opportunity to pull away fully from the pack when it comes to this technology. And this won’t happen keeping the technology buried in the black world or not pursuing it really at all.

Of course, increasing the networked nature of weapon systems and inserting technology into more aspects of day-to-day operations isn’t without potential dangers. The AI that troops come to rely on could itself fail or fall victim to an actual enemy strike, including a cyber attack.

We at The War Zone have highlighted the increasing importance of cyber security in general and specifically with regards to the Automated Logistics Information System that goes along with the F-35 Joint Strike Fighter, which you can read about in detail here. Needless to say, as AI becomes more advanced and prevalent, these issues will only grow in significance. A hostile force may no longer be primarily concerned with physically destroying weapons and equipment, but with breaking into them in order to sow confusion and misinformation or even turn them against their operators.

It’s that latter point that seems to be of greatest concern to Musk and others, who warn that the idea of autonomous systems, including weapons, “evolving,” so to speak, to a point where they begin operating independent of any human controls is no longer limited to the realm of science fiction, the most popular comparison being to James Cameron's iconic robotic apocalypse movie Terminator. In July 2017, there was a run of sensational reports suggesting that just such a thing had happened to Facebook’s AI researchers.

The real story was that the company’s engineers had crafted two AI “chatbots” and pitted them against each other to test how well their code worked at negotiations. What they found was that in the absence of any rules regarding their language, the two pieces of software quickly began “speaking” to each other in a syntax that appeared simpler for both to understand, which is not uncommon in such experiments.

The problem was that it was unintelligible to the researchers and therefore of no value, since they couldn’t analyze the data. They refined the code to make sure the two bots communicated in regular English.

But the speed with which the computerized personalities devised their own communication mechanism is still no doubt worrisome to individuals like Musk. The basic argument put forward by these critics is that if something as simple as a piece of software churning out text can effectively cut out the operator, why couldn’t an autonomous armed drone do the same thing?

“I think they’re [AIs] really improving at an accelerating rate, far faster than people realize,” Musk told Vanity Fair in April 2017. “Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”

So, it’s no surprise that the Tesla and SpaceX founder has joined a group of other technologists and academics to call for a global ban on “lethal autonomous weapons systems,” or LAWS, more commonly dubbed “killer robots.” In 2015, another collective, including physicist Stephen Hawking and Apple co-founder Steve Wozniak, made a similar appeal.

For many of these activists, the matter is an existential question. For other international organizations and advocacy groups, it is more of an ethical concern.

In these instances, the concern is more that weapon systems making their own judgment calls on the battlefield threaten the very notion of a state’s monopoly on the use of force by effectively delegating life-or-death decisions to an algorithm. It also raises complex questions about national sovereignty and the duty of national militaries to protect civilian life on the battlefield.

Since 2013, the United Nations Human Rights Council has been exploring the issue, specifically in relation to concerns about the use of drones to conduct extrajudicial targeted killings. In April of that year, Christof Heyns, a lawyer who then held the title of the Council’s Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, issued a report calling for a worldwide moratorium on the development or fielding of killer robots until the international community could craft a workable legal framework for them to operate within.

Christof Heyns. AP Photo/Keystone, Salvatore Di Nolfi

“They raise far-reaching concerns about the protection of life during war and peace,” he wrote. “This includes the question of the extent to which they can be programmed to comply with the requirements of international humanitarian law and the standards protecting life under international human rights law. Beyond this, their deployment may be unacceptable because no adequate system of legal accountability can be devised, and because robots should not have the power of life and death over human beings.”

At the same time, the signatories to the Convention on Certain Conventional Weapons have been exploring whether or not to add a new protocol to the treaty covering LAWS. The convention already includes restrictions on the use of land mines and booby traps, incendiary weapons, blinding lasers, and other conventional arms in warfare.

As far as the U.S. military is concerned, these are non-issues. In 2012, the Office of the Secretary of Defense issued an official policy directive about “autonomy in weapon systems.”

Representatives of the signatories to the Convention on Certain Conventional Weapons hold a conference in 2013. Kyodo via AP Images

“Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” it explains in no uncertain terms. On top of that, anyone employing one of the systems has to do so in accordance with international laws and the established rules of engagement, and every system should have failsafes to prevent the weapons from going rogue if they lose contact with their human overseers.
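
As a purely notional illustration of the lost-link failsafe concept the directive describes, the sketch below inhibits weapons employment and falls back to a safe behavior whenever a heartbeat from the human operator has not arrived recently. The timeout, mode names, and monitoring logic are all invented for the example and do not reflect any fielded system.

```python
# Minimal sketch of a lost-link failsafe, under the assumption of a simple
# operator heartbeat; everything here is hypothetical, not a real design.
import time

HEARTBEAT_TIMEOUT_S = 5.0  # invented value: seconds without operator contact


class WeaponController:
    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()
        self.mode = "OPERATOR_CONTROLLED"

    def on_operator_heartbeat(self) -> None:
        """Called whenever a message arrives from the human operator."""
        self.last_heartbeat = time.monotonic()
        self.mode = "OPERATOR_CONTROLLED"

    def check_link(self) -> None:
        """Failsafe: without recent operator contact, hold fire and fall
        back to a safe behavior rather than acting independently."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.mode = "SAFE_RETURN"  # e.g. inhibit weapons, return to base

    def may_engage(self) -> bool:
        self.check_link()
        return self.mode == "OPERATOR_CONTROLLED"
```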

As AI becomes more commonplace throughout society, it will no doubt prompt more debate and additional calls for international regulations or bans on killer robots, especially since it’s simply a truism that no system is ever 100 percent reliable. Protective measures are only as good as the engineers who build them, and history is replete with examples of safety mechanisms on automated military hardware failing at the worst times.

For Zuckerberg and other supporters of an AI-driven future, these are just problems needing solving. For Musk and his colleagues, this margin of error is fine for robot vacuum cleaners, but a nightmare that could end mankind when it comes to autonomous weapons.

Putin, for his part, seems to think that AI could give whoever masters it the ability to dominate the future of technology, if not the world.

Contact the author: joe@thedrive.com
