Robot Wars: The Brave New World

Chas Henry interviews CJCS GEN Martin Dempsey, October 17, 2012. (credit: Dept. of Defense)

In future wars, will human soldiers be replaced by weapons that think for themselves? Lots of remote control systems are already on the battlefield. Chas Henry, All News 99.1’s national security correspondent, met with warfighters, scientists, critical analysts and the nation’s top military officer to explore the world of … ROBOT WARS.

By Chas Henry, CBSDC.com

WASHINGTON – When we humans go to war, our least favorite way is hand-to-hand, face-to-face.

“It speaks to human nature,” says MIT Professor Missy Cummings, a former Navy fighter pilot. “We don’t really like to kill, and if we are going to kill, we like to do it from far away.”

Over centuries, that’s led to the creation of weapons that allow us to separate ourselves from our adversaries — first by yards, then miles. Now, technology allows attacks from half a world away.

Until a decade ago, most of the remote engagement capability was owned by the U.S. or Israel. Not anymore.

Unmanned platforms –- in the air, on the ground, and on or under the water — are becoming less and less expensive. So are the sensors that help guide them. And nanotechnology is making them smaller.

Today, U.S. soldiers in Afghanistan launch throw-bots into the air by hand, and mini-helicopters deliver frontline supplies by remote control. Adding artificial intelligence to the mix, we’re now seeing some platforms operating without even remote human control. An unmanned aircraft flown by an internal computer recently refueled another unmanned plane – in the air – as it, too, flew completely on its own.

These tools of remote engagement (the people operating them don’t like to call them ‘drones’) are already changing modern battlefields. And some people worry we may not be giving enough thought to how much they’re going to change things.

Simon Ramo has been thinking about this for a long time. At 99 years old, he knows something about national security. Remember the defense firm TRW? He’s the ‘R.’

“A huge revolution in cost, in loss of lives, takes place,” Ramo says. “If you go to the partnership of man and machine — and let the robots do the dying.”

Such a partnership, he says, does more than save life and limb. It also saves the huge expense of maintaining a big military presence overseas.

Peter Singer of the Brookings Institution agrees that remote engagement allows modern military forces to “go out and blow things up, but not have to send people into harm’s way.”

But he says robot wars are much more complex than that.

“Every other previous revolution in war has been about a weapon that changed the ‘how,'” Singer says. “That is, a machine or system where it either went further, faster or had a bigger boom.”

Robots, he says, fundamentally change who goes out to fight very human wars.

“It doesn’t change the nature of war,” says Gen. Martin Dempsey, chairman of the Joint Chiefs of Staff. “But it does in some ways affect the character of war.”

The nature of war, Dempsey says, is a contest of human will. The character, on the other hand:

“What do you intend? How do you behave with it? And then what’s the outcome you produce?”

“This is not a system which we’ve just simply turned loose,” Dempsey says. “It’s very precisely managed, and the decisions made are made by human beings, not by algorithms.”

What capability are those humans managing? Battlefield commanders say an ability to provide persistent surveillance and the intelligence that comes from it is most important.

“When you have an aircraft that can fly over an evolving battlefield, and in an unblinking way observe the battlefield,” says Air Force Lt. Gen. Frank Gorenc, “they have the ability to describe to manned aircraft that are coming in, that can provide the firepower, much more accurate data.”

Commanders whose unmanned systems roam on the ground or in and under water gain similar benefits. That’s why many say, “don’t call them drones.”

In military use, drones are dumbed-down vehicles capable of following only a predetermined path. In the air, pilots in smart planes used drones as targets. So while most people around the world have come to call them drones, the people operating them prefer the term “unmanned systems.”

Well, some of them.

Gorenc says even if there’s no one in the driver’s seat, it takes a lot of humans to keep the systems working.

“There’s hardly anything unmanned about it,” he says. “Even in the most cursory of analysis. So it takes significant resources to do that mission.”

A mission that’s possible because, as the vehicles have developed, so too have the sensors that tell them precisely where they are at any given time and the optics that sharpen the images they collect and send back.

Besides loitering for hours or days over places commanders want to keep an eye on, what can these systems do? We’ll likely see more unmanned craft delivering supplies, meaning air crews or truck convoys will be put in less danger.

Dempsey says it’s possible, too, that a wounded soldier could soon be put inside a remotely piloted aircraft for evacuation to a field hospital.

“Logistics resupply and casualty evac could certainly be a place where we could leverage technology and remote platforms,” he says.

And of course, as Georgetown University Professor Daniel Byman notes, some unmanned systems –- most notably the Predator drone –- can kill.

“It’s that persistent intelligence capability, to me,” Byman says. “That enables the targeting of individuals — where before you wouldn’t — in part because of the risk to the pilot, but also in part because you weren’t sure what else you might hit. And now you can be — not a hundred percent confident -– but more confident than you were.”

There’s been controversy about the two ways those drones deal death: targeted strikes and signature strikes.

“A targeted strike is based on a positive identification of a particular individual or particular group of individuals,” says Christopher Swift of the University of Virginia’s Center for National Security Law. “Whether they’re moving in a convoy, or whether they’re at a fixed location, or whether they’re out on the battlefield.”

Signature strikes, on the other hand, use sensors to watch for trends of behavior that seem suspicious, then launch an attack when it appears to a computer algorithm that the series of behaviors points to bad guys doing, or getting ready to do, bad things.
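
For readers curious what that kind of pattern matching might look like, here is a minimal, purely illustrative Python sketch. The behavior labels, weights, and threshold are invented for this example and describe no real system; consistent with the officials quoted in this article, the sketch only flags a pattern for a human analyst to review.

```python
# Illustrative sketch only: the behavior labels, weights, and threshold below
# are invented for this example and describe no real targeting system.

SUSPICIOUS_PATTERNS = {
    "repeated_visits_to_watched_site": 0.4,
    "night_convoy_travel": 0.3,
    "loading_heavy_equipment": 0.3,
}

REVIEW_THRESHOLD = 0.7  # hypothetical cutoff for flagging to a human analyst


def score_observed_behaviors(observations):
    """Sum the weights of observed behaviors that match known patterns."""
    return sum(SUSPICIOUS_PATTERNS.get(obs, 0.0) for obs in observations)


def flag_for_human_review(observations):
    """Return True if the combined score crosses the threshold.

    As the officials quoted here stress, the decision stays with a human;
    the algorithm only surfaces a candidate pattern.
    """
    return score_observed_behaviors(observations) >= REVIEW_THRESHOLD


# Two matching behaviors score 0.6, below the 0.7 threshold: not flagged.
print(flag_for_human_review(["night_convoy_travel", "loading_heavy_equipment"]))
```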

Signature strikes bring with them a greater risk of killing or wounding people seen as innocents. And death by remote control can be perceived as callous, prompting a backlash.

While recently in Yemen, Swift talked with a number of tribal leaders about the unmanned system attack that killed terrorist provocateur Anwar al-Aulaqi.

“They were more concerned about the drone strike on his 16-year-old son,” Swift says. “Because they saw him as a minor, rather than as a militant, and there was some sympathy for him,” even though Swift says many of the same people thought the boy’s father got what he deserved.

Some civil liberties groups challenge the legality of both targeted and signature strikes. But Swift believes “international law is not a restraint on our ability to do it. It’s a series of guidelines that tell us the things we should avoid in order to do these kinds of operations better.”

A key aspect of better, Swift says, is ensuring that remote engagement is always paired with human contact.

“You can’t get to the human dimension of managing these political and social relationships at a local level,” Swift says, “and understanding how local people see their own security issues, if we’re just fighting these wars using drones, if we’re fighting from over the horizon.”

Not everyone acquiring unmanned craft will be interested in such nuanced tactics. Reports in early October, for instance, indicated that Hezbollah fighters may have flown an unmanned surveillance craft over sensitive sites in Israel.

Who’s selling to customers the U.S. and Israel won’t sell to?

China is in the game.

Siemon Wezeman and Chas Henry at the Stockholm International Peace Research Institute. (credit: CBSDC)

“They have imported, and actually stolen, a lot from Russia,” says Siemon Wezeman, who researches proliferation of unmanned systems at the Stockholm International Peace Research Institute. “They are now really on the way of developing technology which is getting on par with what you would expect from Western European countries.”

And Wezeman says more and more nations and groups are shopping for the technology.

“You see in the last few years even poor and underdeveloped countries in Africa getting involved in acquiring them, and in some cases even thinking about producing them.”

He says most of the presently available unmanned aerial vehicles (UAVs) are the kind used for surveillance.

“Most of them still are unarmed. There are very few armed UAVs in service. But the development is in the direction of armed UAVs.”

In some ways, remote control war could prove a more effective tactic for small groups of bad guys, says National War College Professor Mike Mazarr, offering his personal opinion, not necessarily that of the Defense Department.

“I think very often the U.S. is going to be trying to use them to achieve big national-level goals that are very challenging and difficult,” Mazarr says. “And other actors are going to be trying to achieve much more limited, discrete goals — to keep us from doing certain things.”

The use of any robots scares some people, who worry about machines making potentially disastrous mistakes. Advocates of the technology offer the reminder that to err is human.

“Who makes more mistakes: humans or machines?” asks Georgetown’s Daniel Byman. “The answer, of course, is: it depends. But often machines can avoid mistakes that humans would otherwise make.”

“It may take a human to do a final check on an engine, or turning the last centimeters on a screw,” says Dean Cheng, an analyst at the Heritage Foundation. “But getting the screws to that mechanic could well become a robotic function. And it would be faster, and probably more accurate.”

Robotic accuracy could bring improved safety to even manned aircraft when it comes to taking off and landing.

Retired Rear Adm. Bill Shannon, who until recently oversaw unmanned aircraft initiatives in the Navy, says, when onboard robotic systems interact with GPS and other sensor data, planes automatically “know their geodetic position over the ground. They land with precision, repeatable precision, regardless of reference to the visual horizon.”
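
As a rough illustration of the position-keeping Shannon describes, the sketch below blends a GPS altitude fix with an inertial estimate and checks the result against a standard three-degree glide path. The blend weights, sample numbers, and function names are assumptions made for this example, not actual flight-control code.

```python
import math

# Illustrative sketch only: blending a GPS fix with an inertial (dead-reckoned)
# estimate, then measuring deviation from a fixed glide path. The 0.8/0.2
# weighting and all numbers below are assumptions for this example.

GPS_WEIGHT = 0.8       # hypothetical trust placed in the GPS fix
INERTIAL_WEIGHT = 0.2  # hypothetical trust placed in the inertial estimate


def fused_altitude(gps_alt_m: float, inertial_alt_m: float) -> float:
    """Blend two altitude estimates into a single position solution."""
    return GPS_WEIGHT * gps_alt_m + INERTIAL_WEIGHT * inertial_alt_m


def glide_slope_error(distance_to_threshold_m: float, altitude_m: float,
                      glide_slope_deg: float = 3.0) -> float:
    """Meters above (+) or below (-) the altitude a standard glide path
    prescribes at this distance from the runway threshold."""
    target_alt_m = distance_to_threshold_m * math.tan(math.radians(glide_slope_deg))
    return altitude_m - target_alt_m


# Example: 2 km from the runway with a fused altitude of 110 m is about
# 5 m above a standard 3-degree approach path.
alt_m = fused_altitude(gps_alt_m=112.0, inertial_alt_m=102.0)  # 110.0
print(round(glide_slope_error(2000.0, alt_m), 1))              # ~5.2
```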

MIT’s Missy Cummings says the U.S. Air Force, at first, insisted that human operators control the take-offs and landings of its remote aircraft. Those human operators turned out to be more accident-prone than the robotic systems.

“From Day 1,” notes Cummings, “all the Army’s UAVs had auto land and take-off capability. And as a consequence they haven’t lost nearly as many due to human error in these areas.”

Still, after watching failures in some other supposedly smart systems –- automated trading software on Wall Street, for instance –- many say they fear movement toward unmanned systems that think for themselves.

“If you optimize [these systems] to work very quickly to try to take shots that we’d otherwise miss, you’ll have more mistakes,” Byman says. “If you optimize them to be very careful, you’ll miss opportunities. So there are going to be costs either way.”

The Army is funding research at Georgia Tech into whether it’s possible to create an “artificial conscience” that could be installed in robots operating independently on a battlefield.

“There’s nothing in artificial intelligence or robotics that could discriminate between a combatant and a civilian,” says Noel Sharkey, a professor at the University of Sheffield in the UK. “It would be impossible to tell the difference between a little girl pointing an ice cream at a robot, or someone pointing a rifle at it.”

“As you begin to consider the application of lethal force,” adds Dempsey, “I think you have to pause and understand how to keep the man in the loop in those systems.”

So what if a battlefield robot does go haywire? Who is responsible?

“How do you do legal accountability when you don’t have someone in the machine?” asks Singer. “Or what about when it’s not the human that’s making the mistake, but you have a software glitch? Who do you hold responsible for these incidents of ‘unmanned slaughter,’ so to speak?”

“It could be the commander who sent it off,” Sharkey says. “It could be the manufacturer, it could be the programmer who programmed the mission. The robot could take a bullet in its computer and go berserk. So there’s no way of really determining who’s accountable, and that’s very important for the laws of war.”

That’s why Cummings thinks we won’t soon see the fielding of lethal autonomous systems.

“Wherever you require knowledge, decisions being made that require a judgment, require the use of experience. Computers are not good at that, and will likely not be good at that for a long time.”

That shouldn’t stop or slow the incorporation of robots where they can do better than humans, though. That’s the view of those who chafe at what they call a lack of imagination in the use of robotics.

“There are some generals who assume that the role of robots is to help the human being they assume is still going to be there,” Ramo says. “We’re talking about warfare being changed so that you should quit thinking about the soldier. He shouldn’t be there in the first place.”

And robots shouldn’t necessarily look like people either, critics say, pointing to a robot being created to fight fires on-board Navy ships. It walks around on two legs, about the height of a sailor carrying a fire hose. Rear Adm. Bill Shannon says problems sometimes result when people who built manned systems try to create something similar, just minus the human.

“They don’t need to give the operator the pilot’s view,” he says, by way of example. “They can give them, for example, a God’s-eye view of the air vehicle and the sensors interacting with the environment — as opposed to a very, very narrow view of what a pilot might see as they look out their windscreen.”

Shannon says he would frequently look for innovative design ideas from people not tied to systems built around human pilots.

“Often I see it when I get someone who’s come from outside of aviation,” he says. For example, someone with experience “creating that environment in the gaming industry.”

The brave new world of robot wars could well require the nation to field a new type of warrior, as well.

“The person who is physically capable and mentally capable of engaging in high-risk dogfights may be very different from the person who is a very good drone pilot,” Byman says.

Cummings anticipates some in the military will find it difficult to accept such a shift.

“Fundamentally, it raises that question about value of self. ‘If that computer can do it, what does that make me?'”

In the end, robots thrown into war efforts are put there for one reason: To win. Would it be possible to win a war by remote control?

“You could put together an elaborate strategy that would affect the society, the economy, the national willpower of a country that — I could certainly imagine — depending on what was at stake, the legitimacy of its government, a variety of other things — of absolutely winning a war in these ways,” Mazarr says.

The nation’s top military officer isn’t so sure.

“It’s almost inconceivable to me,” Dempsey says, “that we would ever be able to wage war remotely. And I’m not sure we should aspire to that. There are some ethical issues there, I think.”

Another ethical consideration is raised by those who worry that remote engagement seems “bloodless” to those employing it.

“It always creates the risk that you’ll use it too quickly,” says Byman. “Because it’s relatively low cost, and relatively low risk from an American point of view, [it’s possible] that you’ll be likely to use it before thinking it through. Use it even though some of the long term consequences might be negative.”

“You could increasingly be in a world where states are constantly attacking each other,” Mazarr says. “In effect, in ways that some people brush off and say, ‘well, that’s just economic warfare,’ or ‘it’s just harassment,’ but others increasingly see as actually a form of conflict.”

Finally, it’s worth noting that the sensor information, so important to controlling unmanned systems, flows through data networks –- webs susceptible, at least in theory, to being hacked.

“When you’re in the creation of the partnership of human beings and robots, you’re into cyber warfare,” according to Ramo. “And you’ve got to be better than your enemies at that, or your robotic operations will not do you very much good.”

Susceptibility to being attacked with remote systems leads Mazarr to ask if the U.S. – with its highly interlinked, interdependent economy – might do better to try to limit the use of remote control systems rather than expanding their use.

“Given the likely proliferation of these kind of things to more and more actors, [and] given the vulnerability of the U.S. homeland, given the difficulty we have as a society in taking the actions necessary to make ourselves resilient against these kind of attacks — would it be better to move in the direction of an international regime to control, or limit, or eliminate the use of some of these things?”

Jody Williams thinks so.

In 1997 she was awarded the Nobel Peace Prize for a campaign that created an anti-landmine treaty.

“I know we can do the same thing with killer robots,” Williams says. “I know we can stop them before they ever hit the battlefield.”

She’s working with the group Human Rights Watch in an effort to do so.
