The military should avoid turning the conduct of warfare over to machines, because the focus today should be on ending war, not extending it into a robot era, Mark Gubrud, a physicist and expert on emerging technology and human security, told RT.
Robotics firm Boston Dynamics has designed a ‘Terminator-style’ robot named Atlas, built to replicate the human ability to balance and move.
RT: This robot is very agile, but the company that makes it is also making robots for military use. What concerns do you have about that?
Mark Gubrud: Absolutely. I think the Atlas robot has gotten a lot of attention because it is a humanoid robot - it looks a lot like the Terminator - and it was developed for the DARPA Robotics Challenge, which is pitched as disaster relief, but everybody understands that they are ultimately interested in robot soldiers as well.
I would tend, though, to look more at developments in the area of autonomous missiles and drones, and undersea and surface systems - things that look more like conventional weapons - as well as tanks and sentry systems that are robotized, that have artificial intelligence embedded in them which allows them to decide to attack a particular target on the basis of some sort of criteria.
The US at the moment is pushing ahead with what it calls semi-autonomous weapons systems. Ostensibly these are hunting for something that a human has already decided should be attacked; you then send the robot out on a mission to find an object that meets that description. Of course, if it finds more than one - as will typically happen in a war, where there is going to be more than one tank of the same type, more than one ship of the same type - the weapon is going to have to decide which one to attack. So it is essentially choosing targets.
RT: But there are benefits too - presumably if you have these robots, you don’t have to send soldiers to do the same job.
MG: Well, that’s right, but I’m not sure this means, in the long run, that soldiers or anybody else is going to be safer. I fear the rise of a new, or reheated, strategic arms race between the major powers - the US, Russia, and China - and others. That is not going to make anybody safe. Now, people say these autonomous weapons might be more selective and could actually protect civilians - do a better job of not hitting targets you don’t want to hit. But there is no particular reason why you have to take humans out of the loop in order to do that. If you have some kind of technology - artificial intelligence systems, whatever - that enables humans to make a better decision as to which target to attack, you don’t have to take the human out of the loop in order to use it.
RT: There might be a lot of benefits from having technology like this. How would you like to see robots being used?
MG: Robots can be used to do useful work for people, and we should judge everything they do by that standard: is it helping people or not? But I think that turning the conduct of warfare over to machines is extremely dangerous - it’s something we should avoid simply because we need to be ending war, not extending it into a robot era.
It could also fuel an arms race, and what these things can lead to is extremely dangerous. I support an agreement that would implement an international requirement for accountable human control and decision-making in every use of a weapon. This could be verifiable if nations would simply keep records: if they have a remotely operated weapon, keep records showing that a human made the decision to fire it, and keep those records encrypted and secret unless somebody comes along and says, “I was attacked by an illegal autonomous drone,” at which point they could prove it was not autonomous.
The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.