A United Nations conference is seeking to ban autonomous killing machines: killer robots that make their own battlefield decisions, which would make war utterly impersonal. The idea is that if someone is going to be killed, it should always ultimately be a human decision, not one made by a CPU.

If the past is any guide, just about every technology with lethal possibilities has been developed, not necessarily to be better than the enemy but to stay on a par with them.

Take a look at the following article. What are your thoughts?

**********

WHY THE UNITED NATIONS IS TALKING ABOUT KILLER ROBOTS
May 13, 2014
By ALYSSA NEWCOMB
Digital Reporter

Is it time to stop the Terminator in its tracks?

Some of the best and brightest leaders are meeting for a United Nations conference in Geneva, Switzerland, today to discuss what future threat killer robots could pose to the world, just like the part-man, part-machine cyborg that Arnold Schwarzenegger played in the Terminator film series.

Killer robots, or "lethal autonomous weapons systems" (LAWS), are machines that would be able to select their targets without direct human mediation. They don't fully exist yet; however, the dystopian idea has led to the first-ever meeting on the issue.

"I urge delegates to take bold action," Michael Møller, acting director-general of the United Nations office in Geneva, told attendees, according to a United Nations statement. "All too often international law only responds to atrocities and suffering once it has happened. You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control."

Among the issues that will be addressed at the meeting are what levels of autonomy and predictability exist in robots and a future look at the next steps in robotic technology, according to an agenda.

A Human Rights Watch report issued on the eve of the meeting said the fully autonomous weapons systems could also "undermine human dignity." In 2010, South Korean officials announced the installation of several semi-autonomous robotic machine guns along its border with North Korea.

The Campaign to Stop Killer Robots, which describes itself as an international coalition of non-governmental organizations working to ban fully autonomous weapons, live tweeted some of the discussion today in Geneva, where a slew of government representatives shared their thoughts and concerns.

Ronald Arkin, a roboticist at the Georgia Institute of Technology, said he supports the "call for a moratorium" on the weapons, but told the attendees today he believes a ban would be premature, according to tweets about his presentation.

"It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield," Arkin said in 2007, according to the Washington Post. "But I am convinced that they can perform more ethically than human soldiers."

Later this year, the group plans to reconvene to discuss what action, if any, should be taken against the robots ... or if we're safe from them taking over the world, for now.

Replies to This Discussion

I don't think we are really that far apart. We could quibble over how we define our "mistakes" vs their "atrocities". Or over how, in any conflict, both sides are partly right and partly wrong. And maybe I'm just sensitive to anyone calling themselves the good guys; probably goes back to being told that everything god does is good by definition because he's good, no matter how horrific it is.

I don't think W just turned out to be incorrect about Iraq; I think he and his admin knew they were full of it, and that they used our highly emotional state (I wanted to mash the fuck out of somebody too) following 9/11 to manipulate us into that war. I thought so then, and have seen no reasons to think otherwise. Saddam was clearly a bad guy, but we ignore lots of bad guys. Why him? Why then?

Some situations call for action, no doubt. I just want to be sure it's necessary for big picture, good vs evil reasons and not just a self-interest, we-want-their-oil reason. I'm doubly suspicious of the ones who use spoonfed information to convince us it's necessary. The harder they sell it, the more suspicious of their motives I become. And don't forget our country is still run by godfundies too.

Beanie: I was just reading your profile information. Perhaps you should compare the first three paragraphs of your About Me section to the arguments you presented in our recent exchanges.

We always need to remember two things when it comes to war technology:

If we develop a high-tech weapon, before long the enemy will develop either a similar and possibly better technology, or a way to counter it.

We also need to remember that if we do not develop a technology first, we may soon be dealing with it in the hands of an enemy.

There's no big payoff to NOT developing a war technology.

I see it as a win/lose situation all around. 

We either create something to defend us with the possibility of it causing corruption and chaos...or it could be our main defense for years upon years.

But, as you said...I am sure someone, somewhere will come up with something bigger and better.

Killer robots will be developed because we as a species are drawn to war and death. It's partly the trait that made us successful as a species. Most of our great inventions were created during periods of war.

I'm puzzled as to what purpose we will put these killer robots to, seeing as we already have drone planes and guided missiles. And if these robots are operated remotely, by human or computer, we will have to protect their activities from hackers.

Actually, unlike the stories by Isaac Asimov, the existing robots we have are all potentially capable of killing. We don't have the "three laws of robotics" envisaged by Asimov, and none of the machinery we have built to date contains any kind of moral code. Effectively your OP asks whether we should use our robots for killing. Should we tailor their construction towards military usage?

Again, the scale of armaments we already have (from drone to nuclear bomb) seems to me to be far more destructive than a potentially lethally equipped robot.

Most of our great inventions were created during periods of war.

It's discouraging to think of how this is true. For instance, nuclear weapons emerged from World War Two, but so did the technology for nuclear power, which supplies about 75% of the electricity used in formerly Nazi-occupied France.

It's encouraging to think that rivalry (such as competing ideologies) is a suitable substitute for war. For instance, most of the great inventions since the 1970s are in technology and most of those (including microprocessors) are results of the Apollo moon landings.

And if these robots are operated remotely, by human or computer, we will have to protect their activities from hackers.

You're right, with the recent downing of a US drone over Crimea being one example. I think this need is part of what's driving the development of autonomous robotic systems. A flying robot that operates independently and that ignores incoming radio signals during sensitive times of its mission is all but impervious to hacking.
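Just to make that concrete, here's a minimal sketch of what such a controller might look like in software. The phase names and the signature check are things I've invented for illustration, not anything taken from a real drone:

```python
# Hypothetical illustration only: a mission controller that ignores
# external radio commands while the vehicle is in a "sensitive" phase,
# so a hijacked or spoofed uplink cannot redirect it mid-mission.

SENSITIVE_PHASES = {"ingress", "target_area"}   # invented phase names

def handle_command(current_phase: str, command: dict) -> bool:
    """Return True if an externally received command should be obeyed."""
    if current_phase in SENSITIVE_PHASES:
        # During these phases the vehicle flies its pre-loaded plan
        # autonomously and discards all incoming radio traffic.
        return False
    return verify_signature(command)            # assumed authentication step

def verify_signature(command: dict) -> bool:
    # Placeholder: a real system would cryptographically authenticate
    # the ground station rather than read a flag off the message.
    return command.get("signed", False)
```

The point is simply that during the phases marked sensitive, nothing arriving over the radio can change the vehicle's behaviour, which closes off the spoofing route used against remotely piloted drones.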

We already have primitive forms of artificial intelligence, but artificial judgment calls (such as whether or not to blow up a prime terrorist target who is standing in a marketplace full of innocent people) introduce a whole new set of challenges.

Actually, unlike the stories by Isaac Asimov, the existing robots we have are all potentially capable of killing. We don't have the "three laws of robotics" envisaged by Asimov, and none of the machinery we have built to date contains any kind of moral code.

Not yet, but as autonomous robotic systems become increasingly sophisticated, the need to develop them with self-governing systems of ethics is growing.

The US Navy just awarded $7.5 million in grant money to researchers at Tufts, RPI, Brown, Yale and Georgetown to develop ways to build robots with a sense of right and wrong. What they'll come up with is anybody's guess, but at least conceptually the 'laws of robotics' may be coming in one form or another.
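At its most naive, such a layer might conceptually be nothing more than a veto check that runs before any engagement decision. The rules and field names in this toy sketch are invented purely to illustrate the shape of the idea, not what those researchers are actually building:

```python
# Toy illustration of an "ethics veto" layer: every proposed engagement
# must clear a set of hard constraints before the weapon may act.
# The rules and field names are invented for this example.

def ethics_veto(target: dict) -> bool:
    """Return True if the engagement must be refused."""
    rules = [
        lambda t: not t.get("positively_identified", False),  # uncertain ID
        lambda t: t.get("civilians_nearby", 0) > 0,           # bystanders present
        lambda t: t.get("surrendering", False),                # hors de combat
    ]
    return any(rule(target) for rule in rules)

proposed = {"positively_identified": True, "civilians_nearby": 3}
if ethics_veto(proposed):
    print("Engagement refused by the ethics layer")
```

The essential design choice is that the check can only refuse an engagement, never authorize one on its own.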

Robots will make mistakes, just like people. However, unlike people, they will only follow a stock program applied to all robots (of that type). People tend to make decisions individually. Each person runs an individual program composed of a combination of hard wiring, lifetime experiences, and any randomness thrown in by defective physiology, cosmic rays, etc.

Robot mistakes will be things like a potential target meeting the parameters that describe a target even though the subject doesn't actually belong to the target class (imagine an exterminating robot mistaking a small rat for a mouse, for example). And then there are cases like the war in Kosovo, where combatants on both sides may have looked and dressed in very similar ways.

Robots will be unemotional but they would still be capable of error.
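That "meets the parameters but isn't really in the target class" failure is easy to show in miniature. A toy classifier that identifies a mouse purely by body length will happily flag a young rat, because the rat satisfies every parameter the programmer wrote down (the numbers here are made up for illustration):

```python
# Toy example: classification by fixed parameters. A juvenile rat falls
# inside the length range the programmer chose for "mouse", so it gets
# flagged as a target even though it isn't one. Numbers are invented.

MOUSE_LENGTH_CM = (6.0, 10.0)   # assumed target parameter

def is_mouse(body_length_cm: float) -> bool:
    low, high = MOUSE_LENGTH_CM
    return low <= body_length_cm <= high

print(is_mouse(8.0))   # True -- an actual mouse
print(is_mouse(9.5))   # True -- a young rat: matches the parameters,
                       #         but doesn't belong to the target class
```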

Robot mistakes will be things like a potential target meeting the parameters that describe a target even though the subject doesn't actually belong to the target class (imagine an exterminating robot mistaking a small rat for a mouse, for example).

You're probably on the right track with the concept of target parameters. They wouldn't necessarily have to be complex, either. For instance, robot machine guns could be programmed to shoot any object of a certain size that enters a designated 'kill zone', like a hallway or a hayfield.

A 'kill zone' robotic gun that secures part of a war zone could be a better alternative to dropping cluster munitions from planes to create minefields (with no regard for what happens after the war ends).

But a killer computer mistaking rats for mice (or full-sized adults for small children) really is no consideration.
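To show how simple such a rule could be, the whole trigger decision might come down to a couple of comparisons: is the detected object inside the designated zone, and is it above a size cutoff? The zone coordinates and threshold below are invented for illustration:

```python
# Toy sketch of a "kill zone" trigger rule: engage only if a detected
# object is inside a designated rectangle and at least a minimum size.
# Zone coordinates and the size threshold are invented for illustration.

ZONE = {"x_min": 0.0, "x_max": 50.0, "y_min": 0.0, "y_max": 10.0}  # metres
MIN_SIZE_M = 1.0   # assumed size cutoff

def should_engage(x: float, y: float, size_m: float) -> bool:
    inside = (ZONE["x_min"] <= x <= ZONE["x_max"]
              and ZONE["y_min"] <= y <= ZONE["y_max"])
    return inside and size_m >= MIN_SIZE_M

print(should_engage(25.0, 5.0, 1.7))   # True: adult-sized object in the zone
print(should_engage(25.0, 5.0, 0.3))   # False: too small
print(should_engage(80.0, 5.0, 1.7))   # False: outside the zone
```

Everything the gun "knows" about a target is contained in those two comparisons.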

Robots will be unemotional but they would still be capable of error.

Assuming you mean fully independent and automated robots, we're actually talking about the computer systems which control the robots.

Computers are capable of error in the sense of breaking down (for reasons including physical damage, hardware failure, interference from radiation, or GIGO, garbage in, garbage out).

But a computer making an error in the sense of a 'mistake'? Never. Human programming gives computers an illusion of cleverness, but they are utterly mindless, like wind-up dolls or falling dominoes, and cannot err on their own.

Once a program is executed it either breaks down or does exactly and precisely what it's programmed to do. The outcome may be unexpected, but it's always explainable by the fact that humans did the programming.
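A trivial illustration of "unexpected, but always explainable by the humans who did the programming":

```python
# The machine does exactly what it was programmed to do; the "surprise"
# below is fully determined by the binary floating-point format that
# human designers chose, not by the computer making a mistake.

print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False -- unexpected to many people, yet
                          # perfectly explainable and perfectly repeatable
```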
