A United Nations conference is seeking to ban autonomous killing machines. Basically, this refers to killer robots that make their own battlefield decisions, which would make war absolutely impersonal. The idea is that if someone is going to be killed, it should always ultimately be a human decision, not one made by a CPU.

If the past is to be a guide, just about every technology with lethal possibilities has been developed, not necessarily to be better than the enemy but to be on a par with them. 

Take a look at the following article. Then, what are your thoughts?


May 13, 2014
Digital Reporter

Is it time to stop the Terminator in its tracks?

Some of the best and brightest leaders are meeting for a United Nations conference in Geneva, Switzerland, today to discuss what future threat killer robots could pose to the world, just like the part-man, part-machine cyborg that Arnold Schwarzenegger played in the Terminator film series.

Killer robots, or "lethal autonomous weapons systems" (LAWS), are machines that would be able to select their targets without direct human mediation. They don't fully exist yet; however, the dystopian idea has led to the first-ever meeting on the issue.

"I urge delegates to take bold action," Michael Møller, acting director-general of the United Nations office in Geneva, told attendees, according to a United Nations statement. "All too often international law only responds to atrocities and suffering once it has happened. You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control."

Among the issues that will be addressed at the meeting are what levels of autonomy and predictability exist in robots and a future look at the next steps in robotic technology, according to an agenda.

A Human Rights Watch report issued on the eve of the meeting said the fully autonomous weapons systems could also "undermine human dignity." In 2010, South Korean officials announced the installation of several semi-autonomous robotic machine guns along its border with North Korea.

The Campaign to Stop Killer Robots, which describes itself as an international coalition of non-governmental organizations working to ban fully autonomous weapons, live tweeted some of the discussion today in Geneva, where a slew of government representatives shared their thoughts and concerns.

Ronald Arkin, a roboticist at the Georgia Institute of Technology, said he supports the "call for a moratorium" on the weapons, but told the attendees today he believes a ban would be premature, according to tweets about his presentation.

"It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield," Arkin said in 2007, according to the Washington Post. "But I am convinced that they can perform more ethically than human soldiers."

Later this year, the group plans to reconvene to discuss what action, if any, should be taken against the robots ... or if we're safe from them taking over the world, for now.


Replies to This Discussion

How does it work?  Does one declare war, tell the enemy "en garde!", and then attack?  Or does one declare war by attacking?  If it's the latter, every war is started by a brutal peacetime sneak attack.

Some suggest that Washington knew in advance of the attack on Pearl Harbor but kept that information from commanders on the ground.  This was done, some suggest, so the attack would be a surprise and its effect would be more profound, giving Washington the impetus it was looking for to attack Japan.  I think about 80% of Americans were against entering the war before Pearl Harbor.  Perhaps that's all conspiracy theorist claptrap.  But it sounds remarkably similar to conspiracy theories surrounding 9/11 and Iraq.  Could history be repeating itself?

That's pretty black and white thinking.

Pretty much. It's an essential rule not to wage war unless we think we're the good guys, defending ourselves from bad guys. I only argued for using more precise weaponry, pointing out how traditional weaponry is usually more destructive than necessary. I'll even argue that precision battle can produce more humane results than traditional weaponry even when we're the bad guys. The real solution to your black and white scenarios of moralism is this: if and when we're the bad guys, we should not be waging war with any kind of weaponry.

The internets crashed before I could finish writing.

I assume that your answer to my question about Al Qaeda and Boko Haram is that we shouldn't even bother them, because we're always just bad guys, relative to their cuddly goodness. That was the point you were trying to make, bringing up our use of atomic bombs and such, right? However, I still say (while remaining on topic), maybe a few intelligent robots and drones could have administered a kind of shock and awe to a few critical Japanese people, without devastating populations, Bush style.


WTF? Did you even read what I said? I didn't say we're the bad guys, or monsters. I only said our hands are not clean, meaning maybe we should tone down the self-righteous "good guys" talk.

Did you say MY black and white scenarios of moralism? Which ones are those? You are the one talking about good guys and bad guys, as if there are only two categories.

And by the way, in a war, BOTH sides think they are on the side of right.

I agree that more precise weaponry is preferable to carpet-bombing. I just worry that we, the American people, could become complacent about war if it's fought without our soldiers in harm's way. We would need to be MORE vigilant about choices our leaders make. After all, they have lied to get us into wars before.

WTF? Did you even read what I said?

Yes, I did. In case I wasn't clear enough, my point was that black and white thinking (which you accused me of first) is necessary, at least in considering right vs. wrong, good vs. evil, and so on. I think it's wrong to make war unless we've seriously considered whether we're doing it for morally acceptable reasons. Bush did that (imo), but he was wrong (imo). We also need to learn how to make sure our end game is equally moral and successful, instead of just leaving countries broken and in turmoil. Most of the Iraqis we killed were unwilling victims of Saddam even before we got all righteous and invaded. I'm saying that we can't let our failures and atrocities cloud our judgment, such as in the case of purposefully designing and building more precise and humane weaponry.

My only point was that to think of ourselves as the "good guys" is naive.

Here's where we disagree. We make mistakes, but that shouldn't prevent us from executing war when necessary, as the good guys. And it shouldn't prevent us from developing more powerfully precise weapons. Yeah, I'd like very much to just eliminate those evil assholes who think it's good to conduct beheadings, mass murder, and kidnapping of civilians, especially schoolgirls, just to make their ideological/godfundie point. So what if they also believe they're the good guys? Maybe someday we can just get together at a picnic and discuss things, but not for a while.

I don't think we are really that far apart. We could quibble over how we define our "mistakes" vs their "atrocities". Or over how, in any conflict, both sides are partly right and partly wrong. And maybe I'm just sensitive to anyone calling themselves the good guys; probably goes back to being told that everything god does is good by definition because he's good, no matter how horrific it is.

I don't think W just turned out to be incorrect about Iraq; I think he and his admin knew they were full of it, and that they used our highly emotional state (I wanted to mash the fuck out of somebody too) following 9/11 to manipulate us into that war. I thought so then, and have seen no reasons to think otherwise. Saddam was clearly a bad guy, but we ignore lots of bad guys. Why him? Why then?

Some situations call for action, no doubt. I just want to be sure it's necessary for big picture, good vs evil reasons and not just a self-interest, we-want-their-oil reason. I'm doubly suspicious of the ones who use spoonfed information to convince us it's necessary. The harder they sell it, the more suspicious of their motives I become. And don't forget our country is still run by godfundies too.

Beanie:  I was just reading your profile information.  Perhaps you should compare the first three paragraphs of your About Me section to the arguments you presented in our recent exchanges.

We always need to remember two things when it comes to war technology:

First, if we develop a high-tech weapon, before long the enemy will develop either a similar (and possibly better) technology or a way to counter it.

Second, if we do not develop a technology first, we may soon be dealing with it in the hands of an enemy.

There's no big payoff to NOT developing a war technology.

I see it as a win/lose situation all around. 

We either create something to defend us with the possibility of it causing corruption and chaos...or it could be our main defense for years upon years.

But, as you said...I am sure someone, somewhere will come up with something bigger and better.

Killer robots will be developed because we as a species are drawn to war and death. It's partly the trait that made us successful as a species. Most of our great inventions were created during periods of war.

I'm puzzled as to what purpose we will put these killer robots, seeing as we already have drone planes and guided missiles. And if these robots are operated remotely, by human or computer, we will have to protect their activities from hackers.

Actually, unlike the stories by Isaac Asimov, the existing robots we have are all potentially capable of killing. We don't have the "three laws of robotics" envisaged by Asimov, and none of the machinery we have built to date contains any kind of moral code. Effectively your OP asks whether we should use our robots for killing. Should we tailor their construction toward military usage?

Again, the scale of armaments we already have (from drone to nuclear bomb) seems to me far more destructive than a potentially lethally equipped robot.

Robots will make mistakes, just like people. However, unlike people, they will only follow a stock program applied to all robots (of that type). People tend to make decisions individually. Each person runs an individual program composed of a combination of hard wiring, lifetime experiences, and any randomness thrown in by defective physiology, cosmic rays, etc.

Robot mistakes will be things like a subject meeting all the parameters that describe a target while not actually belonging to the target class (imagine an exterminating robot mistaking a small rat for a mouse, for example). And then there's the case of a war like Kosovo, where combatants on both sides may have looked and dressed in very similar ways.
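The rat-for-mouse mistake above can be sketched in a few lines. This is purely illustrative: the `is_mouse` rule and its thresholds are invented for the example, not drawn from any real system. The point is that a stock, parameter-based program gives the same answer to every subject that fits the parameters, whether or not it truly belongs to the target class:

```python
# Hypothetical sketch: a robot classifies subjects as "mouse" purely by
# threshold tests on measurable parameters. Any subject satisfying every
# threshold is declared a mouse -- including a juvenile rat.

def is_mouse(weight_g, body_length_cm, tail_length_cm):
    """Rule-based target test; all thresholds are invented for illustration."""
    return (weight_g <= 40
            and body_length_cm <= 10
            and tail_length_cm <= 11)

# A typical house mouse: fits the parameters and really is a mouse.
print(is_mouse(weight_g=25, body_length_cm=8, tail_length_cm=9))   # True

# A juvenile rat: fits every parameter too, but is NOT a mouse.
# Same inputs within the thresholds, same answer -- a false positive
# that every robot running this stock program will repeat identically.
print(is_mouse(weight_g=38, body_length_cm=9, tail_length_cm=10))  # True
```

A person might hesitate over the borderline case; the program cannot, because hesitation isn't one of its parameters.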

Robots will be unemotional but they would still be capable of error.


© 2022   Created by Rebel.
