A United Nations conference is seeking to ban autonomous killing machines. Basically, this refers to killer robots that make their own battlefield decisions, which would make war absolutely impersonal. The idea is that if someone is going to be killed, it should always ultimately be a human decision, not one made by a CPU.
If the past is to be a guide, just about every technology with lethal possibilities has been developed, not necessarily to be better than the enemy but to be on a par with them.
Take a look at the following article. What are your thoughts?
WHY THE UNITED NATIONS IS TALKING ABOUT KILLER ROBOTS
May 13, 2014
By ALYSSA NEWCOMB
Is it time to stop the Terminator in its tracks?
Some of the best and brightest leaders are meeting for a United Nations conference in Geneva, Switzerland, today to discuss the future threat that killer robots, like the part-man, part-machine cyborg Arnold Schwarzenegger played in the Terminator film series, could pose to the world.
Killer robots, or "lethal autonomous weapons systems" (LAWS), are machines that would be able to select their targets without direct human mediation. They don't fully exist yet, but the dystopian idea has led to the first-ever meeting on the issue.
"I urge delegates to take bold action," Michael Møller, acting director-general of the United Nations office in Geneva, told attendees, according to a United Nations statement. "All too often international law only responds to atrocities and suffering once it has happened. You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control."
Among the issues that will be addressed at the meeting are what levels of autonomy and predictability exist in robots and a future look at the next steps in robotic technology, according to an agenda.
A Human Rights Watch report issued on the eve of the meeting said the fully autonomous weapons systems could also "undermine human dignity." In 2010, South Korean officials announced the installation of several semi-autonomous robotic machine guns along its border with North Korea.
The Campaign to Stop Killer Robots, which describes itself as an international coalition of non-governmental organizations working to ban fully autonomous weapons, live tweeted some of the discussion today in Geneva, where a slew of government representatives shared their thoughts and concerns.
Ronald Arkin, a roboticist at the Georgia Institute of Technology, said he supports the "call for a moratorium" on the weapons, but told the attendees today he believes a ban would be premature, according to tweets about his presentation.
"It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield," Arkin said in 2007, according to the Washington Post. "But I am convinced that they can perform more ethically than human soldiers."
Later this year, the group plans to reconvene to discuss what action, if any, should be taken against the robots ... or if we're safe from them taking over the world, for now.
How does it work? Does one declare war, tell the enemy "en garde!", and then attack? Or does one declare war by attacking? If it's the latter, every war is started by a brutal peacetime sneak attack.
Some suggest that Washington knew in advance of the attack on Pearl Harbor but kept that information from commanders on the ground. This was done, some suggest, so the attack would be a surprise and its effect would be more profound, giving Washington the impetus it was looking for to attack Japan. I think about 80% of Americans were against entering the war before Pearl Harbor. Perhaps that's all conspiracy theorist claptrap. But it sounds remarkably similar to conspiracy theories surrounding 9/11 and Iraq. Could history be repeating itself?
That's pretty black and white thinking.
Pretty much. It's an essential rule not to wage war unless we think we're the good guys, defending ourselves from bad guys. I only argued for using more precise weaponry, pointing out how traditional weaponry is usually more destructive than necessary. I'll even argue that precision battle can produce more humane results than traditional weaponry even when we're the bad guys. The real solution to your black and white scenarios of moralism is this: if and when we're the bad guys, we should not be waging war with any kind of weaponry.
The internets crashed before I could finish writing.
I assume that your answer to my question about Al Qaeda and Boko Haram is that we shouldn't even bother them, because we're always just bad guys, relative to their cuddly goodness. That was the point you were trying to make, bringing up our use of atomic bombs and such, right? However, I still say (while remaining on topic), maybe a few intelligent robots and drones could have administered a kind of shock and awe to a few critical Japanese people, without devastating populations, Bush style.
WTF? Did you even read what I said?
Yes, I did. In case I wasn't clear enough, my point was that the black and white thinking (which you accused me of first) is necessary, at least in considering right vs. wrong, good vs. evil, and so on. I think it's wrong to make war unless we've seriously considered whether we're doing it for morally acceptable reasons. Bush did that (imo), but he was wrong (imo). We also need to learn how to make sure our end game is equally moral and successful, instead of just leaving countries broken and in turmoil. Most of the Iraqis we killed were unwilling victims of Saddam even before we got all righteous and invaded. I'm saying that we can't let our failures and atrocities cloud our judgment, such as in the case of purposefully designing and building more precise and humane weaponry.
My only point was that to think of ourselves as the "good guys" is naive.
Here's where we disagree. We make mistakes, but that shouldn't prevent us from executing war when necessary, as the good guys. And it shouldn't prevent us from developing more powerfully precise weapons. Yeah, I'd like very much to just eliminate those evil assholes who think it's good to conduct beheadings, mass murder, and kidnapping of civilians--especially schoolgirls--just to make their ideological/godfundie point. So what if they also believe they're the good guys? Maybe someday we can just get together at a picnic and discuss things, but not for a while.
Beanie: I was just reading your profile information. Perhaps you should compare the first three paragraphs of your About Me section to the arguments you presented in our recent exchanges.
We always need to remember two things when it comes to war technology:
If we develop a high-tech weapon, before long the enemy will develop either a similar and possibly better technology, or a way to counter it.
And if we do not develop a technology first, we may soon be dealing with it in the hands of an enemy.
There's no big payoff to NOT developing a war technology.
I see it as a win/lose situation all around.
We either create something to defend us with the possibility of it causing corruption and chaos...or it could be our main defense for years upon years.
But, as you said...I am sure someone, somewhere will come up with something bigger and better.
Robots will make mistakes, just like people. However, unlike people, they will only follow a stock program applied to all robots (of that type). People tend to make decisions individually. Each person has an individual program made up of a combination of hard wiring, lifetime experiences, and any randomness thrown in by defective physiology, cosmic rays, etc.
Robot mistakes will be things like a potential target meeting the parameters that describe a target even though the subject doesn't actually belong to the target class (imagine an exterminating robot mistaking a small rat for a mouse, for example). And then there's the case of a war like Kosovo, where combatants on both sides may have looked and dressed in very similar ways.
Robots will be unemotional but they would still be capable of error.