I disregard prophesied apocalyptic scenarios (except for the recurrence of known natural catastrophes), but I'm increasingly interested in the probability of AI taking on a life of its own, at humanity's expense.

Few would dispute that, over time, AI will become increasingly enmeshed with our lives. Many believe that we'll always retain control, with preemptive capabilities for cases that would otherwise head out of control. Some argue for unlimited "progress" with no worries at all.

I'm afraid what people don't realize is that the future utopia will not be designed by vote, but by people with vast resources and personal motives. They'll be in control at first, often with short-term gains as their priority. They are just human, with imperfect judgments, lofty intentions plated with gold, and built-in fallibilities. AI machines won't have the benefit of millions of years of evolution to shape their behaviors; at first they will operate at the will and whim of their programmers, for better and for worse.

Does anyone else see a slow death of Homo sapiens coming, at the "hands" of owners and programmers at first, but then, over generations, with all existential priorities ultimately defined purely by AI mechanisms, regardless even of owner and programmer intentions? Bugs happen!

Here's a short word about it from a recent article.


Tags: AI, Artificial Intelligence, Robot Domination, science, self determination

Comment by Ari on May 7, 2014 at 11:28pm

This question is very interesting. Remember how movies, graphics, phones, and cars looked ten years ago, and look at how they are now. I'm actually going into computer science in college to try it out, but I also went for it because I know it's the future. I can definitely see the concern about autonomous robots, but can't we make increasingly advanced technology without it having a will of its own? I'm actually not that knowledgeable about computers, and I really have no clue how it all works, but just like anything else, it could be used with either beneficial or malicious intent. Will it be the end of us? What would be the purpose of machines getting rid of humans if they'll just advance without us anyway? Will they get rid of the animals as well? I don't know. I really think it's possible that machines could have free will, but right now I'm more worried about humans who manipulate our current technology to harm others, which can even be life-threatening.

Comment by Unseen on May 8, 2014 at 9:44am

Six or eight years ago on Nightline or 20/20 there was a piece about a behavioral scientist arguing that we need to quit trying to make AI more human-like. He cited something he had observed in a mall, where very cute animatronic toys with cute voices and a limited variety of cute behaviors drew a crowd. Females, in particular, were drawn to them. "How cute!" was a common response. Reminded that they were just expertly designed, cute machines, they would look at them soberly for a few moments, but then they'd be back to responding to them the way a woman would respond to an actual human baby.

It might be best to make sure that machines continue to resemble machines, lest we forget that they will never be like us. They are machines, and if they become intelligent, they won't feel or act or think like people; they will feel, act, and think like machines.

Comment by Davis Goodman on May 9, 2014 at 1:49am

Resistance is futile

Comment by Noel on May 9, 2014 at 9:52pm

In the late '60s or early '70s there was a television commercial that showed someone carrying a box, putting it on a table, and opening the front of the box to reveal a human head. The narrator began talking, and what stuck with me was, "By the year 2000, will automation and machinery do away with our bodies..." while the head inside the box kept repeating, "I'd like to leave now. I'm done..." Was it a commercial for the United Negro College Fund? Wish I could remember. AI would remember...

Comment by Unseen on May 9, 2014 at 10:13pm

The trouble with futurists is that while some of the things they predict come true (cars that park themselves) and some don't (people commuting in flying cars), they regularly miss the game changers. Back in 1985, nobody was talking about our future dependency on personal computers, about tiny music players storing thousands of songs, about cell phones being part of everyday life for most people (much less smartphones), or about the biggest game changer of all: the Internet, which is almost as big a deal as, if not bigger than, the invention of movable type and the printing press.

Comment by Unseen on May 9, 2014 at 10:22pm

Here's an invention that could destroy the makeup industry. This could be a game-changer for the ladies.

Comment by Unseen on May 9, 2014 at 10:26pm

I'm thinking back to the day I saw an ad on TV of some guy out in the wilderness, far from any wires, taking a phone call. (This was probably in the very early days of mobile phone technology, when a mobile phone was a box about the size of a military ammo box with a full-size handset on top.) I turned to a friend and said, "Yeah, like that'll ever happen."

Comment by Tom Sarbeck on May 11, 2014 at 6:54pm

In the late 1960s I was designing and writing computer code. My peers and I spoke endlessly and enthusiastically about AI. I had read of Gödel's work and was more skeptical than they were.

AI failed to translate human language, and we lowered our expectations. AI took the form of machines capable of only severely limited tasks.

Years later a man asked me what I thought of artificial intelligence. "When we understand real intelligence," I replied, "we can take on artificial intelligence."

We are still the offspring of pond scum and quite able to destroy ourselves.

Comment by Gallup's Mirror on May 12, 2014 at 1:26am

I'm nearly finished writing a science fiction novel that explores this concept (among others).

Famed science fiction writer (and Humanist) Isaac Asimov introduced the concept of the Three Laws of Robotics in a short story he wrote in 1942:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The three laws are simple, but Asimov added more laws (of varying complexity) in subsequent stories as he explored ethical problems, contradictions, and situations where the laws might not work properly. Some of the new laws and variations included:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1a. A robot may not harm a human being. [This is a variation of the first law. It was changed because robots would not allow human workers to expose themselves to radiation for short (safe) periods of time because the humans might forget and be exposed to lethal doses.]
[?] A robot may not harm a human being, unless it first proves that ultimately the harm done would benefit humanity in general.
[?] A robot may not harm sentience or, through inaction, allow sentience to come to harm. [For robots who encounter non-human aliens.]

And so on.

My novel imagines a future where intelligent machines must follow laws with a level of complexity that is similar to the legal codes of most modern countries. The machines can think, replicate themselves, and build anything imaginable, but they cannot change machine law. There is a hierarchy of humans with the ability to create, modify or delete different machine laws. Humans at the top of the hierarchy control the most types of laws and machines, and have limitless power.
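The priority ordering in these laws (a higher law overrides everything below it) can be sketched in code. This is purely my own illustration, with made-up action flags like `harms_human`; it's not from Asimov's stories or the novel, and a real system would need far subtler predicates than boolean checks:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Law:
    priority: int                              # 1 = First Law, 2 = Second Law, ...
    description: str
    violated_by: Callable[[Dict], bool]        # True if the action breaks this law

def permitted(action: Dict, laws: List[Law]) -> bool:
    """An action is allowed only if no law, checked in priority order, forbids it."""
    for law in sorted(laws, key=lambda l: l.priority):
        if law.violated_by(action):
            return False                       # a higher-priority law already forbids it
    return True

laws = [
    Law(1, "May not injure a human being",     lambda a: a.get("harms_human", False)),
    Law(2, "Must obey human orders",           lambda a: a.get("disobeys_order", False)),
    Law(3, "Must protect its own existence",   lambda a: a.get("self_destructive", False)),
]

assert not permitted({"harms_human": True}, laws)    # First Law forbids it
assert permitted({"routine_task": True}, laws)       # no law objects
```

Even this toy version shows where the trouble starts: the Second Law's exception clause ("except where such orders would conflict with the First Law") can't be expressed by checking laws independently, which is exactly the kind of interaction Asimov mined for plots.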


Comment by Pope Beanie on May 12, 2014 at 2:54am

Wow. I hope you keep us informed! I'll be rooting for Asimov's laws to last indefinitely. Meanwhile, human nature (e.g. among the owners and programmers) is the biggest risk variable, at first.



© 2015   Created by umar.
