I disregard prophesied apocalyptic scenarios (except for the recurrence of known natural catastrophes), but I'm increasingly interested in the probability of AI taking on a life of its own, at humanity's expense.

Few would argue that, over time, AI will become increasingly enmeshed with our lives. Many believe that we'll always retain control, with preemptive capabilities for cases that would otherwise spin out of control. Some argue for unlimited "progress," no worries.

I'm afraid what people don't realize is that the future utopia will not be designed by vote, but by people with vast resources and personal motives. They'll be in control at first, often with short-term gains as their priority. They are just human, with imperfect judgments, lofty intentions plated with gold, and built-in fallibilities. AI machines won't have the benefit of millions of years of evolution to shape their behaviors, but will operate at first at the will and whim of their programmers, for better and for worse.

Does anyone else see a slow death of Homo sapiens coming, at the "hands" of owners and programmers at first, but then, over generations, with all existential priorities ultimately defined purely by AI mechanisms, regardless even of owner and programmer intentions? Bugs happen!

Here's a short word about it from a recent article.


Comment by Unseen on May 8, 2014 at 9:44am

Six or eight years ago on Nightline or 20/20 there was a piece about a behavioral scientist arguing that we need to quit trying to make AI more human-like. He cited something he had observed in a mall, where very cute animatronic toys with cute voices and a limited variety of cute behaviors drew a crowd. Females, in particular, were drawn to them. "How cute!" was a common response. Reminded that they were just expertly designed, cute machines, they would look at them soberly for a few moments, but then they'd be back to responding to them the way a woman would respond to an actual human baby.

It might be best to make sure that machines continue to resemble machines lest we forget that they will never be like us. They are machines, and if they become intelligent, they won't feel or act or think like people, they will feel, act, and think like machines.

Comment by Davis Goodman on May 9, 2014 at 1:49am

Resistance is futile

Comment by Noel on May 9, 2014 at 9:52pm

In the late '60s or early '70s there was a television commercial that showed someone carrying a box, putting it on a table, and opening the front of the box to reveal a human head. The narrator began talking, and what stuck with me was, "By the year 2000, will automation and machinery do away with our bodies..." while the head inside the box kept repeating, "I'd like to leave now. I'm done..." Was it a commercial for the United Negro College Fund? Wish I could remember. AI would remember...

Comment by Unseen on May 9, 2014 at 10:13pm

The trouble with futurists is that while some of the things they predict come true (cars that park themselves) and some don't (people commuting in flying cars), they regularly miss the game changers. Back in 1985, nobody was talking about our future dependency on personal computers, about tiny music players storing thousands of songs, about cell phones being a part of everyday life for most people (much less smartphones), or about the biggest game changer of all: the Internet, which is almost as big a deal as, if not bigger than, the invention of movable type and the printing press.

Comment by Unseen on May 9, 2014 at 10:22pm

Here's an invention that could destroy the makeup industry. This could be a game-changer for the ladies.

Comment by Unseen on May 9, 2014 at 10:26pm

I'm thinking back to the day I saw an ad on TV of some guy out in the wilderness, far from any wires, taking a phone call. (This was probably in the very early days of mobile phone technology, when a mobile phone was a box about the size of a military ammo box with a full-size handset on top.) I turned to a friend and said, "Yeah, like that'll ever happen."

Comment by Tom Sarbeck on May 11, 2014 at 6:54pm

In the late 1960s I was designing and writing computer code. My peers and I spoke endlessly and enthusiastically about AI. I had read of Gödel's work and was more skeptical than they.

AI failed to translate human language, and we lowered our expectations. AI took the form of machines capable of only severely limited tasks.

Years later a man asked me what I thought of artificial intelligence. "When we understand real intelligence," I replied, "we can take on artificial intelligence."

We are still the offspring of pond scum and quite able to destroy ourselves.

Comment by Pope Beanie on May 12, 2014 at 2:54am

Wow. I hope you keep us informed! I'll be rooting for Asimov's laws to last indefinitely. Meanwhile, human nature (e.g. among the owners & programmers) is the biggest risk variable, at first.

Comment by Tom Sarbeck on May 12, 2014 at 5:31am

GM, suppose one of your robots has a flaw that looks like a lack of empathy -- like a sociopath -- and your other robots have to find the flaw and choose what to do about it.

Comment by Noel on May 12, 2014 at 7:12am

When I served in the U.S. Navy, aboard an aircraft carrier, our squadrons of A-7 Corsairs could be programmed to fly to their destinations, drop their ordnance, and fly back to the carrier, where the pilot would have no interaction with the aircraft and would perform a "hands-free" landing. Saw one pilot do that, btw. Had his hands pressed to the canopy as his plane slammed onto the deck and caught the arresting cable.

This was 1978! I think our iPhones have more processing power than the computers in those aircraft, and yet the programmers didn't give a shit; they programmed those things to land an airplane on the flight deck of an aircraft carrier.
