When I was in college, I had a lot of computer science and psychology majors as friends, and they all seemed starry-eyed with fascination over (and, I would say, duped by) Alan Turing's famous test for machine intelligence (or artificial intelligence, as many would call it).

Putting it into the terms of our day, suppose you are in a chat room interacting with someone you've never met, and the only way you can communicate is by typing text back and forth. If a sufficient period of time passes and you can't tell that you're interacting with a machine, voila! artificial intelligence!

One argument against the Turing Test: if such a machine were invented, what would be the difference between it and a really good simulation?

In 1980, the philosopher John Searle proposed the Chinese Room thought experiment in response.

More detail: The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a paper in 1980 by American philosopher John Searle (1932- ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument. (source)


Replies to This Discussion

I would say that "mind" and "intelligence" are either synonyms or are so overlapping in meaning as to be hard to distinguish from each other. Animals have intelligence to varying degrees.

Crows are extremely bright and can solve problems and recognize the usefulness of objects in their environment as tools. They can even see that a problem can be solved in several steps and pursue those steps to the solution. 

There have been parrots who can form original and meaningful sentences, distinguish between colors and shapes, etc.

Cats, even more than dogs, are pretty good at solving cognitive problems. Dogs have evolved to turn to humans once they run into a problem; cats have not. So maybe dogs are smart enough to ask a human for help.

Let's not even get into our cetacean friends (dolphins, whales).

My point is that if animals demonstrate intelligence, there is some sort of mind behind it.

The examples you gave show intelligent problem-solving behaviors, but they don't really define what the nature of the mind is or how it operates. Within the examples of animal intelligence, though, I noticed a commonality: they all demonstrate creativity and self-direction.

That's nonsense. Just ask any cat or dog owner.

Yet another confused description of consciousness in a machine.

It seems fairly simple to delineate the two: predictive programming and consciousness are not the same, not even close. If I program a light detector switch to turn my living room lights on when the sunlight in the room drops to a certain level, the light detector did not turn the lights on... I did.
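The light-detector switch described above amounts to a single comparison the programmer chose in advance. A minimal sketch, with the sensor reading and the 50-lux threshold as made-up values for illustration:

```python
# The "decision" the light switch makes is one fixed rule written by a person.
# LUX_THRESHOLD is an arbitrary illustrative value, not from any real device.

LUX_THRESHOLD = 50  # below this light level, turn the lights on


def lights_should_be_on(sensor_lux):
    """The whole 'decision': a single comparison the programmer chose."""
    return sensor_lux < LUX_THRESHOLD


print(lights_should_be_on(30))   # dim room  -> True
print(lights_should_be_on(200))  # bright room -> False
```

The point of the analogy survives the code: the sensor supplies a number, but the rule relating that number to an action was authored by the programmer.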

  Does that imply training you are turning your light off if you train your cat to do it as well?

I wish I could understand what you are asking there.

Oops... one of the hazards of typing with peripheral paresthesia. People and animals can be programmed for predictive behaviors through the application of adaptive behavioral modification. In fact, most human behavior is predictive.

Gary, I think the word "predictive" as you use it is a bit untidy. A computer program like Deep Blue is predictive based on the billions of moves programmed into it. The machine could analyze 200 million possible positions per second, looking at billions for each move.

Kasparov, on the other hand, looked about 3 to 4 moves ahead, analyzing about 50 positions.

So compare: fifty versus a billion. The first is a human playing chess with intuition. The second is a computer executing if/then commands over essentially every possible move. Words like "programmed" and "predictive" mean different things when applied to computers and humans.
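The brute-force lookahead being contrasted with intuition can be sketched in a few lines. Chess itself would need an engine library, so this stand-in uses tic-tac-toe, where exhaustive search is tiny; the mechanical character of the search is the same:

```python
# Minimal exhaustive minimax on tic-tac-toe, a stand-in for the kind of
# brute-force lookahead Deep Blue performed: every reachable position is
# scored by a fixed rule, with no intuition involved.
# Board: a 9-character string, 'X', 'O', or ' ' per square, row by row.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]


def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None


def minimax(board, player):
    """Return (score, move): +1 means X wins, -1 means O wins, 0 a draw."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best


# X to move with two in a row on top: search finds the winning square 2.
score, move = minimax('XX O O   ', 'X')  # score == 1, move == 2
```

Every branch is evaluated by the same if/then scoring rule, which is the sense in which the machine's choices are "already programmed into it."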

You're right about Deep Blue, but things have moved on, e.g.

http://www.technologyreview.com/view/541276/deep-learning-machine-t...

- a neural-network-based chess computer...

This is merely an example of machine learning. Netflix's recommendation engine operates on a similar principle. The move the machine makes is still programmed into it, whether as a hard command or a soft one. In other words, the machine would make the exact same moves against the same moves (unless it was programmed to randomize and not replicate a game), regardless of whether it lost the previous game, which is an indication it's not "learning" anything. It merely uses a statistic to write another command.

If I flip a coin to decide on chess moves, is the coin playing chess? I would submit this computer isn't playing chess either, the programmer is.
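The coin analogy can be made literal. A sketch, assuming a hypothetical list of legal moves supplied from outside: repeated coin flips narrow the list by halves until one move remains, so the coin "decides" without anything about the position informing the choice.

```python
import random

# The "coin player" from the analogy above. `legal_moves` is a hypothetical
# list supplied by something else (a real program would need a chess engine
# to generate it). Each coin flip discards half the remaining candidates.


def coin_flip_move(legal_moves, flip=lambda: random.randint(0, 1)):
    """Select one move purely by coin flips; `flip` returns 0 or 1."""
    moves = list(legal_moves)
    while len(moves) > 1:
        mid = len(moves) // 2
        moves = moves[mid:] if flip() else moves[:mid]
    return moves[0]


move = coin_flip_move(['e4', 'd4', 'Nf3', 'c4'])
```

Whether one wants to say the coin, the procedure, or the person who wrote the procedure is "playing" is exactly the question the analogy raises.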

I like that - coin playing chess :)

If I flip a coin to decide on chess moves, is the coin playing chess? I would submit this computer isn't playing chess either, the programmer is.

The phrase "is the coin playing chess" is ambiguous. It could mean (reworded) "is the tossing of the coin to decide a move still chess?" or (reworded) "is the object we call a coin a chess player when tossing it is used to determine the moves?"

I think I know which you mean, but please be clear.


© 2021   Created by Rebel.
