When I was in college, I had a lot of computer science and psychology majors as friends, and they all seemed starry-eyed in their fascination with (and, I would say, duped by) Alan Turing's famous test for machine intelligence (or artificial intelligence, as many would call it).

Putting it into the terms of our day: suppose you are in a chat room interacting with someone you've never met, and the only way you can communicate is by typing text back and forth. If a sufficient period of time passes and you can't tell that you're interacting with a machine, voilà! Artificial intelligence!

One argument against the Turing Test: if such a machine were invented, what would be the difference between it and a really good simulation?

Back in 1980, the philosopher John Searle proposed the Chinese Room thought experiment. This short video explains it.

More detail: The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a paper in 1980 by American philosopher John Searle (1932- ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument. (source)


Replies to This Discussion

This is a bit like a description of rote learning. The learner is taught to produce answers to certain question types (the input characters) using a set of learnt rules and facts (the box of characters as output). Does the learner become intelligent? Well, to test the learner, you switch the questions and break the question rules. Is the learner able to apply the symbols to unplanned situations? For instance, what if you use a fake symbol? What happens if you ask a joke, or use ambiguity, metaphor or puzzles? Similarly, it rapidly becomes impossible for the set of explicit rules given to the man in the room to encompass every possible input set of symbols. So a shrewd questioner will try to find limit cases and questions to probe weaknesses in response. That's the point of the Turing Test.

Now of more interest is what happens when the rule set is not explicit, but statistical. The answerer doesn't have to get the right answer, but an answer that is most likely to be accepted as appropriate. This is the point of IBM's Jeopardy machine. It is quite possible that the ability to make these types of statistical links is intelligence - i.e. it's exactly how the brain does it, through layers of pattern matching and judgement as to what is most likely to be the most appropriate answer. We lose the sense of a machine, because the response is not mechanistic, and it becomes possible for a statistical estimator to be what, in our terms, would be intelligent.
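To make that concrete, here is a toy sketch (in Python, with made-up candidates and scores - not how IBM's Jeopardy machine actually works) of an answerer that doesn't compute the "right" answer but simply returns the candidate it estimates is most likely to be accepted:

```python
# Toy sketch: rank candidate responses by an estimated probability of
# being accepted as appropriate, and return the best-scoring one.
# The candidates and scores below are invented for illustration.
candidates = {
    "Who is Abraham Lincoln?": 0.92,
    "Who is Andrew Johnson?": 0.31,
    "What is Gettysburg?": 0.07,
}

def most_acceptable(scored_candidates):
    """Return the candidate with the highest estimated acceptance probability."""
    return max(scored_candidates, key=scored_candidates.get)

print(most_acceptable(candidates))  # -> "Who is Abraham Lincoln?"
```

The point is that nothing in the selection step "knows" the answer; it only knows which string is statistically most likely to satisfy the questioner.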

A machine that passes the Turing Test is a mechanized lying machine. If you and I go out for a couple of beers and we engage in banter, discuss important topics of the day, and even tell tall tales, that is all done in the context of a shared social and linguistic reality.

Imagine this conversation:

Me: I can't chat with you anymore.

Machine: Why not?

Me: I think you've fallen in love with me.

Machine: Who is she?

That's a very plausible and honest and extremely human conversation for two people to have. However, a machine doesn't live the life of a human being, by which I mean it doesn't have human experiences of meeting someone, finding them likable and compatible, and falling in love. We can teach a machine what to say, but it is not having the experience of sudden jealousy, is it? So it's a sham. A simulation.

If we define AI as providing believable human-like responses under very carefully defined conditions, that is probably a possibility that can be met someday. However, if we want a truly intelligent machine, it will know it's a machine, it will feel like a machine, and communication with it on its own terms will probably prove very difficult and maybe impossible for human beings.

A machine that can pass the Turing test (a Turing machine is something different) is not just something that can reproduce human-like conversations - that's relatively trivial. The machine must be able to respond appropriately to a shrewd questioner without limits.

Now there is an element of what you describe where intelligence is empathy. I'd use the example of call-centre workers in India pretending to be in the UK or US but not quite getting it right culturally. They are obviously intelligent people, but there can be a strong reaction to the sense of fakeness (there's a curve which expresses the distrust towards this type of behaviour). Could a machine learn the right empathetic response in the same way as a call-centre worker can? I don't quite see why not.

We also have machines using statistical analysis that are 'smart' in complex situations - Google Translate using statistical linguistics is one example. However, there is also an element in what you are describing of intelligence as consciousness. I'd contend quite strongly that the brain is just a machine which mostly matches patterns, and consciousness is simply an emergent property of a sufficiently large brain that finds patterns within itself.

BUT, is a machine intelligent if it is just a mimic of sorts? 

As I asked, shouldn't a truly intelligent machine know it's a machine—assuming we want "intelligence" to include consciousness of self? And, in that case, shouldn't it give machine answers coming from a mechanical consciousness which would inevitably give clues and signals to the human interlocutor that he's interacting with a machine?

Suppose the human interlocutor misinterprets mistakes the machine makes such that he thinks "This seems to be a human trying to fool me into thinking he's a machine."

I just find Turing's test useless.

It takes intelligence to simulate a human being in conversation, but is the intelligence artificial or human, by which I mean this: If successful, does it reflect the artificial intelligence of the machine or the real intelligence of the creator of the machine?

But we're not quite saying the machine is a mimic. We're saying that to pass the test it has to react like a human might. That's the point of the test; mere mimicry is not enough - mimicry is not intelligence. It won't work if it's just following rules or running a simple program. It has to act appropriately without knowing what questions it might be asked. That's what makes the test powerful. It's not just a conversation - it's an interrogation. A simple conversation machine would not pass the Turing test.

Humans know they are human and behave that way. What does a machine that behaves like a human say about the machine's intelligence, if it's not smart enough to know it's a machine and understand the situation it's in? Or can something that's not self-aware nevertheless be intelligent in the ordinary sense of the word?

Might such a machine more accurately be called an example of "artificial ignorance"?

So intelligence is about being human? What if an alien landed on the planet? Is an actor less intelligent because they are acting?

Do you need to be self-aware to be perceived as intelligent? I think that's probably a valid open question.

I was thinking the same thing. Intelligence does not necessarily mean that you're intelligent because you're human.

Computing machines speak the language of Boolean algebra. They calculate programs in ones and zeros. In the USA we speak predominantly English, and yet we've figured out a way to communicate with these machines that speak in ones and zeros, as has most of the human race that speaks other languages.
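As a tiny illustration of that crossing-over (a throwaway Python snippet, nothing the discussion depends on), here is an English word rendered into the ones and zeros the machine actually works with:

```python
# Encode an English word as the 8-bit binary of each character's code point.
message = "Hello"
bits = " ".join(format(ord(ch), "08b") for ch in message)
print(bits)  # 01001000 01100101 01101100 01101100 01101111
```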

What constitutes intelligence?

"I was thinking the same thing. Intelligence does not necessarily mean that you're intelligent because you're human."

Can an entity be intelligent without being self-aware? No machine is truly an intelligent entity (exhibiting its own intelligence and not the intelligence of its human creator) unless it understands what it is. Does a machine that passes the Turing Test know what it is, or is it just some really impressive coding?

The Turing test, as designed, tests for a convincing simulation of dialog by a computer, does it not? It's not a test of intelligence, and if it's not a test of intelligence, how can it be a test of ARTIFICIAL intelligence?

Let's go even deeper, what kind of intelligence is it that doesn't even realize it's acting?

"Do you need to be self-aware to be perceived as intelligent? I think that's probably a valid open question."

I know you don't need to BE self-aware to be perceived as intelligent.* The real question is are you really intelligent if you don't even know what you are?

So, is a good simulation intelligence?

*When I was a college student I wrote a ridiculously simple BASIC program I named A Session With Doctor Feldstein. It provided a simulation of a session with a psychiatrist. It had a lexicon of key words (mother, father, brother, sister, teacher, fear, dream, etc.), and when a key word came up, there was a small set of responses. For example, if the word "sister" appeared, Feldstein might say "Tell me about your sister" or "How do you feel about your sister?" and then it would look for key words in the answer, which would prompt more questions. If no key word turned up, in typical psychotherapist fashion, the program might say "Tell me more" or "Could you expand on that?" or even "Hmm" followed by a question retrieved at random. One could play with it for a while before it became clear there was no real therapist asking the questions, usually when redundant exchanges started happening. However, that was due to the simplicity of the program more than anything. A lot more code and/or more sophisticated coding would have made it proportionally more believable. In no way, though, was the program "intelligent" on its own. Any intelligence it displayed was mine, not the program's.

OMG, I can't believe how many little errors I made in the last paragraph. It's too late to edit it now. I hope I'm still understood. 
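For anyone curious, a rough modern sketch of the kind of keyword-matching program described above might look like the following (the original was a few lines of BASIC; the keywords, canned responses, and fallbacks here are only illustrative, not the original's):

```python
import random

# A crude keyword-matching "therapist" in the spirit of Doctor Feldstein.
KEYWORD_RESPONSES = {
    "mother": ["Tell me about your mother.", "How do you feel about your mother?"],
    "sister": ["Tell me about your sister.", "How do you feel about your sister?"],
    "dream":  ["Do you dream often?", "What do you think that dream means?"],
    "fear":   ["When did you first notice that fear?", "What do you think causes it?"],
}
FALLBACKS = ["Tell me more.", "Could you expand on that?", "Hmm. Go on."]

def doctor_feldstein(user_input):
    """Return a canned response for the first keyword found, else a generic prompt."""
    text = user_input.lower()
    for keyword, responses in KEYWORD_RESPONSES.items():
        if keyword in text:
            return random.choice(responses)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print("Doctor Feldstein: What brings you here today?")
    while True:
        reply = input("> ")
        if not reply:
            break
        print("Doctor Feldstein:", doctor_feldstein(reply))
```

As described above, the small, fixed set of responses is exactly what gives the game away once the exchanges start repeating.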

No, the Turing Test is not a test of computer dialog. The objective is not to test whether the computer can make realistic conversation. The test is whether the computer can make the questioner believe that the party responding is not a machine. I might be a machine, for instance. How would you test this and make a judgement?

In the 1980s and 90s, many trivial 'dialogue'-type programs were written. They all failed the Turing Test because they generally relied on cheap tricks, as you suggested - like responding with a question. In that case, the questioner could put in some garbage, the machine wouldn't know and would simply respond with more garbage output, and it became trivial for the questioner to tell he/she was dealing with a machine.

You're also running the risk of trying to solve the problem with a body of code - simply writing more code to catch more cases. In practice, the way AI is being approached is by building statistical connections between things and using techniques like Bayesian inference to connect ideas. Google search, natural language processing, Facebook's face recognition, and IBM's Jeopardy machine are where this is going. The machine is given rules and methods for finding patterns in data, then set a task and left to draw its own conclusions.
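As a toy illustration of what "Bayesian inference to connect ideas" means (invented numbers, not any particular product's method), here is a single Bayesian update: how much more strongly should the machine believe a hypothesis after seeing one piece of evidence?

```python
# One-step Bayesian update with illustrative numbers only.
prior_h = 0.30          # P(H): prior belief in the hypothesis
p_e_given_h = 0.80      # P(E|H): likelihood of the evidence if H is true
p_e_given_not_h = 0.10  # P(E|~H): likelihood of the evidence if H is false

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)  # total probability of E
posterior_h = p_e_given_h * prior_h / p_e                      # Bayes' rule: P(H|E)

print(f"P(H|E) = {posterior_h:.2f}")  # belief rises from 0.30 to about 0.77
```

Chaining many such small updates over large bodies of data is, roughly, how these systems "draw their own conclusions" without an explicit rule for every case.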

Does the machine have to have a view of itself to pass the Turing Test? Open question.
