When I was in college, I had a lot of computer science and psychology majors as friends, and they all seemed starry-eyed in their fascination with (and, I would say, duped by) Alan Turing's famous test for machine intelligence (or artificial intelligence, as many would call it).

Putting it into the terms of our day, suppose you are in a chat room interacting with someone you've never met, and the only way you can communicate is by typing text back and forth. If after a sufficient period of time you can't tell that you're interacting with a machine, voila! artificial intelligence!
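For the record, the setup itself is easy to sketch in code. Here's a minimal, purely illustrative Python version of the text-only game; the canned_reply function is my own invention, a stand-in for whatever machine is being judged:

```python
import random

def canned_reply(message):
    # Stand-in for the machine under test; a real candidate
    # would be far more sophisticated than canned phrases.
    return random.choice([
        "Interesting. Tell me more.",
        "Why do you say that?",
        "I hadn't thought of it that way.",
    ])

def imitation_game(rounds=5):
    # The judge types messages, reads replies, and at the end
    # must guess whether the hidden partner was human.
    for _ in range(rounds):
        message = input("You: ")
        print("Partner:", canned_reply(message))
    return input("Was your partner human or machine? ")
```

The machine "passes" when, after enough rounds, the judge can't do better than chance at telling it from a human.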

One argument against the Turing Test: if such a machine were invented, what's the difference between it and a really good simulation?

The philosopher John Searle proposed the Chinese Room thought experiment in response. This short video explains it.

More detail: The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a paper in 1980 by American philosopher John Searle (1932- ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument. (source)
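Searle's syntax-without-semantics point can be caricatured in a few lines of Python. The rule book below is invented purely for illustration; the point is that nothing in it represents meaning, only shape-matching on symbols:

```python
# A caricature of Searle's rule book: purely syntactic
# string-matching, with no representation of meaning anywhere.
# The entries are invented for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def room(symbols_under_door):
    # The person in the room just matches shapes to shapes.
    return RULE_BOOK.get(symbols_under_door, "请再说一遍。")  # "Please repeat."

print(room("你好吗？"))  # fluent-looking output, zero understanding
```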


Replies to This Discussion

Computers already incorporate probability (randomness); it's part of seeding the random number generators used for secure communications. But I think this isn't what you meant?
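For what it's worth, that kind of randomness is exposed directly in modern languages; Python's standard-library secrets module, for example, draws from the operating system's secure generator:

```python
import secrets

# Draw from the OS's cryptographically secure randomness
# source, the kind used for keys, tokens, and nonces.
session_token = secrets.token_hex(16)  # 32 hex characters
coin_flip = secrets.randbelow(2)       # 0 or 1, unpredictable

print(session_token, coin_flip)
```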

Since humans are able to manipulate and change DNA (genetic modification) and have created artificial DNA, would an artificially created computer built on DNA be considered a machine?

Let's say a neuropsychologist invents a mind-altering drug whose effects on a human brain only he (so far) understands. Its effects are immediate, say within a minute, and can only be recognized if a human taking it responds a certain way to a specific kind of emotional question. (This is actually not a far-fetched scenario.)

He's allowed to ask the subject of the Turing test to ingest the drug, and then ask it the question that elicits a human vs. non-human response.

This becomes an issue of questions like: what is human, what is animal, what is emotion, what is true empathy, and so on. We will be able to keep moving the goalposts forward wrt what these things (and "what is AI") mean for quite a while. A new question that might pop up is: what can or should we do (if anything) to prevent state-of-the-art AI from posing as a real human? If or when does it become "fraud," once it can fool a large number of people, customers, or fans and followers? (There's probably more to this, but this is the first I've thought of this scenario.)

The thing is, it gets really, really complicated. Humans are shaped not just by millions of years of evolutionary biology, in a world bathed with physical experiences and penalties for failing to adapt to a physical environment and to other beings, but also by childhood, in which the sense of reality and connection to other beings takes several years to develop and express. I feel strongly that brainy people largely underestimate how much those millions of years of evolution and several years of childhood uniquely shape each human, and the geeks especially dream of just downloading and uploading all of that analog data to a sufficiently well-designed electronic device, a digital one, even.

When will or should such a device gain protection by the state as a living being?

I'm just saying, there will be far more complex ways to assess how human an AI machine is than a Turing test, and the questions wrt what it all means will also get increasingly complex.

Is the word "wrt" in your last paragraph a typo? What does it mean?

wrt = with respect to

@Gary Clouse — (moving discussion back to the top level) The problem, it seems, lies in how we define intelligence. A part of it is most certainly found in recognizing patterns to trigger specific responses. Computer science has that down pretty well. I think the stumbling block is this vague concept of "self awareness".
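That kind of pattern-to-response triggering is trivial to demonstrate. Here's a minimal ELIZA-style sketch in Python (the patterns are invented for illustration); it produces plausible replies with no self-awareness anywhere in it:

```python
import re

# Invented pattern/response pairs; each regex triggers a
# canned reply, with no model of meaning behind any of it.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
    (re.compile(r".*"),                  "Please go on."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())

print(respond("I feel misunderstood"))  # -> "Why do you feel misunderstood?"
```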

Mark Tilden is an experienced robotics developer whose designs mimic natural mechanics. Many of his designs show adaptive behaviors without the need for programming a computer.

Mark Tilden on Artificial Life

Language is often a very fuzzy thing, and the word "artificial" is hardly a cast-iron word. "Artificial" comes from "artifice," which comes from "art" in art's sense as "handicraft" or "made by hand," which I take to be synonymous with "made by man."

Take an artificial leg: it is a made-by-man substitute meant to mimic a real leg. To the extent that an artificial leg functions to let a man walk, in a sense it IS a leg, and yet clearly it is also true in other ways that it IS NOT a leg. It is a limited functional substitute for a leg. If we develop the technology to make the body grow a new leg, I submit we wouldn't call the result "artificial" or even a "substitute." It would just be a repaired or fixed leg.

However, intelligence is a concept, not a physical object, so just how to apply artificiality to it isn't straightforward. As a concept, intelligence needs both a definition and a metric. The definition tells you what to look for, and the metric allows you to test and measure it.

I suppose a lot of people would say Watson is the smartest computer in the world right now, and yet the only skill it has, as far as I can tell, is that of absorbing, organizing, and retrieving facts, the way it did to win at Jeopardy!. Chess, BTW, is exactly the sort of game where a calculating machine (like IBM's earlier Deep Blue) could have an unfair advantage over a human.

Going back to Tilden's short video: recognizing machines like his little snakes and bugs as examples of "life" violates our current definition of life, which only recognizes meat life, not mineral life. A machine can meet some aspects of this biological definition, but not others:

  • an organized structure performing a specific function
  • an ability to sustain existence, e.g. by nourishment
  • an ability to respond to stimuli or to its environment
  • capable of adapting
  • an ability to germinate or reproduce (source)

Meat life can reproduce; Tilden's machines cannot. We could call them "artificial life," which in the English language is another way of saying "like life but not really life." I'm pretty sure "artificial intelligence" meets the same fate.

Let's think about Isaac Asimov's robot, which doesn't look human but becomes beloved by the humans who own him. Can we not love a robot without applying the term "life" to it? I think we can. If we can't, then we might end up with a slavery problem and a slave rebellion.

But even if machines became smart enough to rebel, does even THAT mean they are living entities, or just a technology that we let get out of control?

Nice article. Semantics aside, creating a machine that is conscious is an impossibility.

This is much shorter than the reply my email showed me. 

Is it, though, impossible to create a machine that's a more than adequate SIMULATION of consciousness in much the same way that we can have pseudorandom numbers which are good enough for applications requiring randomness?
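The pseudorandomness analogy is easy to make concrete. A linear congruential generator, for instance, is completely deterministic, yet its output looks random to a casual observer; here's a sketch in Python using the classic Numerical Recipes constants:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # A linear congruential generator: fully deterministic,
    # yet its output passes casual inspection as random.
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # a float in [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
# Re-seeding with 42 reproduces the exact same "random" stream.
```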

I think it comes down to definitions. If one wants to say the predetermined responses programmed into a computer can fool a user, then the conditions have to be defined. Chomsky and some other linguists have already posed language problems that can only be passed by creating novel word combinations, which computers are incapable of doing.

....novel word combinations, which computers are incapable of doing.

That strikes me as laughably false, unless it means something other than what it seems to mean. Take an Encyclopedia Britannica-sized collection of words, apply some sort of randomizing software, and you'll almost certainly come up with novel combinations.

Also, why couldn't a computer create a novel combination by creating a neologism?
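To make the point concrete, here's a toy Python sketch of both moves: random recombination and a crude neologism. The word lists are mine, invented for illustration; whether the output is UNDERSTOOD is, of course, exactly what's in dispute. Producing novel strings is the easy part:

```python
import random

adjectives = ["glass", "furious", "silent", "recursive"]
nouns = ["river", "engine", "sparrow", "theorem"]
prefixes = ["hyper", "un", "proto", "meta"]

# A novel word combination: most random adjective-noun pairs
# have never been written before.
print(random.choice(adjectives), random.choice(nouns))

# A crude neologism: glue a prefix onto an existing word.
print(random.choice(prefixes) + random.choice(nouns))
```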

It's pretty fundamental in the A.I. debate. Linguists point out that human speakers can form sentences using novel word combinations and be perfectly understood. That's a simple test of the ability to think. There is no computer program that can do it.

But "to be understood" PRESUPPOSES consciousness and self-awareness on the part of the speaker, does it not? And because of that, it ends up begging the question.

So, that gets us nowhere.

I'm not sure it's necessary to show the impossibility of A.I. by getting into the question of whether we presuppose consciousness, or, for that matter, apply Kant's explanation of a priori knowledge.

It's a simple matter that you and I are capable of forming sentences with new word combinations that are understood. Computer programs cannot. It sounds like a simple task, but it requires a dimension that software commands do not have, which is consciousness.
