When I was in college, many of my friends were computer science and psychology majors, and they all seemed starry-eyed about (and, I would say, duped by) Alan Turing's famous test for machine intelligence (or artificial intelligence, as many would call it).

Putting it into the terms of our day: suppose you are in a chat room interacting with someone you've never met, and the only way you can communicate is by typing text back and forth. If a sufficient period of time passes and you can't tell that you're interacting with a machine, voila! Artificial intelligence!

One argument against the Turing Test: if such a machine were invented, what's the difference between it and a really good simulation?

The philosopher John Searle proposed the Chinese Room thought experiment in response.

More detail: The argument and thought experiment now generally known as the Chinese Room Argument was first published in a paper in 1980 by American philosopher John Searle (1932- ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room.

The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the "Turing Test" is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.

The broader conclusion of the argument is that the theory that human minds are computer-like computational or information-processing systems is refuted. Instead, minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument. (source)
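Searle's point about syntax without semantics can be illustrated with a toy sketch. The "room" below answers Chinese input by pure table lookup; the rulebook entries are hypothetical illustrations, not Searle's actual setup, but they show how plausible replies can be produced with no model of meaning at all.

```python
# A toy "Chinese Room": replies are produced by pure symbol lookup.
# The rulebook entries here are invented for illustration.

RULEBOOK = {
    "你好": "你好！",              # greeting in -> greeting out
    "你是谁？": "我是谁并不重要。",  # "Who are you?" -> "Who I am doesn't matter."
}

def room_reply(symbols: str) -> str:
    """Return whatever string the rulebook dictates; no semantics involved."""
    # Default rule for unknown input: ask the interlocutor to repeat.
    return RULEBOOK.get(symbols, "请再说一遍。")

print(room_reply("你好"))        # looks fluent from outside the door
print(room_reply("天气如何？"))   # unknown input falls back to the default
```

The lookup table plays the role of Searle's program: whatever fluency the room appears to have lives entirely in the rules, not in any understanding by the rule-follower.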



Replies to This Discussion

Unseen, most determinists would not consider a malfunction an exception, but the culmination of prior events and outcomes.

When I speak of a malfunction, I mean an exception or interruption to gross-level Newtonian/Einsteinian determinism due to something happening on the quantum level. 

Of course, everything that happens on the gross level is totally driven deterministically by antecedent conditions, but a subatomic event can intrude on the gross level, breaking the normal deterministic causal chain.

So, gross level determinism continues driven by antecedent events, but something outside (underneath) that level can intrude and create exceptions. The gross level, however, only knows one way to respond: deterministically.

It's the theists and dualists who think we are more than this. I think the ball is in their court to show how.

I have a problem with this. (Can you fix me?!)

If we're no more than machines ourselves, what gives any of us the right to exist, rather than just being "turned off" at another's whim, other than tradition and/or law? I mean, this is an extreme example, but what is it that certifies any human being (and pets and research animals and... fill in the blank) as having or deserving the right to live long and prosper (as Spock would say)?

Maybe the root of my problem is that humans and machines may eventually converge in characteristics to the point where today's human (including a jury member or court judge) wouldn't be able to tell the difference. This philosophical issue could someday be confused by such realities, or should I say, confused and complicated by specious human and/or AI motivations?

The obvious answer is 'society' and socially learned morality: not simply turning people off (as has been done countless times through history) has a better societal outcome than murder, death and genocide.

We also have a more complex relationship with things. If someone suggested 'turning off' the Mona Lisa (i.e. destroying it), we would be horrified. In other words, we are capable of extending moral norms to things as much as to people. However, the big difference between a machine and a person is that we can make a copy of the machine; if we have a verbatim copy, do we worry about turning the original off?

The obvious answer is 'society' and socially learned morality: not simply turning people off (as has been done countless times through history) has a better societal outcome than murder, death and genocide.

You are still talking as though whatever we end up doing, we could have controlled ourselves to do otherwise.

We also have a more complex relationship with things. If someone suggested 'turning off' the Mona Lisa (i.e. destroying it), we would be horrified. In other words, we are capable of extending moral norms to things as much as to people. However, the big difference between a machine and a person is that we can make a copy of the machine; if we have a verbatim copy, do we worry about turning the original off?

But is any of that actually, truly, at the deepest levels, volitional? If one is a determinist, it seems to me, volition becomes a myth. Whatever one does is due to the antecedent circumstances.

But is any of that actually, truly, at the deepest levels, volitional? If one is a determinist, it seems to me, volition becomes a myth. Whatever one does is due to the antecedent circumstances.

I think we can agree you won't find volition at deep levels. Why would you be trying to find it there, as opposed to, say, an emergent level?

People are meat computers, and we are still fabulously more complex and powerful than mineral computers. The mineral computer uses on and off (1 and 0) to do calculations and neurons function similarly but in a much more chaotic way that, it turns out, is also much more powerful.

If we're no more than machines ourselves, what gives any of us the right to exist or not just be "turned off" by another's whim, other than tradition and/or law?

There are only two kinds of rights: legislated and imaginary.

There are only two kinds of rights: legislated and imaginary.

Yeah, I stepped right into that one. I'll backpedal on it until I can properly communicate what I think I'm thinking. Having scrambled brains for lunch right now... might help.

as having or deserving the right to live well and prosper

The only rights are divine or by law. Since we are atheists, I'm sure you can guess which one of those we can safely ignore.

True.

From Saul:

And it's quite possible that we really are just a machine. One put together by DNA. But a mere machine in reality. It's the theists and dualists who think we are more than this. I think the ball is in their court to show how.

I'm recently leaning more toward discovering and explaining what makes us "more than this" with an atheist's/non-dualist's perspective. After evolving naturally over billions of years, we're now fighting the naive and stalwart insistence that some kind of divine, perfect consciousness has designed us, and in some cases people even believe our meat bodies deserve no special time on earth before reaching heaven or hell. We may someday be threatened by people who believe that machines themselves can have "souls", especially (say) if they've been baptised, or can instantly recite and translate volumes of holy text. And/or what's to stop machines one day from designing machines that believe in Jihad more than they believe in natural history?

Meanwhile (and therefore?), isn't discussion about determinism pretty irrelevant to what separates us from intelligently designed machines? Considering all the things that have or could have challenged the evolution of life over the past billions of years, we should at least recognize how "special" we are in that regard, and not automatically assume that any machine we make deserves our divine blessing of human rights... especially if (say) we're running a huge, international corporation that produces them.

We are analog computers, our software is our DNA and the millions of years it took to come up with our current version. We are primarily programmed to survive so we can reproduce. Like digital computers, our processing and input/output systems run on electricity and require power to operate. Like digital computers we can break down, and we both age with time.

Self awareness became one of our rather unique function calls, but it is certainly not required for biological processors, and may even prove to be fatal.

But DNA is DNA and anything else is not. So yeah, there is nothing stopping a digital processor from becoming self-aware, or from appearing to do anything we do, given enough time to teach it, but it's still not a DNA-based machine. It will always be different. To even approach humanness, a computer would also have to be able to add probability (randomness) to its reactions so that it approximates the range of reactions a human may have to the same inputs presented over and over. Our brain is subject to so many tiny variables that, if the stimulus is at all subtle, it's like predicting the weather on this same day next year.
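The "add probability to its reactions" idea can be sketched as weighted random sampling. In this minimal sketch, the reactions and their weights are made up for illustration; the point is only that identical stimuli then produce varying responses drawn from a distribution rather than one fixed output.

```python
import random

# Hypothetical reaction distribution for a single stimulus; the
# reactions and weights below are invented for illustration.
REACTIONS = ["laugh", "shrug", "frown"]
WEIGHTS = [0.5, 0.3, 0.2]

def react(stimulus: str, rng: random.Random) -> str:
    """Sample a reaction, so repeated identical inputs vary in output."""
    return rng.choices(REACTIONS, weights=WEIGHTS, k=1)[0]

rng = random.Random(42)  # seeded only so a demo run is reproducible
print([react("same joke", rng) for _ in range(5)])
```

A deterministic lookup would return the same reaction every time; sampling from a distribution is one crude way to approximate the spread of human responses the post describes.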


© 2021   Created by Rebel.
