Talk:Chinese room

WikiProject Philosophy (Rated B-class, High-importance)
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
This article has been rated as B-Class on the project's quality scale and as High-importance on the project's importance scale.
Additional information: this article is supported by the Logic, Philosophy of mind, Analytic philosophy, and Contemporary philosophy task forces.

Need to say that some people think that Searle is saying there are limits to how intelligently computers can behave

Similarly, some people also lump Searle in with Dreyfus, Penrose and others who have said that there are limits to what AI can achieve. This will also require some research, because Searle is rarely crystal clear about this. This belongs in a footnote to the section Strong AI vs. AI research. ---- CharlesGillingham (talk) 21:37, 10 February 2011 (UTC)

He seems clear enough to me: he doesn't claim that there are limits on computer behavior, only that there are limits on what can be inferred from that behavior. Looie496 (talk) 23:24, 5 April 2011 (UTC)
Yes, I think so too, but I have a strong feeling that there are some people who have written entire papers that were motivated by the assumption that Searle was saying that AI would never succeed in creating "human level intelligence". I think these papers are misguided, as I take it you do. Nevertheless, I think they exist, so we might want to mention them. ---- CharlesGillingham (talk) 08:19, 6 April 2011 (UTC)
Is this the same as asking whether computers can understand, or whether there are limits to their understanding? What does it mean to limit intelligence, or intelligent behaviour? Myrvin (talk) 10:16, 6 April 2011 (UTC)
There is, e.g., this paraphrase of Searle: "Adding a few lines of code cannot give intelligence to an unintelligent system. Therefore, we cannot hope to program a computer to exhibit understanding." Arbib & Hesse, The Construction of Reality, p. 29. Myrvin (talk) 13:19, 6 April 2011 (UTC)
I think that, even in this quote, Searle still holds that there is a distinction between "real" intelligence and "simulated" intelligence. He accepts that "simulated" intelligence is possible. So the article always needs to make a clear distinction between intelligent behavior (which Searle thinks is possible) and "real" intelligence and understanding (which he does not think is possible).
The article covers this interpretation. The source is Russell and Norvig, the leading AI textbook.
What the article doesn't have is a source that disagrees with this interpretation: i.e., a source that thinks Searle is saying there are limits to how much simulated intelligent behavior a machine can demonstrate. I don't have this source, but I'm pretty sure it exists somewhere. ---- CharlesGillingham (talk) 17:32, 6 April 2011 (UTC)
Oops! I responded thinking that the quote came from Searle. Sorry if that was confusing. Perhaps Arbib & Hesse are the source I was looking for. Do they believe that Searle is saying there are limits to how intelligent a machine can behave? ---- CharlesGillingham (talk) 07:34, 7 April 2011 (UTC)
See what you think, CG. It's in Google Books at: [1]. Myrvin (talk) 08:27, 7 April 2011 (UTC)
Reading that quote one more time, I think that A&H do disagree with the article. They say (Searle says) a computer can't "exhibit understanding". Russell and Norvig disagree (I think). They say (Searle says) even if a computer can "exhibit" understanding, this doesn't mean that it actually understands.
With this issue, it's really difficult to tell the difference between these two positions from out-of-context quotes. If the writer isn't fully cognizant of the issue, they will tend to write sentences that can be read either way. ---- CharlesGillingham (talk) 19:27, 12 April 2011 (UTC)

neural computator

If I could produce a calculator made entirely out of human neurons (and maybe some light-emitting cells to form a display), could I then finally prove humans are not intelligent? :P Such a biological machine would clearly possess only procedural capabilities and have a formal syntactic program. That would finally explain why most people are A) not self-aware and B) not capable.

You people do realize that emergent behavior is not strictly dependent on the material composition of its components, but rather emerges from the complex network of interactions among said components? Essentially the entire discussion is nothing more than a straw man. — Preceding unsigned comment added by 195.26.3.225 (talk) 13:23, 30 March 2016 (UTC)

Exactly!
Reducing to the absurd in another way, Searle's argument is like extending a neuron's lack of understanding to the whole brain. 213.149.61.141 (talk) 23:48, 27 January 2017 (UTC)
But, to respond to Searle, you have to explain exactly how this "emergent" mind "emerges". You rightly point out that there is no contradiction, but Searle's argument is not a reductio ad absurdum. The argument is a challenge: what aspect of the system creates a conscious "mind"? Searle says there isn't any. You can't reply with the circular argument that assumes "consciousness" can "emerge" from a system described by a program on a piece of paper. ---- CharlesGillingham (talk) 21:58, 18 May 2018 (UTC)

Can someone please explain, in HS English, why this isn't total nonsense?

So Searle doesn't understand Chinese, but the program does; what is the big deal? That is like saying you can't do sign language because your hands don't understand ASL. I'm sorry, I'm not trying to be rude, but this seems like a total waste of time; I must be missing something. — Preceding unsigned comment added by 50.245.17.105 (talk) 22:12, 1 March 2017 (UTC)

I'm afraid that you're not going to get a satisfying reply, because there isn't one. I don't remember who it was exactly, but I remember one philosopher saying that the Chinese room argument is so profoundly and intricately stupid that the whole field of philosophy of mind was reorganized around explaining why it's so completely wrong. In my experience, most philosophers think this argument is really terrible, but there can be value in articulating why. Other than that, yeah, Searle is mostly wasting everyone's time. Also, "Chinese" isn't a language, but that's beside the point.--ScreamingRobot (talk) 07:19, 19 June 2017 (UTC)
Plenty of experts consider it to be weak, making a similar argument. Our own brain consists of "parts using parts", so to speak. I'm actually surprised how little this is represented in this article. I study Artificial Intelligence, and you'd think it had already been disproved after hearing all the counter-arguments. Wikipedia seems pretty out of date here. Bataaf van Oranje (Prinsgezinde) (talk) 17:59, 18 October 2017 (UTC)
I think the issue comes from the use of "understand". Take the statement "Searle can't translate words from Chinese to English but the program can", as opposed to "Searle can understand the poem is beautiful but a program can't". If the poem is in Chinese then English-only speakers can't really do anything with it. But if you know Chinese then you can understand what is being communicated in the poem and realize that it's beautiful. The thought exercise was initially proposed as a means of separating the syntax of words from the thought processes those words convey. Does that help? Padillah (talk) 15:06, 19 October 2017 (UTC)
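(Illustrative aside: the "syntax without semantics" point above can be made concrete with a toy sketch. The Python snippet below is only a hypothetical illustration, not anything from Searle or from the article, and the phrase list is invented. It answers Chinese input by pure string lookup, so it "exhibits" a sliver of Chinese-speaking behaviour while representing nothing about what the symbols mean.)

# A purely syntactic "rule book": input symbols are matched and copied, never interpreted.
# All entries are made up for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会一点。",      # "Do you speak Chinese?" -> "A little."
}

def chinese_room(message: str) -> str:
    # The operator (or CPU) only compares and copies strings; no meaning is involved.
    return RULE_BOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding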


Smart computer scientists will tell you the argument is irrelevant to artificial intelligence. This is not the same as saying that it has been refuted. Searle would agree that this argument doesn't try to put any limits on how intelligent or human-like machines will be in the future.
Contrary to the post above, philosophers take the argument very seriously, because it is an illustration of the hard problem of consciousness, which may be the single most important problem in philosophy today. See also Mary's room for another illustration.
Re "parts using parts": the question at issue is how could a program create consciousness? What is it about the "whole" that might make it different than the parts? There is no easy answer to this, and as long as there isn't, the argument remains unrefuted. ---- CharlesGillingham (talk) 05:50, 28 December 2017 (UTC)
These are my thoughts on the matter. I think the main question is about consciousness, i.e., "is the system conscious"? Now, "consciousness" is not well defined and can only be understood in the intuitive sense. Is there a reason to think that a system composed of cards, a man dealing them, the rule book, etc. is consciously understanding Chinese, while the parts of the system don't? Even if you apply it directly to computers, we know computers are nothing more than high-voltage, low-voltage signaling machines - everything else one might say about computation is just interpretation. And this could be replaced by anything able to reliably represent two distinct values (I figure instead of electricity you could use another type of energy). Is there a reason to think this sequence of two distinct values might, as a system, become conscious?
The only thing we intuitively know is conscious is us human beings, and for a variety of reasons we infer that this consciousness emerges from and depends on the brain. Or, you might say, the brain is conscious. Now the brain is also a heap of cells, and we can agree this heap of cells somehow becomes conscious. Seeing as this is the only conscious entity we are really sure of, it would be safest to infer that another entity is conscious only if it contains the same causal mechanism that the brain has to create consciousness. So, it would be intellectually honest to say that a computer (however implemented) is conscious only if it fulfills the requirement stated above. The problem is, it seems, that no one knows what those causal mechanisms are. This question, I think, might only be answered by advances in the field of neuroscience, if it turns out that the mechanism which creates consciousness is computational, i.e., that the part of the brain which creates consciousness is a computer.
Until this happens, I see no honest reason to assume that computers are conscious.
So you think that the safest is the only intellectually honest thing one can argue? --ExperiencedArticleFixer (talk) 09:00, 29 August 2019 (UTC)
My personal opinion is that computers (or computer programs) are not conscious, so Searle's argument holds. At the very least, there is no reason to assume that they are.
On a different note, I think the root cause of this problem is actually the subjectivity vs. objectivity problem, but that is beyond the scope of this discussion. Dannyfonk (talk) 09:18, 31 December 2017 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Chinese room. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

☑ An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 19:51, 11 January 2018 (UTC)