Tom at Mindhacks mentioned the Chinese room argument today. He says that he found the argument “confused, and ultimately frustrating”. I can empathise. I suspect I’ve missed something given the volume of ink that’s been spilled on the subject, but here’s a paragraph or two of a “so what”.
To me the man in the room with the books is analogous to a subpersonal brain process. We’d never argue about whether a set of neurons in our brains “understands” anything, though in the midst of an explanation we might use “understands”, as we do “decides”, just to get a concept across quickly. So it’s fine that the bloke with the rulebook doesn’t understand. He doesn’t have his bloke hat on now; he’s pretending to be a subbloke.
Is the room then a person? It’s difficult to imagine talking to a room, so let’s wade across to strong AI land. Trying to empathise with a counterfactual future me who speaks Chinese and is talking to a computer in Chinese, I’m fairly sure that if the computer made me giggle occasionally and didn’t loop every few minutes, I’d say it did indeed understand Chinese.
Does it have a mind? Dunno.
Suspect I wouldn’t have got this commentary added to the original BBS article…