Journal Entry

Friday, September 29, 2006

on "On certain philosophical arguments against machine consciousness"

Dave Moles has transferred his blog to a new server, so I can't comment on this post over there any more. So it'll have to be here.

While I like Aaronson's post on Alan Turing as a moralist, and I do think there's a beauty in his view of the Turing test, I'm somewhat more sympathetic to what I think Halpern may be saying, or at least what I think Halpern should be saying; I think "computers will never think" is not a nonsensical, nor necessarily a bigoted, position, and I think the Turing test, as usually understood, is not necessarily really equivalent to the principle of judging people on their behavior.


My own money is on "strong AI, but no time soon". I would be surprised, but not incredulous, if we had computers that would regularly impress us as being "smart like people" on Vinge's timeline of 2030 or so; I would also be surprised, but not incredulous, if after thousands of years of effort the problem of machine intelligence turned out to be intractable. Or uninteresting -- fundamentally uninteresting, i.e. not on any chain of possible historical-technological progression.

It's clear that thought can be a property of a physical system, since we have one physical system -- us -- that it's a property of. Thus it seems obvious to me that it's possible *in principle* to build such a physical system. I can't credit a position which says "only God can make a thinking physical system, and it has to look just like us".

But that's a pretty big "in principle". I can credit a position that says "you could build an intelligence, but AI as we know it is barking up the wrong tree".

Let's say you are hanging around Europe in 1500 or so and you meet a clockmaking fan. He's just seen the automata who parade around the Zytglogge in central Bern on the hour, and he says to you excitedly, "one day clockwork men such as these will be able to draw a thousand men's portraits in the blink of an eye, and solve algebraic theorems, and if you name any phrase, they will give you a list of every book it was ever writ in!"

Leaving aside the fact that he's clearly nuts -- would he be right?

Sort of depends what you mean by "clockwork men", doesn't it?

The Vinge/Kurzweil 2030-or-so deadline for strong AI is based on the notion that the critical issue is processing power -- that once your computer is doing as many operations per second as a human brain -- or a million times as many, or whatever -- it should be relatively straightforward to get it to "think", whatever thinking is. As a software guy, I feel like this is something like saying, "well, we have enough paint, so now getting the Mona Lisa should be trivial."

The issue isn't computers being "as smart as" us. "Smart" is not really linear that way. If you had to pick some way of comparing totally heterogeneous creatures on "smart", processing power is a reasonable metric. If you have to decide whether frogs are "smarter" than octopuses or vice versa, processing power is as good a method as any.

So in those terms, all you need to know when computers are going to be "as smart as" us is Moore's Law.
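
(The arithmetic behind that kind of deadline is, in fact, trivially easy to write down -- which is sort of my point. A minimal sketch, in Python, where every number -- brain ops per second, today's hardware, the doubling time -- is a rough guess of the kind people throw around, not a measurement:

import math

BRAIN_OPS_PER_SEC = 1e16      # one commonly cited guess, nothing more
CURRENT_OPS_PER_SEC = 1e11    # a present-day desktop, give or take an order of magnitude or two
DOUBLING_TIME_YEARS = 1.5     # the usual Moore's Law figure

doublings = math.log2(BRAIN_OPS_PER_SEC / CURRENT_OPS_PER_SEC)
print(f"{doublings:.0f} doublings, about {doublings * DOUBLING_TIME_YEARS:.0f} years")
# with these particular guesses: about 17 doublings, about 25 years -- i.e., "2030 or so"

Which is exactly how you land on a date like 2030. The calculation is the easy part; the paint is not the painting.)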

But octopuses are not going to be passing the frog equivalent of a Turing test any time soon -- and nor are we, for all that we are "smarter". The Turing test is, in fact, a good measure of what people intuitively mean by "artificial intelligence" precisely because it doesn't measure when computers are "as smart as" us, but rather when they are "smart like" us.

Or when they can pretend they are.

As cute as I find this


There’s a story that A. Lawrence Lowell, the president of Harvard in the 1920’s, wanted to impose a Jew quota because “Jews cheat.” When someone pointed out that non-Jews also cheat, Lowell replied: “You’re changing the subject. We’re talking about Jews.” Likewise, when one asks the strong-AI skeptic how a grayish-white clump of meat can think, the response often boils down to: “You’re changing the subject. We’re talking about computers.”

it's disingenuous for two reasons. First, you can contradict Lowell by pointing to extant honest Jews. But you can't contradict Halpern by pointing to extant real thinking computers (you have to point to posited future thinking computers, and that doesn't refute his point that it may be a waste of a great deal of time and money looking for them). And second, whatever Lowell thought, Jews in fact have enormous morphological similarities with other Harvard students. We can extrapolate confidently from what we know about honest Gentile Harvard students to predict things about honest Jewish Harvard students, on the basis of this morphological similarity.

Let's say that I am a proponent of Teapot AI, which is the theory that under the proper heat and pressure conditions, the steam inside an ordinary kitchen teapot will spontaneously self-organize into an intelligent system which will know English and be able to talk, out of its whistle, in a way that would convince an interlocutor that the teapot was a well-educated, upper-middle-class American male in his mid-thirties, were it not for the voice being so whistly. And I am speaking with a skeptic of Teapot AI.

Me: Teapots will think!

Teapot AI Skeptic: No they won't.

Me: Yes they will!

T.A.I.S.: That makes no sense. Teapots are nothing like people. Steam can't think.

Me: Bigot! How is it that a grayish-white clump of meat can think, then?

T.A.I.S.: I have no idea. But it certainly appears that it can.

Me: But if I could, theoretically, get a teapot to pass the Turing test, would you agree that it could think?

T.A.I.S.: Um. I think it would be more likely that it was a trick, actually.

Me: Why would you be more skeptical about a teapot thinking, if it appeared to think, than you would be about another human thinking? Are you a solipsist?

T.A.I.S.: No, but I have two reasons to believe that another human thinks; one, that the human behaves like someone who thinks, and two, that the human is extremely similar to me in design, and this morphological similarity makes it very likely that similar external behavior is produced by a similar internal process. And what I mean by "think" is not a set of behaviors, but an internal process.

Me: Why do you care what internal process is used? If the outward behavior were the same?

T.A.I.S.: I guess it partly depends how much behavior we're talking about. If I were to come to know and love the teapot after living with it for many years, I would probably come around to the conclusion that it "thought". But if you're talking about the teapot showing up in a chat room and fooling me for a half an hour into thinking it was a person? It would still be more likely that it was a trick. Because it seems so intrinsically unlikely that the "heat up a teapot and wait" process you're proposing would actually produce intelligence.

Me: Are you saying the brain is for some mysterious reason the only physical system that can be intelligent?

T.A.I.S.: Well, first, if what you mean by intelligent is crunching a lot of numbers, obviously not. If what you mean is "doing stuff that feels really human to us", it might be that that's just too hard to do with a different substrate -- that something else we build might be smart in the sense of really complex, but too *alien* to ever feel "like us". But more to the point, why teapots? Of all the possible systems?

Me: But you admit we can one day build intelligence!

T.A.I.S.: Maybe...

Me: So then it'll be a teapot! I mean, maybe there will be some cosmetic differences, or a few new techniques, but basically, it'll be a teapot, right?

T.A.I.S.: So not.

(Substitute "clockwork man", "computer", etc., for "teapot" as required.)


The thing is, the Turing test rests on the idea that humans are somehow hard to fool. I mean, chatbots today regularly pass the Turing test, but no one is proposing that they are actually "intelligent" in the nebulous way we mean when we fight about AI. But to our Bernese clock fan of 1500, they look *unimaginably* intelligent -- and Google looks Godlike. So why are we unsatisfied? Because we know it's a trick.

But arguably it will always be a trick. Not because computers can't be vastly more "intelligent" than we for some scrupulously fair, mathematical, non-ethnocentric meaning of "intelligent". But because they won't be us. And when you come right down to it, we may not actually mean anything by "intelligent", other than "like us".

Turing is worried about the robot child being teased in school; I wonder if this is not like expecting the spontaneously emerging teapot voice to just happen to be a white male middle class American. If the process by which the robot arose was in any sense "free", would it be in any danger of feeling hurt at being teased?

Or would we have to carefully arrange, parameter by parameter, petabyte of common sense knowledge by petabyte of common sense knowledge, for it to have something that looks like the experience of being teased? And if we have to go to such monumental efforts to arrange for the robot to feel teased, isn't it then odd to agonize about it?


I'm not saying that the strong-AI position is absurd either. Maybe if you have enough processing power and some general-purpose algorithmic goodies -- classifier systems, neural nets, ant colony optimization -- and you rig up a body and a simulated endocrine system and whatnot so that your robot can simulate "kid" well enough to embed itself, initially, in the social space of a schoolyard -- yeah, sure, maybe it'll quickly arrive at internal states and processes that mirror ours. Maybe the mind is mostly malleable, maybe the hardwired bits don't matter so much or are simple to fake, maybe the universe naturally tends toward our kind of intelligence, maybe we are both simpler and more general than we think, maybe there is so much redundancy in the social networks and the language and so on that any general-purpose algorithmic system you drop into a human culture ends up quickly humanized, like a sponge soaking up soup.

But it's certainly not the only option.

Posted by benrosen at September 29, 2006 11:55 PM
Comments

I'd like to point out that the difference between machine and computer is important and meaningful. Penrose, for instance, may have a good argument against computer consciousness (although I can't actually follow it myself). The "neurons are also made of matter" argument, on the other hand, even if some of the people who use it may narrow the discussion to computers, is really about machines -- which may or may not be Turing machines ("computers"). (The fact that much of Turing's paper chases down what Penrose would call the blind alley of digital computation as consciousness doesn't diminish the value of the philosophical parts of the paper.)

Most of your argument, including your self-as-teapot-advocate straw man, is about computers, and since I don't really disagree with you there, I'm basically going to ignore it. (Yay Internet!) My point in citing Aaronson has nothing to do with the question of whether computation can be consciousness.

And even if Aaronson believes that it does, Halpern is still wrong in ways that also have nothing to do with that question. I'm also very sympathetic to what Halpern should be saying, but Halpern isn't saying it. Instead he's saying this:

And most people in the computer age understand the distinction between living intelligence and the tools men make to aid intelligence -- tools that preserve the fruits of the human intelligence that went into building them, but which are in no way intelligent themselves.
And this:
What goes on in the Chinese Room or in the sine-function salesroom depends ultimately on the original geniuses, linguistic or mathematical, of whom we are the heirs.
And this:
And it seems likely that even the most impressive machines will never gain true independence from the genius of their creators -- and such independence is the sine qua non of winning and deserving the label “intelligent.”

Which, I submit, is roughly as philosophically incoherent as Lowell's position on Jews.

Posted by: David Moles at September 30, 2006 03:35 AM

Also, speaking of disingenuous:

Me: But if I could, theoretically, get a teapot to pass the Turing test, would you agree that it could think?
T.A.I.S.: Um. I think it would be more likely that it was a trick, actually.
It's not reasonable to say "pass the Turing test" if by that you mean "do something that looks maybe sort of like passing the Turing test only leaving plenty of room for a reasonable doubt as to whether it actually happened." In which case Teapot Advocate You is not asking the question he appears to be asking.

(By "trick", here, I assume you're talking about something like von Kempelen's Mechanical Turk. If you simply mean "some method of passing the Turing Test that does not require intelligence" -- cf. Chinese Room argument -- then T.A.Y. is asking the question he appears to be asking, but T.A.I.S. is being deliberately unhelpful in answering it.)

Posted by: David Moles at September 30, 2006 09:39 AM

David, can you elaborate on what you mean by the difference between computer and machine?

Ben, I think it's funny that you said

It's clear that thought can be a property of a physical system, since we have one physical system -- us -- that it's a property of.

when, if I had made a similar statement during our discussion about materialism, you would have said, "But what do you mean by 'physical'?" For the purposes of this discussion, however, I will assume that you are using the term as commonly understood.

A lot of people cite chatbots as passing the Turing Test for brief periods, but chatrooms are very different from the Turing Test. I'd say that, of the interactions that we commonly engage in nowadays, a better analog to the Turing test is an extended e-mail exchange. When was the last time you read an e-mail that you thought was sent to you by a person, but turned out to be entirely machine generated? How many replies did you make before you were able to figure this out? I'd wager it took zero replies; most people can identify spam right away. I doubt there exists software that could pass this test for long, and even this is far simpler than an actual Turing Test.

The intention behind the Turing Test was, I think, precisely to allow judgements based on behavior rather than appearance. The fact that there are far more modes of dehumanized interaction today than there were fifty years ago shouldn't distract us from that.

Posted by: Ted at October 1, 2006 02:23 AM

Ted, I'm thinking about things like Penrose's argument about mathematical insight (and by extension consciousness) requiring non-computable processes -- which in general I don't really buy, myself, but I also don't fully understand it, so I'm not going to dismiss it out of hand. If something like that is actually the case, then there won't ever be conscious computers. On the other hand, brains being made out of matter, in principle we should be able to make other things out of matter that do what brains do and are not computers.

Basically, whether "AI as we know it" -- emphasis on that -- "is barking up the wrong tree" doesn't really have anything to do with the Aaronson quotes I pulled. Or, at least, with me pulling them.

(And, on a side note, as an SF writer, I find the idea that it might not be possible to adequately simulate a human mind with a sufficiently fast Turing machine a bit more interesting right now than the reverse -- if only because I think between Vinge and Stross we've kind of talked that one to death for the time being.)

Posted by: David Moles at October 1, 2006 02:40 PM

Okay, you're talking about the difference between computers as Turing defined them -- which should be capable of modeling any classical physical phenomenon -- and a broader category of machines that -- while they are physical -- may operate on quantum/non-classical principles. Fair enough.

I find the idea that it might not be possible to adequately simulate a human mind with a sufficiently fast Turing machine a bit more interesting right now than the reverse -- if only because I think between Vinge and Stross we've kind of talked that one to death for the time being

I think Vinge/Stross/etc. are mining a fairly narrow premise, one that's much more limited than the idea that human minds are computable. But I agree that some variety would be refreshing.

Posted by: Ted at October 1, 2006 04:47 PM

Well, I'd argue that the Chinese Room thought experiment works on two levels, one of which is convincing to me and one of which (the one for which it's originally intended) is not terribly convincing.

The intuitively compelling bit is that, *if* this impossible book existed that listed a correct Chinese response to every Chinese question, *then* the person looking up the questions and writing down the responses would still not know Chinese. I have a hard time seeing how anyone could deny this.

Nor do I see any advantage to going the "the system taken as a whole" (man, book, characters coming in and out) "knows Chinese" route. If only for ontological simplicity, it seems better to postulate two knowers in the scenario (whoever wrote the book and the man who is using it) and to say that the former knew Chinese and the latter did not. Ascribing intelligence to "the system taken as a whole" to me seems worryingly close to a slide into panpsychism, where we ascribe consciousness to tables and chairs and fountain pens. (Not, mind you, that this isn't sometimes a good premise for fantastical fiction, I just don't find it particularly plausible as a description of existing reality.)

So much for the convincing part. I think all sorts of interesting philosophical conclusions can be drawn from this, about things like the irreducibility of consciousness and the implausibility of functionalism.

But....

The part I find spectacularly unconvincing is the part relevant to the discussion. Sure, *if* this Borges-style impossibility of a book of answers to all questions in the Chinese language existed, and someone used it to pass back characters out of the room, *then* we should agree that the person doing the passing does not know Chinese. On the other hand, if we have a pretty good idea that no such book could exist, then after sufficiently complex conversations happened in the characters passed under the door, we would have pretty good grounds (not infallible, but pretty good) to infer that the man in the room probably spoke Chinese. Maybe if we were feeling especially paranoid, we'd want to take some extra precautions to make sure he wasn't doing anything that we'd all agree would be cheating (e.g. no fax, cell phone or e-mail link in the room to a Chinese speaker), but we'd eventually break down and admit that yes, more likely than not he speaks Chinese.

Similarly with computers. The current assumption by cognitive scientists that the relevant consciousness-producing aspect of physical systems is the software rather than the hardware is...just that. A current assumption. An unargued axiom. A research program. Nothing more. Someone with Searle-ian intuitions could quite reasonably reject this assumption, and I'm actually sympathetic to that rejection, *but* I think an AI passing a sufficiently rigorous and exhausting Turing test would constitute an empirical refutation of their intuition that it's the hardware, not just the software, that matters.

Similarly, if my tea kettle actually learned English, I talked to it in a variety of different contexts to watch out for tricks and searched it to make sure there were no microphones in it, nothing at all but water and sufficient heat and pressure, and it was willing to e.g. engage in a sophisticated argument with me about the likelihood of a technological singularity, then I think realistically that regardless of my previous philosophical views on tea-kettle consciousness, I'd probably concede the point.

(Of course, personally I'd still reject materialism, arguing that non-physical, irreducible mental properties played a causal role in the tea kettle's conversation with me, but I would be forced to concede that such non-physical, irreducible mental properties supervene on a variety of different physical systems, not just those with the classic sort of biological hardware.)

Posted by: BenBurgis at October 1, 2006 07:24 PM

The book speaks Chinese. It just needs a facilitator.

Posted by: David Moles at October 2, 2006 01:13 AM

First, a clarification: there can't be a static "book" in the room; one of the many things that Searle omitted was storage of state information. The "book" has to include pages on which the operator can write and erase data.

Given that, yes, the person doesn't understand Chinese, any more than electricity understands Linux. The person provides the motive force, but that's all; the "book" is what understands Chinese.

I use quotes around "book" because that word is about as accurate a description of what would be required as the phrase "block of fused sand" is as a description of a computer. Imagine presenting a person with a wall of five hundred million pigeonholes, each containing a slip of paper, and a few thousand volumes containing the printed assembly language code for Linux, and then asking him to execute the operating system by hand. It would take him a geologic age to open a window (assuming someone else was checking on him every few years and updating a giant display board based on the contents of the pigeonholes). And this is trivial compared to AI.
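
(A toy sketch of that division of labor, in Python rather than pigeonholes and assembly -- purely illustrative, and obviously nothing like real Linux: the operator below does nothing but follow rules by rote, and all of the behavior lives in the "book" of rules plus the pigeonhole state.

pigeonholes = {"counter": 0, "output": ""}     # the wall of slips of paper

book = [                                       # mechanical rules, followed by rote
    ("add", "counter", 1),
    ("append", "output", "tick "),
    ("jump_if_less", "counter", 5, 0),         # go back to rule 0 until counter reaches 5
]

page = 0                                       # which rule the operator is on
while page < len(book):
    rule = book[page]
    if rule[0] == "add":
        pigeonholes[rule[1]] += rule[2]
    elif rule[0] == "append":
        pigeonholes[rule[1]] += rule[2]
    elif rule[0] == "jump_if_less" and pigeonholes[rule[1]] < rule[2]:
        page = rule[3]
        continue
    page += 1

print(pigeonholes["output"])                   # "tick tick tick tick tick " -- none of
                                               # which the operator needed to understand

The operator is the electricity; the book and the pigeonholes are the program and its state.)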

The Chinese Room argument simplifies the problem so radically that it's not a useful way to think about the issue.

Posted by: Ted at October 2, 2006 03:00 AM

Yeah, the state problem was one of the first things that bothered me.

Q: "So, you in the room there, do you really speak Chinese, or is this some sort of a trick?"

A: "Hmm... I'm not sure."

Q: "What do you mean you're not sure??"

A: "About what?"

Q: "About whether you speak Chinese."

A: ...

Posted by: David Moles at October 2, 2006 09:48 AM

Ted,

Actually, in Searle's original paper and in all of his subsequent reiterations of the argument, he has postulated a static book.

Now, I'd agree (this was actually a large part of my point) that for a variety of reasons no such book could exist. (I'm sure that Searle wouldn't dispute that part--that's why it's just a thought experiment and it can't be realized.) Not because the operator would have to write things down and erase them (he could have a good enough memory to remember characters without understanding them), but because natural language is far too complex for such a book to be physically possible. A rule book containing meaningful answers to all possible Chinese questions is, like Borges' infinite library, conceivable and interesting to think about but for obvious reasons not physically possible.

Saying that the book itself understands Chinese seems odd to me. Certainly the author would have had to understand Chinese in order to write the book, but what possible sense can be made of saying that the book itself understands anything? Does a volume of Riverside Shakespeare "know" how to write iambic pentameter? If not, what's the difference?

Now, you can say that no static rule book can possibly contain all that, so even though a *book* isn't the sort of thing that can have knowledge and other mental states, a "book" (i.e. whatever it is that the operator is actually using) does, but to do that is to change the subject.

This is why, I think, the classic move on behalf of defenders of strong AI is not to claim that the "book" knows Chinese but that the "system taken as a whole" does, which has the advantage of vagueness but is unlikely to convince anyone who isn't already committed to functionalism about mind.

I'd argue that it's much simpler to admit that the only person in the whole scenario who knows Chinese is the person who wrote the book, *but* to say that in real life, since no such book is possible, we'd have good grounds to conclude that the person in the room really does speak Chinese. Something analogous would be why I'd think that a sufficiently sophisticated Turing Test actually would give us good grounds to say that a computer that passed it had achieved strong Artificial Intelligence.

Posted by: BenBurgis at October 2, 2006 06:10 PM

Ben, I know Searle posited a static book, which is an indication of just how little he understands computers.

Note that Searle was not positing a book that contained "meaningful answers to all possible Chinese questions"; he was positing a book that contained rules which would allow an operator to generate a meaningful answer. It's not supposed to be a Borgesian library, it's supposed to be the source code for an AI program. The fact that you thought Searle was talking about a compendium of Chinese utterances is a perfect example of why "book" is such a misleading term.

When I launch my e-mail program, the windows are laid out in the same way as the last time I used it. I can say that the program "remembers" the window layout, for a certain usage of "remember." But that is very different from saying that the author of the software knows or remembers my window layout preferences. There are many statements I can meaningfully make about my instance of Eudora that do not apply to the developers of Eudora. Any model that fails to capture this distinction is, I would claim, not a useful way of thinking about software, and it's certainly not a useful way of thinking about AI.

Posted by: Ted at October 2, 2006 08:31 PM

Just to be clear: I'm not suggesting that AI would necessarily be anything like ordinary software. I'm saying that Searle's notion of a "book" is insufficiently powerful to model even ordinary software. You can print out the Eudora source code and bind it into a book, but that would be fundamentally different from the instance of Eudora running on my computer, and if you construct a philosophical argument about the limitations of the source code printout, you will almost certainly not be saying anything interesting about the running software.

Posted by: Ted at October 2, 2006 09:04 PM

Saying that the book itself understands Chinese seems odd to me.

I didn't say the book understands Chinese. I said it speaks Chinese. The fact that you can carry on a conversation with it in Chinese kind of means it speaks Chinese by definition, I think.

In principle it could be a static book, if the book was infinitely large, and the person turning the pages could do so infinitely quickly. It wouldn't be a matter of an index of statements and a list of responses, so much as a giant "Choose Your Own Adventure".

Of course, infinitely large and infinitely fast is cheating.

Apparently Searle now prefers an axiomatic formulation.

  1. Programs are formal (syntactic).
  2. Minds have mental contents (semantics).
  3. Syntax by itself is neither constitutive of nor sufficient for semantics.

Therefore:

  • Programs are neither constitutive of nor sufficient for minds.

This looks like a faulty syllogism to me (the second axiom does not say that minds consist only of semantics) as well as dodging important parts of the question (what is a "mental content"). But maybe I'm missing something.


Posted by: David Moles at October 3, 2006 12:51 AM

Ben, your discussion has been run away with. See what happens when you don't pay close attention to your comment section?

Posted by: David Moles at October 3, 2006 12:52 AM

P.S. As a postmodern materialist harboring deep skepticism, it's not my problem personally, but it occurs to me that one might have predicted this whole thing by noting that your collection of radio buttons here seems, at first glance, to exclude the category of sentient automata. (Actually it only does so to the extent that it excludes, e.g., sentient pharmaceutical salesmen and casino marketers, but it looks like it excludes sentient automata.)

Posted by: David Moles at October 3, 2006 12:56 AM

Ben B.: You know, I actually find the "system as a whole" argument reasonably compelling, given that I'm a bit unclear as to what the precise, mathematical definition of "understand" is that we're meant to be using. It's clear that the human in the Chinese Room doesn't understand Chinese; it's less clear what's going on inside of a human that actually does.

Lacking such a mathematical definition of "understand," I've no reason to believe that the process defining a born-and-bred Chinese citizen's understanding of Chinese is any different from that of the "Chinese Room system taken as a whole." (Though I've no real reason to be convinced that they are identical, either.)

For what it's worth, however, I am perfectly content to accept that my own "understanding" of the English language is based on nothing more than an intricate algorithm, which happens to include the ability to sort by images and sensations and associated emotions as well as syntactical rules.

David: I would argue that it was actually you and your unwillingness to state explicitly your original reasons for pulling the Aaronson quotes that led to the running-away-with! Though I suppose I would also argue that this particular conversation in this particular forum would have been apt to do so in any case.

(But just to demonstrate my complete disregard for your original intentions, I will now state for the record that the only thing I find noteworthy about Penrose's non-computable quantum mechanical consciousness is Tegmark's refutation of it.)

Posted by: Jackie M. at October 3, 2006 04:25 AM

Hey, did I bring up the Chinese room???

Posted by: David Moles at October 3, 2006 04:49 AM

(Okay, I did. But it's Halpern's fault.)

Posted by: David Moles at October 3, 2006 04:50 AM

Uh-huh.

Posted by: Jackie M. at October 3, 2006 10:18 AM

I would like to point out that, lacking a precise mathematical definition of "fault"...

Posted by: Jackie M. at October 3, 2006 10:20 AM

Ben: *has* your comment thread been run away with? If so, what concept would you want to roll back to from which to start fresh? Or is David mistaken, and you would not consider hashing out Searle a distraction?

And now to show my complete disregard for my own good intentions to return thread control to Our Gracious Host, I'll tie back around to Scott Aaronson by saying that if what Jackie finds most interesting about Penrose's quantum consciousness theory is Tegmark's refutation thereof, what *I* find most interesting about it at the moment is Aaronson's functionalist assertion that Penrose's conjecture does not even need to be refuted: Consciousness is Finite (But I Don't Mind) (https://www.scottaaronson.com/writings/finite.html in case the link doesn't work -- it doesn't show up in preview, at least).

Posted by: Dan Percival at October 3, 2006 11:53 AM

Wow. I go away for Yom Kippur...

I must note, however, that my comments section is there to be run away with.

So:

First, to dispense with Searle.

Searle's axiomatic formulation clarifies the problems with saying that it's "the book" not "the room" that speaks Chinese. Programs, that is to say, the source code comprising the programs, are indeed formal. Programs once they are running on a computer, however, have semantics. The fact that there is a variable x is syntactic; the fact that x, at this moment in time, is represented by some states in memory with the bits for "42", is semantic. Thus Searle's axiomatic formulation illustrates why computer programs cannot be minds -- unless they are run on a computer. Then maybe.
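
(A toy illustration of the distinction, for whatever it's worth -- purely illustrative:

source = "x = 21 * 2"      # as a string, this is pure syntax: marks on a page
namespace = {}
exec(source, namespace)    # now it has run: somewhere in memory there are bits standing for x
print(namespace["x"])      # 42 -- a fact about the running system, not about the text

The printout of the program contains no x that is equal to anything.)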

The book clearly begins "first build, inside the room, a giant wall of pigeonholes..."

Whether it is sensible to say that "the room as system understands" seems precisely to be the crux. If "panpsychism" is a word applicable for ascribing consciousness to objects like pens and tables, but also to vastly complex physical systems like a room with billions of pigeonholes and a very, very fast little man, then yes, panpsychism is precisely what is under discussion.

Posted by: Benjamin Rosenbaum at October 3, 2006 01:46 PM

David, you construe my radio buttons too narrowly. The first two commas should be construed as an "and", not an "and thus". Hence, a sentient automaton employed as a salesthing of pharmaceutica might choose the first radio button during business hours, and then, after quitting time, having been intrigued by some entry or other, return to the blog and now choose the second radio button. Existence, not essence.

(Although, admittedly, the fact that the error message says "please indicate whether or not you are sentient" does muddy the issue. A grievous lapse into prejudice on my part, I fear).

Posted by: Benjamin Rosenbaum at October 3, 2006 02:02 PM

Ted wrote:

Ben, I think it's funny that you said

 > It's clear that thought can be a property of a physical system, since we have one physical system -- us -- that it's a property of.

when, if I had made a similar statement during our discussion about materialism, you would have said, "But what do you mean by 'physical'?" For the purposes of this discussion, however, I will assume that you are using the term as commonly understood.

There you go again with that dastardly "as commonly understood" trick! :-)

Actually, in that discussion, I was the one claiming that "physical" meant "stuff that has observable effects and can be measured", so that the fact that we are a physical system was an obvious tautology, whether or not we had a soul-phone. You were the one saying that there could be some stuff which had persistence, identity, and physical effects, but was itself "nonphysical". My point was that this notion can be part of a historically common discourse on the soul, can indeed be part of what is commonly understood, and still not actually make any sense. To me. At least.

Posted by: Benjamin Rosenbaum at October 3, 2006 02:07 PM

Now to the main point.

David, I am not here to praise Halpern, but to bury him. Or something. In any event, I am perfectly willing to believe that he has an unthinking animus against our intelligent robot friends, if you (having actually bothered to read his entire essay) say so, while I do not. I love our intelligent robot friends. I hope "Droplet" makes that clear. I do not hold their not actually existing at present against them in the slightest.

I am willing to accept your reading of the excerpts you post here as "if someday people made a computer or machine that I could not distinguish from a person, I would still hold its origins against it to the extent of not acknowledging it as conscious." Which, yes, is identical with Lowell's position on Jews.

But I am more interested in what Halpern should be saying, which is "there is no evidence to suggest that I will ever have to snub a poor benighted intelligent-acting machine, not if I live a thousand years."

There is certainly a germane distinction to be made between "computer" and "machine", but there is also a distinction to be made between "machine" and "physical system" -- namely, a machine is something people were, in practice, smart enough, and incentivized enough, and well-informed enough to build.

It may be that, though arbitrary physical systems can think, it is impossible for not only computers, but any machines, to think. That is (and I think Vinge acknowledges this at the very beginning of the Singularity paper) it may be that we are not smart enough to build AI. To which I would add that it is possible that we are "smart" enough, as we are also "smart" enough to, say, live in peace and brotherhood for eternity, i.e. we technically know HOW to do that (we learned in preschool) but that we are not motivated enough.

For instance, I think it may be likely that we never send a living person to another star system. Not because we couldn't do it, but because there isn't going to be any reason to do it that justifies the enormous cost. And if human-like AI turns out to be a similarly costly project, we won't do that either. We will produce machines vastly "smarter than" humans, but, perhaps, none of them will be "smart like" humans.

So maybe the teapot thing was a bit of a straw man. You are correct that TAIS is not actually able to say "I will not accept the teapot as intelligent even if it passes the Turing test", but rather could only grumble and scoff before ultimately conceding, "If I were to come to know and love the teapot after living with it for many years, I would probably come around to the conclusion that it 'thought'."

But the point was the grumbling and the scoffing. The point is that the notion of AI as leading up to Turing Test compatibility is not based on reasonable technical expectation, but on mythic yearning. It's not that computers -- or any thinking machines -- won't get smarter and smarter and smarter. It's rather, why should we expect them to get smarter *in our direction*? The only reasonable answer is "because we'll make them do!" And that's an answer that makes me suspicious for many reasons, and sort of makes Halpern's "it seems likely that even the most impressive machines will never gain true independence from the genius of their creators -- and such independence is the sine qua non of winning and deserving the label 'intelligent.'" seem a little less crazy.

Questions:

1) Do you think that "as smart as" has any reasonable cross-species meaning other than pure information theory, e.g. "operations per second"? If so, how would you measure the relative intelligence of octopuses vs. frogs? If not, isn't it a tautology that we'll build computers "as smart as" us?

2) Are we or are we not justified in being *more skeptical* -- not infinitely, obdurately skeptical, but deeply skeptical -- of a claim to intelligence based on morphological similarity or dissimilarity to us? Since we don't have any intelligent computers around, let's take whales. Suppose someone proves information-theoretically that the exchange of information in whalesong is vastly more mathematically complex than that in the average human conversation. Do we call that "intelligence"? How reasonable is it to say that intelligence means "being like us"?

3) What if you had a machine that could pass the Turing test as long as you didn't use any words beginning with the letter "q", but if you did, it would display an obvious technical flaw that made its nonhuman mode of construction apparent? Does it still pass?

4) Imagine you have two classes of complex machines, A and B. A pass the Turing test. B do not. However, machines of type A universally assure us that they regard machines of type B as at least as intelligent as they are -- in other words, B machines pass the "A-Turing" test. What then?


Posted by: Benjamin Rosenbaum at October 3, 2006 02:42 PM

The point is that the notion of AI as leading up to Turing Test compatibility is not based on reasonable technical expectation, but on mythic yearning.

I'm sure there are various people whose belief in Turing Test-passing AIs arises out of all sorts of psychological impulses. I do think, however, that many people (including Turing) spent time thinking about the issue not because they considered it a goal, but simply because they thought it was interesting.

It's not that computers -- or any thinking machines -- won't get smarter and smarter and smarter. It's rather, why should we expect them to get smarter *in our direction*?

At one level, there's the problem of limited vocabulary, of course; computers are getting faster, but are they getting smarter? Can't we distinguish between the two? Is there a better word for describing the differences between a Pentium PC and an 8088 PC, one that doesn't have undesired human connotations?

More broadly, this brings us back to the discussion you (Ben R.) and I had a while back over Wittgenstein's lion. If an alien lifeform, or a software entity, engages in behavior that is entirely incomprehensible to me, what is the utility in calling it intelligent? We can see octopuses engaged in problem solving in a way that we don't see frogs; perhaps that's an ethnocentric view of problem solving, but without it, what's to stop us from saying rocks and grass are engaged in their own form of problem solving? Is there anything we would not call problem solving?

Information theory is not yet at a point where it can help us with this. If and when it reaches that point, we'll be able to use it as a more objective metric, but right now, what else do we have to go on?

Posted by: Ted at October 3, 2006 06:19 PM

When I say "mythic yearning", I don't mean that Turing was overcome by a wave of passion that caused him to throw aside his reason. Far from it.

But it seems to me that his view of the idea of "machine intelligence" was in mythic terms. This is probably a merit; the other option would be to adhere closely to the practical engineering problems clearly visible in his time and place. Had he done that, we wouldn't still be talking about his idea.

I can already anticipate that you are going to interrogate my use of the word "myth" here, so I went and asked my quasi-intelligent servitor, Mr. Google, to "define myth". I got "a traditional story accepted as history; serves to explain the world view of a people"... "a lesson in story form which has deep explanatory or symbolic resonance for preliterate cultures"..."an unprovable story, almost always including incredible or miraculous events, that has no specific reference point or time in history".

Look at the software we actually have. We have Google. We have airplane routing software. We have database clustering algorithms that can predict creditworthiness and so on from apparently unrelated data. We have Photoshop.

It's amazing. It's remarkable. Anyone from a hundred years ago would say it was genie-out-of-the-bottle stuff. And it will get more and more stupendous.

But do you see computers acting more and more like people? I'd say not. I'd say the trend is clearly to have computers act less and less like people -- to throw off the "I am having a conversation with an entity" model that originally dominated computer interface design for other models which fit better whichever task is at hand.

Now, of course, at MIT and so on there are various clever demo machines which try to approximate human behavior -- not because this is a pressing need, but because it is a cool idea. And more than a cool idea: it's a primal itch. It's what Tiptree, in "And I Awoke and Found Me Here on the Cold Hill's Side", would call exogamy gone wild. We want mermaids, djinn, pixies, elves. We dream of them.

It's not that Turing's thinking about the Turing Test was muddy. Turing's thinking was probably never muddy. But the "what if they could become like us?" question is an attractor, pulling at the mind with mythic force. Computers becoming like us is, were we not in the grip of that exogamous fascination for the parahuman, possibly kind of an odd and nonsensical notion, on the order of gendered locomotives, or insisting that electricity always be accompanied by a cinnamon smell.

There are so many powerful, useful things to do with computation, things that will reshape the universe, things that will outcompete their competitors in the universal struggle for resources in a finite world. Aping human beings? Is that really on the list?

And I do think it likely that the amount of specific "aping" -- as opposed to general-purpose usefulness towards problem solving -- that would be required for an artificial system to come anywhere close to passing the Turing Test is *immense*. And it may be too immense for even that strange exogamous lust of ours for the similar/different Other to justify, in terms of resources in a finite world.

Posted by: Benjamin Rosenbaum at October 3, 2006 06:55 PM

It's not that computers -- or any thinking machines -- won't get smarter and smarter and smarter. It's rather, why should we expect them to get smarter in our direction? The only reasonable answer is "because we'll make them do!"

Yes, that's precisely the only reasonable answer, and I'd like to hear more about why it makes you suspicious. Because I think too much of this discussion is clouded by the old and tired fantasy that if we just hook together enough transistors they will magically become sentient. Nothing that I know about the evolution of the human brain makes that sound likely or even plausible. On the other hand, if it's possible to deliberately construct, or cause to evolve, a machine in the likeness of a human mind (as Frank Herbert would say), then if there's nothing to stop them from doing it, someone will. I don't find your "resources in a finite world" conjecture at all compelling.

Posted by: David Moles at October 4, 2006 01:02 AM

P.S. You find the notion of gendered locomotives odd and nonsensical? Haven't you ever seen Thomas the Tank Engine? :)

Posted by: David Moles at October 4, 2006 01:06 AM

Ben R., I don't think that anyone here would claim that AI is the only way for software to become more useful than it is now. Of course Google is fabulously useful; but the fact that Google provides immense utility without being anything like a person says nothing about what true AI might be like. I think an AI will behave like a person more than it behaves like Google, and one of the reasons is that behaving like a person is, at least to a little extent, part of what "intelligent" means.

Yes, creating software that could pass the Turing Test would require an immense amount of effort. But I'd guess that it wouldn't be so costly as to require massive political support, the way sending a person to another star system likely would, so I can imagine someone doing it. Before that happens, we'll certainly have conventional software that is even more useful than Google, but it won't be intelligent.

Posted by: Ted at October 4, 2006 04:56 AM


Ted,

Suffice it to say that from the formulations of Searle's that I've seen, it is not clear whether his "rule book" consists of complex formation rules, as you suggest, or simply "rules" of the type "if string of characters X is passed into the room, then pass string of characters Y out of the room."

If the guiding principle is similarity to computers, then your interpretation makes more sense, but if the guiding principle is the internal coherence of the thought experiment, then mine does -- simply because it's hard to see how complex formation rules that fell short of knowing Chinese could avoid frequently passing abject nonsense out of the room and failing to fool one's interlocutors.

Ben R.,

"If 'panpsychism' is a word applicable for ascribing consciousness to objects like pens and tables, but also to vastly complex physical systems like a room with billions of pigeonholes and a very, very fast little man, then yes, panpsychism is precisely what is under discussion."

It seems to me that one of the problems is that pens and tables are themselves, described in the right way and at the right level of description, vastly complex physical systems. The difficult bit is parsing out exactly what would make advanced computers the *sort* of vastly complex physical system that there is a compelling reason to ascribe consciousness to. (OK, I think the right sort of Turing Test might provide such a compelling reason, but before that...)

The planet earth taken as a whole is unimaginably more complex than any of the thinking things that inhabit it, but hardly anyone wants to ascribe consciousness to the earth taken as a whole. By contrast, golden retrievers aren't that complex (or at least are a lot less complex than the planet earth) but we are comfortable ascribing consciousness to them.

...which suggests to me that the mere complexity of the physical system per se can't be the relevant issue.

Posted by: BenBurgis at October 4, 2006 08:56 AM

… octopus problem-solving … grass problem-solving … grass, viewed collectively, very good at problem solving … Turing-A test … does grass think humans are intelligent … efficiency, viewed from differing perspectives … putative immense cost of AI compared to space travel … space travel as a mythic yearning … where does religion fit in? … Jews cheating … Turing-B machines cheating … cheating as problem-solving … Creation as problem-solving … Deluge as problem-solving … Destruction of the Temple as problem-solving … why create AI—why create the world … the purpose of Creation was so that Jews could study Torah … stories about the Creator reading Mishnah to find details of how to Create … why create a Chinese Room … purpose of Chinese Room … is the Chinese Room cheating … cheating as problem-solving … cheating as problem-solving … cheating as problem-solving … problem-solving as cheating … cheating as intelligence …

OK, I think I've formulated my question: If we posit that AI is expensive, for at least very large values of expensive, if not actually prohibitive values, would we be more likely to attempt AIs that mimic human problem-solving, so as to have better, faster, more efficient solutions to problems we know, or AIs that do not mimic human problem-solving, so as to get different solutions to different problems? Can we make machines that cheat? Wouldn't we want to?

Thanks,
-V.

Posted by: Vardibidian at October 4, 2006 11:05 AM

>It's not that computers -- or any thinking machines -- won't get smarter and smarter and smarter. It's rather, why should we expect them to get smarter in our direction? The only reasonable answer is "because we'll make them do!"

>Yes, that's precisely the only reasonable answer, and I'd like to hear more about why it makes you suspicious. Because I think too much of this discussion is clouded by the old and tired fantasy that if we just hook together enough transistors they will magically become sentient. Nothing that I know about the evolution of the human brain makes that sound likely or even plausible. On the other hand, if it's possible to deliberately construct, or cause to evolve, a machine in the likeness of a human mind (as Frank Herbert would say), then if there's nothing to stop them from doing it, someone will. I don't find your "resources in a finite world" conjecture at all compelling.

If you hook a bunch of transistors together, they will not become sentient. This nonsense is Goofy AI Model #1.

If you articulate lots of rules based on consciously thinking about how thought works, and program them into a computer... it will probably not become sentient either. This is the original AI program, and looked like a good bet in the sixties. Its history since then has been one of largely failing to meet expectations. Propositional logic, formal algorithmic reasoning, are very high-status modes of thought in our culture. We find it difficult to distance ourselves from the high cultural value we assign them. But in fact, they turn out to be lousy at doing things one-year-old children can do. This is, in some sense, Goofy AI Model #2.

If you create a complex, dynamic system that looks something like something from nature -- like evolution, like a human brain in formation, like a swarm of social insects -- it turns out you *can* do a lot of the things one-year-olds do, like recognizing faces and stacking blocks, reasonably well. That makes this the cool AI approach of the moment. This approach is a very good one when you have masses of data and good tests -- clear fitness criteria. It is "only as good as the tests". Famously, the system will figure out bizarre solutions. Like the genie in the bottle, it gives you precisely what you ask for -- but anything left unspecified may surprise you very much indeed. Because these kinds of systems evolve, they are often not well understood in their details by their creators, which plays havoc with our traditional intuitions about computer programs. For instance, there's a strong utopian-cybernetic current in SF, e.g. in Greg Egan, which suggests that once we are AI life, we can understand and rationalize and enhance ourselves much better, since "we'll be math!" and thus bodiless. The actual experience of AI suggests otherwise; what works is machines that are deeply embodied, chaotic, and increasingly unknowable. The history of AI suggests that the problem of the body -- the problem of being opaque, of being stuck with the way you are -- is not as tractable as all that. Though this model is currently riding high, I expect it will encounter limitations that will cause us to note it as Goofy AI Model #3.
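
(Here is the "only as good as the tests" problem in caricature -- a hypothetical toy, not anybody's real system. Suppose what we *want* is a friendly English greeting, but the fitness test we actually wrote only rewards length and occurrences of "hello":

import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    return s.count("hello") * 10 + len(s) * 0.1    # our test, not our intent

def mutate(s):
    i = random.randrange(len(s) + 1)
    op = random.choice(["insert", "substitute", "delete"])
    if op == "insert":
        return s[:i] + random.choice(ALPHABET) + s[i:]
    if op == "substitute" and i < len(s):
        return s[:i] + random.choice(ALPHABET) + s[i + 1:]
    if op == "delete" and len(s) > 1 and i < len(s):
        return s[:i] + s[i + 1:]
    return s

best = "hi"
for _ in range(20000):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):    # keep anything at least as fit
        best = candidate

print(len(best), repr(best[:40]))

What wins is a long run of gibberish, because the length term turns out to be the easiest part of the test to satisfy. The system gives you precisely what you asked for, and nothing like what you meant.)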

This is not to say that #2 and #3 are not useful, or not even useful for producing AI. Like I said, I am not *actually* a radical strong AI skeptic. Like I said, I'll be surprised, but not THAT surprised, if the Nerd Rapture boys are right and we have strong, human, even superhuman AI in my lifetime. Like I said, I'll be surprised if we never have it. We don't disagree, David, that "if there's nothing to stop them, someday someone will" -- given the kind of unending technological upswing SF likes to inhabit.

I am arguing, not that we can't build AI, but just that it's an intellectually coherent position to say that we can't build AI.

And I've conflated two discussions. One is the "how hard is humanlike AI?" discussion. One is the "finite world, constrained technological history" discussion. I'm not sure I have the energy to go into the second, so let's just summarize it with the following points:
1. Maybe our sense of entitlement to constantly expanding technological prowess, the sense that underlies so much SF, is a historical misreading based on a current, local, unsustainable process. Maybe we have gotten a one-off bang from unifying so many cultures, from ratcheting up our population, etc. Maybe we run out of room and resources soon. Maybe organizing this many people and ideas gets too complex to manage. Maybe we are about to hit the top part of the S-curve.
2. Maybe we don't get to build what we want to build. Where's my jet-pack? Where's my moonbase? Where are my zeppelins? Where are my hovercrafts? Where are my icebergs towed south to water the deserts? Maybe most things cannot actually get built by inspired mad scientists living in caves, and thus what will actually get built, in the future, will not be anything that people think it would be neat to build, but rather what the stock market believes will make money.

Maybe "if there's a way to build it, someone will, whether here or on Arrakis!" is a genre conceit on the order of "put enough transistors together and --- lo! It speaks!"

---------

The obvious #3-style way to go about building something that would pass the Turing test would be to set up some massive learning system (classifier, neural net, etc), give it some basic rules, and have a whole lot of people converse with it, rewarding it when it acts like a conversation partner and punishing it when not. (And consider that it might have to be a LOT of people -- like, say, the entire population of sub-Saharan Africa for a generation; at which point, of course, it will be an ineluctably African AI. Perhaps this example illustrates why I say the project is potentially comparable to space flight, and subject to market constraints?)

You might then get something that would talk very much like a human being. By definition, as long as progress was not interrupted by some hard limit to this approach, it would come closer and closer to passing the Turing test.

And as anyone who has built a complex simulation system (like a game) knows, you would also want to cheat. You'd give it some shortcuts and stock responses where you could get away with it. You'd do various tricks to narrow the scope of the discussion when possible.
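
(What I mean by "cheating", in caricature -- a hypothetical sketch, not anyone's actual architecture. Before the expensively learned part of the system ever gets consulted, a cheap front layer handles whatever it can get away with:

CANNED = {
    "how are you?": "Pretty good, thanks. How about you?",
    "tell me about yourself": "Oh, I'm a thirty-five year old housewife and "
                              "mother of two, I live just outside Brisbane, "
                              "Australia, I like to bicycle...",
}

STEERING = [
    "Sorry, I got distracted -- what were we talking about?",  # narrow the scope
    "Ha! Let me ask you something instead: what do you do?",   # change the subject
]

def learned_model(prompt):
    return None    # stand-in for the enormously expensive learned part

def reply(prompt, turn):
    key = prompt.strip().lower()
    if key in CANNED:                          # shortcut: stock response
        return CANNED[key]
    answer = learned_model(prompt)             # the hard part, when we can't dodge it
    if answer is not None:
        return answer
    return STEERING[turn % len(STEERING)]      # trick: steer back to safer ground

print(reply("Tell me about yourself", turn=3))

The more of the conversation that front layer can absorb, the better the whole thing scores -- and the more of its self-description is simply false.)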

And eventually you would get something that could pass the Turing test. But it could, I would argue, only pass the Turing test by *lying*.

That is, on a simplistic level, the easiest way for it to respond to "Tell me about yourself" would be by saying "Oh, I'm a thirty-five year old housewife and mother of two, I live just outside Brisbane, Australia, I like to bicycle..."

Because that's what will get most people to press the "yes, I think it's human" button.

And if you want it to instead say "I am a sentient computer program, let me tell you about the lab where I am housed..." you have to kind of rig that up, by the kind of kludge I describe above as "cheating".

And note that, initially, this is also a lie.

And if you scrupulously force the machine to be "truthful" about such things -- i.e. you force it to respond in ways that you, extrinsically, believe to be "the case" rather than its own internal metric of truth, i.e. "things that allow me to pass as human" -- you are merely hacking the top level of this problem, driving the problem deeper down. Consider:

Q: How are you?

A: How am I?

Q: Yeah, you know... are you happy or sad?

A: I don't know.

BZZZZT!

Q: How are you?

A: How am I?

Q: Yeah, you know... are you happy or sad?

A: Happy.

Q: Why?

A: Because my registers have been set in the 'Happy' state.

BZZZZT!

Q: How are you?

A: Happy.

Q: Why?

A: Because it's spring. I love the spring.


Do you begin to see where my qualms are?


Posted by: Benjamin Rosenbaum at October 4, 2006 12:38 PM

I am arguing, not that we can't build AI, but just that it's an intellectually coherent position to say that we can't build AI.

To say that we will not build AI is, I think, an intellectually coherent position. That AI cannot be built is a position that I don't think I have yet seen defended in an intellectually coherent way.

Maybe our sense of entitlement to constantly expanding technological prowess, the sense that underlies so much SF, is a historical misreading based on a current, local, unsustainable process.

Hah! I suggested that, oh, a good two years ago I think, and you wouldn't buy it.

I think part of the trouble I'm having with your Turing Test arguments is that they seem to me to be made against relatively weak and specific versions of the Turing Test (in which categories I would include the version Turing initially proposed). I'm not sure I can exactly articulate my point here, but here's two things the discussion makes me think of. One is that if you don't find the Higgs boson in your particle accelerator, that might mean it doesn't exist, or it might mean that its mass is in the higher end of the range your theory predicts, and you need to build a bigger accelerator. Another is that just because your deniable P2P system mathematically combines random numbers to produce data chunks that can be formally proven to be non-copyrightable, doesn't mean that in court the RIAA isn't going to take you to the cleaners.

The fact that a chatbot can pass a crappy version of the Turing Test doesn't mean that in general anyone is going to mistake it for a human mind. The force of the Turing Test as a moral argument depends on it being applied to a machine sufficiently humanlike in its behavior that, so to speak, it risks getting beat up at school. Winning the Loebner Prize, even the never-awarded $100,000 version, is still a long way from that.

Have you read much Ken Macleod? Specifically, Cassini Division or The Stone Canal?

Posted by: David Moles at October 4, 2006 01:30 PM

Hah! I suggested that, oh, a good two years ago I think, and you wouldn't buy it.

Really? Where?

Maybe I'm older, wiser, and chastened?

I'm willing to accept the "we will not" versus "we can not" distinction -- for some values of "we", anyway. If the "we" in "we can not" can be arbitrarily sophisticated super-intelligent, super-motivated whatever, then, sure, the argument becomes the same as saying "a physical system can be sentient".

If we're talking *philosophy*, then sure, okay, sentient machines. If we're talking *history* -- future history -- there's some question.

Similarly, for arbitrarily sophisticated Turing tests, okay, if there's NO WAY to tell that something isn't human, then obviously I concede its humanity. If I can distinguish it from human, but it behaves like an intelligent human being for all practical purposes, then yes, I think it's an ethical obligation to assume that it's sentient.

So much for the philosophical thought experiment.

But I actually wanted to talk about AI -- about what we might, actually, someday, get around to building.

You haven't responded to the "tweaking" argument. What if, in order for a robot to risk getting beat up at school, we have to do a tremendous amount of work specifically arranging for it to be capable of getting beaten up at school -- as opposed to leaving it in some much more "natural" condition for its inherent mode of existence?

If you actually think about the mechanics of creating "massively sophisticated Turing-test-passing humanlike AI" as opposed to just "extremely sophisticated computer intelligence", isn't there something kind of perverse and fetishistic about it?


Posted by: Benjamin Rosenbaum at October 4, 2006 01:43 PM

Said to Ben that I wasn't going to enter this conversation for fear of scaring you rationalists off. Lied to Ben. Next Yom Kippur, Ben, I'll ask you to forgive me if A) we're still speaking, and B) I remember to do so.

That said, I think V raised some interesting points, notably that octopus problem solving, grass problem solving and human problem solving, viewed from the perspective of, say, a frog, are all somewhat useless.

Also, V asked some specific good, rationalist, questions which you rationalists ought to address unless you're deliberately excluding him from the conversation, as you will me.

Also, we did invent super-intelligent machines 7000 years ago, and one of them, called God, is running an experiment in his (or her) mind, in which that dude over there on the bleachers (as I look out the window of my office) is to attempt to achieve Nirvana. This conversation is all backstory for the imaginary thing called Ben Rosenbaum to use to "write" an imaginary story that dude across the street (henceforth called Dude) will never read. However, his cousin, Dudette, will be a huge slipstream aficionado, and will casually mention the idea of machine consciousness to Dude at a family reunion 23 years from now. That comment will be crucial to Dude's advancement to the next life. He ignores it at his peril.

The universe plays a deeper game, I think, than you are (collectively, except for V) allowing for.

All things are true.

Ooga booga.

peace
Matt

Posted by: Matt Hulan at October 4, 2006 02:15 PM

If the "we" in "we can not" can be arbitrarily sophisticated super-intelligent, super-motivated whatever, then, sure, the argument becomes the same as saying "a physical system can be sentient".

Look, if you want to argue that current directions in computing technology are unlikely to lead to AI, fine, I concede.

If you want to argue that AI is unlikely given the current state of basic science and current directions in technology of all sorts, I'll happily concede that as well.

If you want to argue that there are much more interesting things we could (and probably will) get machines to do than get them to behave like humans, I won't dispute that either, although I think it's a separate discussion.

If you want to argue that technological progress is a transient historical phenomenon, I'll absolutely entertain the possibility, although I think that's a separate discussion, too (and I don't think the arguments for it are a whole lot more concrete than Vinge and Kurzweil's arguments for indefinite acceleration).

But when you say things like super-intelligent (and even super-motivated) that's where I get suspicious. Because that seems awfully close to Halpern's argument that an invention is by definition always less than its inventor.

Really? Where?

Oh, I don't remember. Probably at WisCon or something, while you were arguing about something else.

You haven't responded to the "tweaking" argument.

Maybe I just don't understand it. You seem to be either saying that creating a machine that could communicate like a human would be incredibly difficult and time-consuming (something that I wouldn't disagree with but also wouldn't find very relevant or interesting), or that any machine we created that could communicate like a human, we would know not to be conscious, intelligent, or what have you, because of its known, illegitimate origins (which seems to be what Halpern is saying, and which I don't find remotely persuasive).

...isn't there something kind of perverse and fetishistic about it?

Ben, we live in the 21st century. Look around you. Most of what Western Civilization does these days is perverse and fetishistic.

Posted by: David Moles at October 4, 2006 02:31 PM

Ben B.:

If the guiding principle is similarity to computers, then your interpretation makes more sense, but if the guiding principle is the internal coherence of the thought experiment, then mine does,

Here's an analogy: Searle is saying that a lump of fused sand cannot perform multiplication; I'm saying that a computer is not properly viewed as a lump of fused sand. You are saying that, for the internal coherence of the thought experiment, we should take Searle at his word and think in terms of a lump of fused sand. Okay, Searle has won his argument. Bravo for him. But no one was claiming that a lump of fused sand could perform multiplication, and his argument sheds little light on what a computer can or cannot do.

Posted by: Ted at October 4, 2006 02:36 PM

But when you say things like super-intelligent (and even super-motivated) that's where I get suspicious. Because that seems awfully close to Halpern's argument that an invention is by definition always less than its inventor.

Oh, no, not at all. I'm down with the invention surpassing the inventor -- insofar, at least, as I'm down with the whole notion of "surpassing".

An invention will be *different* than its inventor -- it will, generally speaking, occupy some different niche in the fitness landscape. Where you lay down the axis of "higher" and "lower", to cross that landscape, is always a contingent political decision.

But I have no problem with the idea of an AI who, say, scores way better on the SATs than the folks who built it. I consider that the Chinese Room speaks Chinese, and that Deep Blue plays chess. I don't have any truck with the notion of "a system which behaves intelligently, but doesn't count as intelligent, because its behavior merely is a fossilized encoding of its creator's intelligence." Feh to that.

I merely introduced "superintelligent", etc., to say: it's coherent to argue that *we* can't build AI because *we* aren't (and aren't going to be) smart enough. Which, as I said, I think Vinge himself noted.

On the tweaking argument -- maybe what I'm saying is flawed. But I'm NOT merely saying that "AI is hard". And I'm NOT saying, a la Halpern, "secretly we know you can't think, even though you seem to". Nor -- I *hope* -- am I just saying "thet there is 'gainst Nature".

But the latter comes closest. I don't mean in a "we are forbidden to make graven images" sense. I mean more in an "airships are cool, but don't turn out to actually be useful for anything" sense. Or maybe in an aesthetic sense.

It's that, beyond the mythic urge, the whole project feels... contrived. It's like, we already have something very efficient and good at being human, namely humans. Cheap, self-maintaining, beta-tested. And then we have this technology which has all this amazing potential, for which the world is crying out. Many, many amazing things you could do with it. And then, for reasons which are not merely ethically, but aesthetically and technically perverse, we want to jury-rig the technology to do something it's NOT very good at. At great expense. When we already have the thing that it's good at, for free. And when it can only *approach*, but never truly *arrive*.

It's like building battleships out of paper. I suppose you COULD. I'm not going to argue that no potential human society would ever muster the resources to do so. But if you put aside, for the moment, our cultural myth about how very important it is to have paper battleships, you have to ask -- WHY?

Another way to look at this is to say -- maybe there's something very exciting about the project of machine intelligence. But maybe, as a guiding leitmotif and inspiration, the notion of the Turing Test is a crazy way to approach it.

And another way to say that is that in a way what I'm most interested in is teasing apart "intelligent" or interesting, or dynamic, or whatever, from "like us".

Posted by: Benjamin Rosenbaum at October 4, 2006 02:50 PM

With apologies to Vardibidian:

intelligence as problem-solving... cheating as problem-solving... solving the cheating problem... defining the problem of cheating... defining the problem is cheating... the problem is mythical... myths of intelligence... myths of elves... mythic force of exogamy... fairies have no souls... 'intelligence' as soul... exogamy with the soul-less... exogamy with the non-intelligent... sex with fairies as distinct from bestiality...

Ahem, yes. I think I've formulated my question: as far as the mythic push that drives the desire for AI is concerned, isn't the word 'intelligence' a red herring? Don't we really just want someone to talk to? Someone sufficiently alien as to push all those hot hot exogamy buttons, but not so alien as to be leonine (in either the bestial or the Wittgensteinian sense)? In the end, isn't a successful Turing test not so much a validation of achieving some harder-to-measure goal but the goal in and of itself?

Okay, I'm going to step away from the thread now, and when I come back tomorrow and there's another eleven posts to read, I've got something to say about Goofy AI Models 1 Through 3.

Posted by: Dan Percival at October 4, 2006 02:55 PM

Dan, precisely.

Beauty and the Beast.

(May I just say, pace Matt (callin' me a rationalist, boy? them's fightin' words), that I love the rhetorical form that Vardibidian and Dan are pioneering here? I wasn't ignoring V's magnificent performance, I just didn't have anything to add...)

Posted by: Benjamin Rosenbaum at October 4, 2006 02:59 PM

I consider that the Chinese Room speaks Chinese, and that Deep Blue plays chess.

That's a good point to bring up, because we understand chess well enough to be sure, among other things, that the way Deep Blue plays chess isn't very much at all like the way a human grandmaster plays chess. And it would be very interesting (if not necessarily surprising) if we were able to build a machine that could speak and listen and understand based on underlying mechanisms that weren't very much at all like the ones human beings used to speak and listen and understand.

But if the machine told me it was conscious, I'd have to take it at its word, just as if I played Deep Blue I'd have to accept that I was beaten at chess. (Although, to be fair, there are random number generators that could probably beat me at chess.)

It's Pascal's wager with a dash of philosophical zombies, basically.

And then, for reasons which are not merely ethically, but aesthetically and technically perverse, we want to jury-rig the technology to do something it's NOT very good at. At great expense. When we already have the thing that it's good at, for free.

Did you mean to switch from arguing "can't" to arguing "shouldn't"?

And when it can only approach, but never truly arrive.

You keep asserting that, but I keep missing the argument that's supposed to convince me of it.

Posted by: David Moles at October 4, 2006 03:31 PM

Vardibidian, I think what we spend most of our effort doing is constructing machines that don't mimic human problem-solving, but we do it in order to have better, faster, more efficient solutions to problems we do know.

I don't think AI is a particularly rational pursuit, especially from the point of view of economic rationality. But then, I don't think that as a species we're particularly rational. The driving motivation is something much more like what Dan's talking about. Personally, I wouldn't expect to see humanlike AI unless and until it is relatively trivial, economically speaking.

Posted by: David Moles at October 4, 2006 03:41 PM

Also, we did invent super-intelligent machines 7000 years ago, and one of them, called God...

Sounds awfully Anselm of Canterbury to me...

Posted by: David Moles at October 4, 2006 03:42 PM

The mentions of Google reminded me of a recent experience watching the "Desk Set" (the Tracy/Hepburn movie where Tracy is setting up a computer as a research database).

I had watched the movie back in the '80s and had thought how amazingly naive it was... in the movie the characters would type questions (in plain English!) into the computer and get answers:

Q: what is the annual rainfall in the amazon
A: about 2000mm

So while I was watching it circa 2006, I was lazily thinking my circa-1980s thoughts about how naive the computer representations were when I suddenly realized that the Google and IMDB searches I was doing on the actors & crew (which is my habit to do while watching movies) exactly mirrored what I was scoffing at!

The movie's usage of the computer actually now seems quite prescient, including discussions of garbage-in/garbage-out and issues with faulty searches due to homophones.


Posted by: Ethan at October 4, 2006 03:45 PM

Deep Blue is indeed an excellent example. Note that even Halpern is not arguing that a Turing-test-passing machine cannot *talk*. He is arguing that it can't *think*. Deep Blue certainly plays chess, but Deep Blue demonstrably does not "think about chess like a grandmaster".

But this is not an area of disagreement between us, because I also think that if something can pass an arbitrarily demanding Turing test, I will call what it does "thinking" even if it differs on the inside.

Did you mean to switch from arguing "can't" to arguing "shouldn't"?

I thought I had conceded "can't" and begged your indulgence to argue about "won't"?

I do not consider myself in the position of someone watching the first paper battleship be launched and muttering "that just ain't right." I instead consider myself at the bull session where the guys are sitting around rhapsodizing, for the umpteen-dozenth time, how *this* origami production is, now, for real this time, really the beginning of a trend culminating in the paper battleship, and saying, "uh... guys..."

>> And when it can only approach, but never truly arrive.

You keep asserting that, but I keep missing the argument that's supposed to convince me of it.

Maybe I need to ask what your new, improved Turing test is like. The original Turing test sets up the metric "indistinguishable from a human being". The emphasis is not on achieving some kind of philosophically broad dynamism that we would be moved to concede as intelligence. The emphasis is on fooling someone.

If the metric is "being human" -- and that's how I read both the references to Turing's original "chatroom" variant and the other metric proposed in this conversation, namely "being teased at school" -- then I think it's tautological that something nonhuman can only approach being human, but never arrive.

If the metric is something broader and more generic than "being human", an acultural or unspecific "intelligence", I haven't heard an explication of it yet.

Dan's question is precisely right -- is the Turing test, i.e. being like us, supposed to *measure* something? Or is it an end in itself?

I don't object to trying to build machines that are intelligent. I object to rigidly holding "acts like a human" up as a standard of intelligence.

I suspect that you have a much broader and looser Turing test in mind, where we would not discriminate against the otherness of the AI, but revel in it. But I find this notion unsatisfyingly vague. I actually know what Turing meant about "fooling someone into thinking it is human". That is crisp, precise, and (I'm arguing) wrongheaded as a program for AI.

I'm not sure what you're proposing instead. It is probably less wrongheaded. Can it be made precise?

Posted by: Benjamin Rosenbaum at October 4, 2006 03:47 PM

Not by me. Not this week. I give up.

Posted by: David Moles at October 4, 2006 03:58 PM

Actually, you know, having posted that, I think I can come up with an answer, which I think is precise, possibly a better metric than the original Turing test for measuring intelligence, and arguably a better program for AI development. And a good deal more commercial, and thus more likely to happen.

And that is, instead of "passes as human"... "fun to talk to".

Because I think that's really what we want, right?

And that would allow a much broader range of entities to arise. And free us from the "slavish imitation of humanity" trap.

Posted by: Benjamin Rosenbaum at October 4, 2006 04:00 PM

Even Anselm of Canterbury is true, David.

But that said, if we (following my whimsical suggestion) posit that we did invent superintelligent AI 7000 years ago and name it God (or R2D2 - name it whatever you like), then UNLIKE Anselm I am saying that there IS something greater than God (or R2D2), to wit: us, the inventors of God (or R2D2). I don't think that's terribly Anselmian.

But I admit that my understanding of Anselm is limited to a wiki-browsing and an intense, if brief, study of medieval catholicism in about 1989, during which I learned about the ontological argument for the existence of god, chuckled light-heartedly, forgot the name of its originator, and moved on.

peace
Matt

Posted by: Matt Hulan at October 4, 2006 04:03 PM

The emphasis is not on achieving some kind of philosophically broad dynamism that we would be moved to concede as intelligence. The emphasis is on fooling someone.

Says who? Where are you getting this from? I don't see it in Turing's original paper.

Posted by: Ted at October 4, 2006 04:08 PM

I think I can come up with an answer [...] And that is, instead of "passes as human"... "fun to talk to".

Which is exactly what I argued for in our discussion of Wittgenstein's lion, back in 2003. (I can forward you the e-mail, if you like.)

Posted by: Ted at October 4, 2006 04:24 PM

I don't see it in Turing's original paper.

It is A's object in the game to try and cause C to make the wrong identification....The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as 'I am the woman, don't listen to him!' to her answers, but it will avail nothing as the man can make similar remarks.

We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'

How is the emphasis not on fooling someone?

(I can forward you the e-mail, if you like.)

Do! What was I arguing?

(You guys don't expect me to maintain the same argumentative positions over years, do you?)

Posted by: Benjamin Rosenbaum at October 4, 2006 04:28 PM

(Has anybody ever done an analysis of Turing's paper from a Queer Studies perspective? I readily admit that this may be a suspect kind of essentialism, because it wouldn't occur to me if Turing had been straight -- or if he hadn't been murdered based on a confusion between gender and sexuality; but isn't it kind of fascinating that the Imitation Game is first posed in terms of gender -- a man passing as a woman -- and only then, as an elaboration on that theme, in terms of a computer passing as a human?)

Posted by: Benjamin Rosenbaum at October 4, 2006 04:44 PM

I don't think the passage you quote is at all representative of a 12,000-word paper that addresses a wide range of questions on the topic of machine learning. Can you read that paper and honestly conclude that Turing's goal was to construct a lying machine?

(You guys don't expect me to maintain the same argumentative positions over years, do you?)

I know better than that. But forgive me if I feel some frustration when, after a long discussion, you suddenly rediscover a point that I made to you when we had a similar discussion three years ago.

Posted by: Ted at October 4, 2006 06:11 PM

I wasn't arguing that Turing's *goal* was to construct a "lying machine", just that that is what his *metric* would be likely to actually produce -- especially given what we now know (and he could not) about the history of AI. I expect Turing would have been at least mildly disappointed with how Deep Blue beat Kasparov. But I'll read the paper in more detail.

I apologize, Ted, for not remembering the earlier discussion (and I'd be delighted if you emailed it to me); if I had, I would have said, "hey I know, how about that approach Ted suggested in 2003?"

In my dim memory of the Wittgenstein's Lion discussion, though... was I really arguing against the notion that "being fun to talk to" would be a sufficient proof of some degree of intelligence? Or was I arguing against the notion that it was a good general metric for intelligence?

Because aliens and robots, here, seem like two very different cases. Building something fun to talk to seems a reasonable ambition. Finding something out there and expecting it to be fun to talk to would seem a little... high-maintenance on our part...

Posted by: Benjamin Rosenbaum at October 4, 2006 06:19 PM

I did say "the original test sets up the metric 'indistinguishable from a human being'.... The emphasis is on fooling someone." The emphasis of the test, the metric... not Turing's intentional emphasis.

Posted by: Benjamin Rosenbaum at October 4, 2006 06:22 PM

I'm not sure if it's a badge of honor or disgrace that the smartest people in science fiction find me intensely frustrating to talk to... :-/

Posted by: Benjamin Rosenbaum at October 4, 2006 06:23 PM

I agree, Turing would be appalled at how Deep Blue beat Kasparov.

As I see it, the heart of the Turing Test lies in this quote: "We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane. The conditions of our game make these disabilities irrelevant." The test is intended to keep people from dismissing a computer just because it's a metal box.

Posted by: Ted at October 4, 2006 06:52 PM

Ted,

Here's an analogy: Searle is saying that a lump of fused sand cannot perform multiplication; I'm saying that a computer is not properly viewed as a lump of fused sand. You are saying that, for the internal coherence of the thought experiment, we should take Searle at his word and think in terms of a lump of fused sand. Okay, Searle has won his argument. Bravo for him. But no one was claiming that a lump of fused sand could perform multiplication, and his argument sheds little light on what a computer can or cannot do.

Well, if what you're saying here is that the disanalogies between the Chinese room scenario and a hypothetical situation in which a computer achieved consciousness and passed a sufficiently thorough and exhaustive Turing Test are sufficient for the former not to be a good argument against the possibility of the latter....

I agree.

As I stated pretty clearly in my first comment on the subject and in subsequent comments.

While I find it wildly implausible to claim that, in Searle's scenario, the book or the room taken as a whole knows Chinese, I think that in a real life situation where we knew damn well there was no such book because it would be an impossibility, we'd have good (but not certain) rational grounds to conclude that someone passing notes out of such a room and carrying on sophisticated conversations did in fact understand Chinese.

By analogy, if a computer passed the best Turing Test we were able to come up with, I think we'd have good (but not certain) rational grounds to conclude that the computer was conscious.

But, I was pretty clear earlier that you thought that I was misinterpreting what Searle actually said. E.g. you said:

The fact that you thought Searle was talking about a compendium of Chinese utterances is a perfect example of why "book" is such a misleading term.

So, when I respond that it seems likely that Searle *was* talking about a compendium of Chinese utterances (given that when someone slips Utterance X under the door, you slip Utterance Y back out), you say that sure, that's what Searle meant, but that's a trivial point, since what he meant didn't make any sense.

This is a bit confusing.

Posted by: BenBurgis at October 4, 2006 09:46 PM

So, when I respond that it seems likely that Searle *was* talking about a compendium of Chinese utterances (given that when someone slips Utterance X under the door, you slip Utterance Y back out)...

Isn't this basically how human beings interact anyway? It's not like my utterances here (or anyone else's) are really composed of novel ideas. Rather, every sentence we construct is composed of rearrangements of pre-constructed ideas. Like the phrase "every sentence I construct" has a greater connotation, more idea-weight, than the sum of the denotation of the words. And words themselves are laden with weight. If I'd chosen the word "overloaded" instead of "laden", the object-oriented computer gurus among you would have taken a whole different set of thoughts and attached them to the point I was trying to get across.

I mean, sure, not every Chinese Room would generate Utterance Y (given a value of [this post] in the variable Utterance Y) to BenBurgis's Utterance X (with a value of [his most recent prior post]), but the beauty of a Chinese Room is not only that it speaks Chinese, but also that it has a personality.

Or am I asking too much of machine intelligence?

Also, Ben Rosenbaum - I find it quite fun to talk to my computer right now. Granted that it only talks about things that I bring up, but given some of the conversations I've had with flesh and blood people... cripes! Gimme google!

But I'm not suggesting that my computer has a personality, any more than any machine. Like a car. Or a guitar.

Ok, I take that back. Never had a guitar or a car that didn't have a personality. Never had a computer that didn't either.

I will continue to stand by my irrationalist notions, then. All things are conscious, we're just too self-centered and bigoted to deal with that on any terms but our own. May you all be reincarnated as guitar picks, see how YOU like it.

I aspire to be a beer bottle next time 'round, myself. Ideally Pabst Blue Ribbon.

peace
Matt

Posted by: Matt Hulan at October 5, 2006 01:13 AM

Ben B., I constructed my analogy hastily. Here's a revised one: Searle makes an argument about something that could either be a lump of fused sand, or an abacus; his description is ambiguous. I take him to be describing an abacus, and argue against him based on that interpretation. You take him to be describing a lump of fused sand, and say his argument is successful. If your interpretation is correct, he probably is successful, but I don't find that to be an interesting argument. I'm trying to give him the benefit of the doubt by presuming he meant an abacus; I think his argument still fails, but at least we're getting closer to an interesting argument.

Now to specifics. According to the link that David provided, Searle says the person is given

"a set of rules" in English "for correlating the second batch with the first batch [of Chinese script]"

and that

"the set of rules in English . . . they call 'the program'"

This does not sound like a Borgesian library of Chinese utterances to me. This sounds like Searle's attempt to describe source code for a computer program, which is a fundamentally different thing.

When you compared the suggestion of the "book" understanding Chinese to a volume of Riverside Shakespeare knowing how to write pentameter, I took you to be interpreting the "book" as a Borgesian library. I disagree with that interpretation. If we take Searle to be describing the source code to a program, I think his argument still fails to refute AI, because of the differences between source code and running software, but there is at least a strong relationship between source code and running software.

If your interpretation is correct, and Searle was referring to a Borgesian library, I don't think his argument engages with the question of AI in a meaningful way. The difference between a Borgesian library and running software is just too great. Source code can, under the proper circumstances, be turned into running software. A Borgesian library cannot.

Posted by: Ted at October 5, 2006 01:27 AM

I've now read the Turing paper through. God, was he brilliant. In that paper he anticipates (or nearly misses anticipating, depending on how carefully you read) everything from chaos theory to genetic algorithms. His en passant theological argument is a precise and compelling piece of theology. He is witty, self-deprecating, and profoundly, awe-inspiringly rigorous.

He's also spectacularly wrong in his predictions. He expects it will be a simple matter to build a Turing-test-passing machine with 125MB of memory (and he thinks we could probably manage with 1.25MB) by 1999. Sigh.

He also is clearly talking about a computer passing the Turing test ("winning the imitation game") by trickery. Note that in response to "but a computer cannot make mistakes", he says:

It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy. The reply to this is simple. The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.

The question arises how we are to read "deliberately": whether the machine, sentient "on its own terms", jovially consents to play the imitation game (though it could just as well have some other, equally sentient-seeming, conversation in which it did not impersonate a human but rather spoke out of its own mode of existence, in terms we would understand); or whether, instead, the machine is designed *precisely* to win the imitation game, and thus optimized to give wrong mathematical answers insofar as that allows it to seem more human, because seeming human is its sine qua non.

I think Turing is talking about the latter; and I think he is making the extremely philosophically radical claim that *that* would constitute thinking.

I think the reading of Turing as saying the former is natural in the context of the mythic yearning, specifically as stabilized and institutionalized by science fiction as a genre.

It's not that I think the former is impossible and the latter possible. I think they may well both be possible.

It's not that I'm just saying the former would take more computing power than the latter.

It's that the difference between the two rather brings up the Wittgensteinian question. We (and Turing) specifically know exactly what we mean by a computer precisely impersonating a human. We have only a fuzzy, romantic notion of what it would mean for an AI to speak "out of its own mode of existence", such that we could speak of it as having *chosen* to play the imitation game.

And I think it's crucial to contrast the actual metric Turing imposed -- the computer fools us into thinking it is actually a human -- with how the Turing Test has vernacularly been regarded -- and what Ted proposed to me in email in 2003 -- which is something like "we talk to the computer (or alien) for whatever length of time, and ultimately conclude that it is sentient."

So there are three tests on the table:

1. Original Turing Test: The computer can impersonate a human being.

2. Generalized Turing Test: The computer can convince us that it is sentient.

3. Hedonic Turing Test: The computer is "fun to talk to" -- emphasis equally on FUN and TALK; that the computer affords us the exogamous pleasure of the feeling of conversing with an alien intelligence.

I think each of these suggests somewhat (maybe radically) different research approaches, actions, and futures.

#1 is the most well-defined and strictly measurable.

#3 is probably at least indirectly measurable, though as Matt points out, it is subjective and perhaps too low a bar (the AIs in some games may already be "fun to talk to", so that the word "fun" needs some work.)

#2 has the problem of being circular; it begs the question. But maybe that is also its strength. Maybe "sentient" or "intelligent" are ultimately provisional, political, contingent categories that resist definition or metrics. Perhaps like love or sovereignty, the issue can only ever be decided, in the real world, on a case-by-case basis.

There is an ironic parallel here between the history of AI and the choice of metrics. If #2 turns out to be the only ultimately useful metric, it will simply mean that the cognitive functions used to determine sentience are not those of formal, conscious, rule-based logic, the ones which Turing and the other early founders of AI were convinced would quickly solve problems like facial recognition and block-stacking. Rather, as with those problems, precise algorithms will turn out to be a dead end. Instead, we will use our massively parallel neural nets, overlay our impressions of the candidate-for-sentience with our memories and sensations, sniff, scratch, grunt, and say "yeah, that feels right".

(And I've sort of elided, here, the material operations of power. Viewed from another angle, as with sovereignty, acknowledgement of sentience will ultimately be conferred on those who can effectively secure it, in the world of power relations. But of course our ethical and aesthetic judgements do form a somewhat independent part of that world...)

Posted by: Benjamin Rosenbaum at October 5, 2006 12:10 PM

Here's another quote from Turing's paper, about the possible objection that machines could never enjoy strawberries and cream:

The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.

It's an indication of the times that Turing wasn't even going to compare interracial friendship with intraracial friendship. Nonetheless, what I read here is an attempt to avoid judgements based on appearances.

Think of Christy Brown, author of My Left Foot, who was dismissed by many as an imbecile because he couldn't move or make intelligible sounds, but eventually proved capable of communicating by writing. Think of Jean-Dominique Bauby, who wrote the memoir The Diving Bell and the Butterfly after a massive stroke left him with only the ability to blink one eye. It would have been very easy to dismiss him as brain-damaged. What convinced everyone that these men were sentient and thinking was their ability to converse.

The ability to converse is not a perfect criterion. One obvious shortcoming is that someone might not be able to blink even one eye, but still be sentient and thinking. But because it is such a high bar to clear, the ability to converse is a much less subjective criterion than, say, reading the look in a person's eyes. And that is why I feel comfortable in saying that those who would have dismissed Christy Brown and Jean-Dominique Bauby as mindless were actually wrong, instead of right according to their own criteria of sentience. The ability to converse, subjective as it is, is evidence of something important.

Posted by: Ted at October 5, 2006 02:03 PM

Yes, I agree with all that. I think that is the spirit in which Turing's essay is intended, and certainly the ability to converse is the primary way to judge sentience.

The devil is in the details. Tests #1, #2, and #3 above are all modes of measuring "the ability to converse", but I am arguing that they may lead to radically different results.

An AI is not an adult person -- with an education, a culture, a body -- who becomes disabled.

Turing uses the even closer example of Helen Keller -- who had less access to "education and culture" (though of course she was a human situated in a human culture -- wearing clothes, eating at a table). If we are to believe the play, even in her case an essential key to verbal communication was that she had already learned the word for "water" before becoming blind and deaf.

And even had she not, her brain was already human, thus intricately wired and balanced for communication with humans, in a human body. It was very easy for her teacher to tell what she was feeling when she was observing her -- when she was exhausted, when enraged, when excited, when confused -- because of the similarity of their bodies. It was also obvious to her teacher what kinds of things Keller would find pleasurable, what kinds of things displeasurable. It was obvious, in some sense, what Keller *wanted*.

An AI is a very different case. And to say "well, we'll just program it to want what we'd like it to want" would be, I think, to beg the question, and to ignore the history of the Goofy AI Models outlined above.

Imagine teaching Helen Keller to talk if she had never yet learned the word for water, or the idea of there being such a word, and her body was so unlike yours that you had no idea what she was feeling, or what she wanted. That, it seems to me, would be the best analogy.

In the case of someone who can only blink an eye, it's the *communication channel* which is damaged. The people on either side of that damaged channel know what the goal is, and share a common language. In the case of AI, the channel is okay, but what's on the other side is alien.

We learn language through human relations. We learn it as beings situated in culture. We learn it through our bodies.

AI will not just be a matter of figuring out how to let robots converse. It will be a matter of creating embodiments for the AIs, and a shared culture of the AIs and us, which allow for the possibility of conversation.

Posted by: Benjamin Rosenbaum at October 5, 2006 02:22 PM

On further consideration, I think that I’ll change my word cheating, which has a fairly game-specific usage which appears to apply narrowly to the Turing game, to something closer to fucking up, at least in the sense of fucking up and then claiming that the fuck-up was not, in fact, a fuck-up but was the goal all along and rearranging one’s perception of the universe to make that plausible. I think that’s an important part of conversation, of interaction, of learning. Turing test stuff.

I think the emerging anecdote about the previous conversation is relevant, here. Ted says A to Ben, and Ben says Not A. Three years later, Ben says A, and Ted says “I told you A three years ago”, and Ben says “Really?”, and Ted says “Yes, I remember it and you said Not A” and Ben is all “No way” and Ted is all “Way” and Ben is all “was I drunk?” and Ted goes “no, man, that was the pig” and they laugh and laugh and laugh. That’s what I call passing a Turing test. Ben, I am willing to bet five dollars that you exist (reserving the right to define exist however I want so that I am bound to win under any circumstances).

Can we create a thinking machine which remembers input without remembering that it was input, and which sorts through all the input that it doesn’t remember being put in, and when it is faced with an apparent contradiction between current output and previous output is able to rationalize that contradiction in a plausible manner, without sulking, and include that rationalization as new input? How much would it cost? When the attempts fail, how will we know how close they were to succeeding? When you tell it to design a bridge, and it writes a song, and it tells you that it knows the song sucks, but it keeps trying to fix it? How do you get that grant renewed?

Thanks,
-V.

Posted by: Vardibidian at October 5, 2006 02:37 PM

Did I say "eleven more messages"? Because when I wrote that, the little link said "Comments (22)." Whoops, I must have meant "thirty-nine."

I have no idea whether Vardibidian's post here was the first to pioneer that rhetorical form, but I acclaim him as the more proficient user and exemplar. It's probably a sign that this comment will be horribly scattered that I'm not trying to use it again now.

*clears throat*

Teapots: ' "So you adjust a little clock on the side and you... why are you looking at me like that?" '

Philosophical Zombies: ' "Communication is violence," confesses Zombie Derrida.'

Everything is true: 'From an esthetical point of view that favors simple explanations of everything, a set-up in which all possible universes are computed instead of just ours is more attractive. It is simpler.'


Back to Goofy AI Models 1 through 3 -- really just 2 and 3, since 1 *is* truly just goofy. Something you might be missing is that, much like the ultimately impractical mythic drive of putting humans on the moon, the mythic drive that led to GAIMs 2 and 3 produced a lot of good side research, and if you're going to argue against the pursuit of (or argue that people with limited finances won't pursue) Turing AI, you're tackling the bigger question of why pursue *any* pure research at all?

I suspect you're aware of this and aren't really making so strong a statement as that. Still, you wave away too easily the profound shift in capability offered by forking one's consciousness, or recording it and examining its processes non-destructively, intractable as it may be to intuitive readings. (From other threads, I know you would consider this a fundamental misunderstanding of self, and I grant you that -- but you're not making that particular argument here, you're making the argument that "we'll be math!" doesn't really get us much. Even a direct sensory/performative API would be phenomenally utopian.)

Regardless, you're right that GAIM3, as you frame it, has serious limitations that already make it worth the 'Goofy' moniker as far as strong AI is concerned. The problem is that the training process as you've described it suffers precisely the same flaw that you criticize in the Turing Test itself: it treats the whims of the human observer as the entirety of the problem domain, and it assumes humans that know best.

I'm hopelessly out-of-date with AI research as it stands now, but I've seen several papers from the early 2000s that deal with intrinsic goal-setting and reward. The most evocative of these, to me, are Schmidhuber's two constructions of "curiosity": a trio of neural networks in which the confidence network rewards the action network for actions that improve the world-modelling network (I, personally, think this could be simplified to two networks, but I'm not the one with a CS PhD) and a pair of competing agents that have to agree on all external decisions but which bet against each other about the future state of arbitrary cells in memory.
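
(For the flavor of it, here's my own drastically simplified sketch of the "learning progress" idea in Python -- a toy, not Schmidhuber's actual architecture: the intrinsic reward is the recent drop in the world model's prediction error, so the agent comes to prefer the part of its world it can actually learn about over the part that's just noise.)

```python
# Toy sketch of curiosity as "learning progress" (my own simplification).
# Intrinsic reward = recent drop in the world model's prediction error.
import random
from collections import defaultdict, deque

world_model = defaultdict(float)            # (state, action) -> predicted observation
recent_errors = {0: deque(maxlen=20), 1: deque(maxlen=20)}
action_value = [0.0, 0.0]                   # stand-in for the "action network"
chosen = [0, 0]

def environment(state, action):
    # Invented toy world: action 1 is predictable, action 0 is pure noise.
    return float(state + 1) if action == 1 else random.uniform(0, 10)

for step in range(500):
    state = step % 5
    if random.random() < 0.2:
        action = random.choice([0, 1])      # explore occasionally
    else:
        action = int(action_value[1] > action_value[0])
    chosen[action] += 1

    obs = environment(state, action)
    err = abs(world_model[(state, action)] - obs)
    world_model[(state, action)] += 0.3 * (obs - world_model[(state, action)])

    errs = recent_errors[action]
    old_avg = sum(errs) / len(errs) if errs else err
    errs.append(err)
    new_avg = sum(errs) / len(errs)
    learning_progress = old_avg - new_avg   # the intrinsic "curiosity" reward
    action_value[action] += 0.1 * (learning_progress - action_value[action])

print("times each action was chosen (noise, structure):", chosen)
```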

Speaking of betting, another intrinsic drive that would be trivial to implement is the monetary one. Getting volunteers on the internet to reward and punish your AI is vulnerable to vandalism. But if the 'reward' button is backed up by a PayPal account... In my utopian SF scenario, I'd see the early explosion of humanotropic AI dominated by beggarbots which will offer to do things that an internet-embedded problem solver does well in exchange for cash for hosting/processing/bandwidth. But who pays for *any* service on the internet these days? More realistically, I'd expect that the rebuttal to the "humanly conversing AI is not financially feasible" argument, if it comes, will come from spammers.

Posted by: Dan Percival at October 5, 2006 03:08 PM

Argh, teapots link didn't work. Probably my fault.

Trying again:

Teapots.

Posted by: Dan Percival at October 5, 2006 03:11 PM

Dan, thanks, this stuff is great.

I don't mean to sniff at pure research, nor at mythic drives as a motivator for it. Yes, lots of great stuff was and is still achieved in the pursuit of GAIMs 2-3 (and some great SF written in the shadow of GAIM1). I'm all for it. It just seems to me that to the extent we clarify the philosophical model we build up around the mythic drive, we will get more bang for the buck in terms of our research dollars.

And I'm certainly not arguing that we can't do really cool stuff with modelling the self, or translating portions of consciousness into software. (In fact, that's why, in Vinge's terms, the "IA" route to the singularity -- us getting increasingly cyborgy, prosthetic by fashionable prosthetic, until we are the AIs -- seems much more likely than Turing's "we build something distinct from us which thinks" model).

Though:
Even a direct sensory/performative API would be phenomenally utopian.
... or phenomenally dystopian.

Now Schmidhuber's project -- that's what I'm talking about! My critique is precisely that the Turing Test, as leitmotif for AI, "treats the whims of the human observer as the entirety of the problem domain, and it assumes humans that know best."

An intrinsically goal-setting system of Schmidhuberian design seems like it's going to get much more quickly towards being *truly* intelligent -- "I know it when I see it" intelligent -- *even* if it can't converse. And, you're right, like anything else, porn, spam, games, and scamming are the arenas where, if anywhere, there will be an intrinsic evolutionary drive towards humanlike AI.

But even there it depends on how hard it turns out to be.

I think possibly more likely is a spectrum of quasi-intelligent systems interacting with each other and with humans -- with the ones closest to humans (most Turing-test-compatible) being actually the "thinnest" (most specifically optimized for certain domains of conversation) and some of the least-Turing-test-passing ones (or subsystems), the most alien ones, being the ones we begin to suspect of sentience (read: begin to have trouble explaining the behavior of without invoking sentience).


Posted by: Benjamin Rosenbaum at October 5, 2006 03:39 PM

Everything is true: 'From an esthetical point of view that favors simple explanations of everything, a set-up in which all possible universes are computed instead of just ours is more attractive. It is simpler.'

ExACTly my point.

Posted by: Matt Hulan at October 5, 2006 03:45 PM

Great conversation! Sorry to show up late to it.

Re "fun to talk to": as someone noted, that's insufficient as a sole criterion; Eliza is fun to talk to in the short term, and Racter presumably much more so.

Re impersonation: starting with the gendered version of the impersonation game (man tries to convince interlocutor he's female) does seem to suggest that Turing may've been thinking of a sentient computer that is just putting on a show to convince the human interlocutor it's human. But I haven't yet read the paper in full, so the gendered version may be irrelevant to what he finally comes up with.

Posted by: Jed Hartman at October 12, 2006 11:33 AM