Tuesday, November 22, 2011

Does Siri have Buddha nature?

One of the key aspects of my livelihood involves, from time to time, acquiring consumer technology, both to see what I've (in part) wrought and to see what the general state of the art is. And so it is that I am in possession of an iPhone 4s. Siri "lives" amongst a bunch of computers somewhere - it will not disclose its "location" (if indeed it has a single such location). It sometimes gives seemingly witty answers to questions, and without a wireless connection it is at a loss to help. Sometimes Siri gets swamped with requests and asks to be left alone for a while.

There have been reports that people feel more attached to their iPhone 4s with Siri than to previous iPhones; apparently the voice interface lends some kind of "humanity" to the device. I have to say that I see this, though the genius of Siri is fundamentally that it breaks past the interface - clumsy for many - that a device in the form factor of a phone inevitably presents when limited to tactile input alone. (Too many reviews of the device have focused on the "gee, that's a threat to Google" angle and have completely ignored this point, which is actually far more important.)

Siri is admittedly pretty crude as a human simulacrum. But, as Kevin Drum notes, computers becoming as smart as people isn't that far away:

In 1950, true AI would look like a joke. A computer with a trillionth the processing power of the human brain is just a pile of vacuum tubes. In 1970, even though computers are 1000x faster, it's still a joke. In 1990 it's still a joke. In 2010 it's still a joke. In 2024, it's still a joke. A tenth of a human brain is about the processing power of a housecat. It's interesting, but no threat to actual humans.

So: joke, joke, joke, joke, joke. Then, suddenly, in the space of six years, we have computers with the processing power of a human brain. Kaboom.

Here's the point: technological progress has been exactly the same for the entire 80-year period. But in the early years, although the relative progress was high, the absolute progress was minute. Moving from a trillionth to a billionth is invisible on a human scale. So computers progressed from ballistics to accounting to word processing to speech recognition, and sure, it was all impressive, but at no point did it seem like we were actually making any serious progress toward true AI. And yet, we were.

Assuming that Moore's Law doesn't break down, this is how AI is going to happen. At some point, we're going to go from 10% of a human brain to 100% of a human brain, and it's going to seem like it came from nowhere. But it didn't. It will have taken 80 years, but only the final few years will really be visible. As inventions go, video games and iPhones may not seem as important as radios and air conditioners, but don't be fooled. As milestones, they're more important. Never make the mistake of thinking that just because the growing intelligence of computers has been largely invisible up to now that it hasn't happened. It has.
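
Drum's arithmetic is easy to check. Here's a minimal back-of-envelope sketch in Python, using his stylized assumptions - a trillionth of a brain's processing power in 1950, capability doubling roughly every two years; both are illustrative numbers, not measured figures:

    # Back-of-envelope version of Drum's curve; the constants are his
    # stylized assumptions, not measurements of actual hardware.
    DOUBLING_PERIOD_YEARS = 2
    START_YEAR, START_FRACTION = 1950, 1e-12  # a trillionth of a brain in 1950

    def brain_fraction(year):
        """Fraction of one human brain's processing power in a given year."""
        doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
        return START_FRACTION * 2 ** doublings

    for year in (1950, 1970, 1990, 2010, 2024, 2030):
        print(f"{year}: {brain_fraction(year):.0e} of a human brain")

Run it and you get exactly his progression: invisible fractions for six decades, about a tenth of a brain (the housecat) by 2024, and parity around 2030.
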
In fact, Drum is, if anything, pessimistic: the fact that computers - millions of them, in principle - can be networked means that "computers" becoming as smart as people is already somewhat near reality. That is, the computational power of many machines can already be leveraged to produce results that would be impossible for any idiot savant to solve in a lifetime.
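
In the same stylized units, the aggregation point is just multiplication - a quick sketch with a wholly hypothetical fleet size:

    # Same stylized units as above; this is raw throughput, nothing more.
    machines = 1_000_000   # hypothetical networked fleet, purely illustrative
    per_machine = 1e-3     # ~a thousandth of a brain each (the 2010-ish figure)
    print(f"fleet: ~{machines * per_machine:,.0f} human brains of raw throughput")

A million machines with a thousandth of a brain apiece is a thousand brains' worth of raw cycles.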

Does that imply, in any way, sentience?

The answer, at least from a scientific and/or phenomenological point of view, is still going to be "can't say." (Read Douglas Hofstadter and call me in the morning.1) And in a sense, it doesn't really matter, because our lives are still conditioned as they are; we are replete with senses, volition, consciousness, and such, and the fact that there are really smart computing machines out there doesn't diminish that.

Let's put environmental and social issues aside momentarily. (The damn things are quite inefficient relative to us organic computers, and such technology inevitably creates further class and social divisions.) Rather than ponder whether an intelligent computing agency could approach human sentience, it's more appropriate to consider what we are and what we can do, and perhaps to have a bit more humility about our diminished place in the ecosystem of existence, encroached upon by advances in evolutionary biology and animal sociology as well as by artificial intelligence.

That even with such smart machines there will still be things beyond their capability (for now) shouldn't be cause for a "human of the gaps" view of ourselves; we should, however, focus on all the stuff we can do in this space and time.

______
1. Note: the talk linked to above - well, I disagree with quite a bit of it, actually. Especially, if Hofstadter is representing Kurzweil correctly, the latter doesn't quite get what the genome actually is; in particular, the "information" contained in the genome isn't the totality of all the information present in a human being. They'd have done well to have Richard Dawkins at that talk, or better, somebody who understands genetics better than I do. (My grasp of genetics as information theory isn't all that grand.) And of course, like Hofstadter - who is way too polite, I think - I balk at the notion of an environment that can sustain an arbitrarily large amount of computing power, as well as a whole host of other Kurzweil bunk. But you probably knew that.
