mindstalk: (Default)
[Edit: I found a transcript of a similar though short talk from Stanford 2000:
http://technetcast.ddj.com/tnc_play_stream.html?stream_id=256 ]

As mentioned, this happened yesterday at A-Life X. He said it was a longer version of the talk he gave at the Singularity Summit at Stanford in May, and now I understand why the extropians list wasn't in as much of a tizzy as I expected -- I'd thought he'd be more harshly skeptical.

Executive summary: he largely talked about Ray Kurzweil's books, and his own reaction to the ideas, and how it seems like a confusing (to him) mix of crackpottery and seriously referenced material, and he doesn't know quite what to make of it, but thinks it has to be taken seriously.

"Do I believe in the Singularity? I don't know. But the ideas aren't entirely cracked. And even if I say I think the Turing Test will be passed 100 years from now, or 500, that's just putting off the scenario." -- my paraphrase. 


Longer recap, aka dump of my notes:

Larry Yeager did the intro, in the style of Doug's intro of Dan Dennett a few months back, and using Doug's time-honored Egbert alter ego. "Today we'll hear from Egbert B. Gebstadter, author of Copper Silver Gold, The Mind's U (with Denial Dunnitt), Themamagical Memas, and Ambifoni (about sounds interpretable as multiple words)". Someone really cracked up at Ambifoni, I think Rob Goldstone. "Oh, but it turns out Egbert couldn't make it, he's at the B-Life C conference in Perth (those biologists, they've been at this for 100 years!) so instead we have Douglas..."

One serious thing out of all that: people often have trouble summing up GEB; Doug was quoted describing it as "How is it that animate beings come out of inanimate matter?" Also it was said that FCCA (Fluid Concepts and Creative Analogies) was the first book sold on Amazon. Anyway, on to the real talk:

Doug got sucked into all this in 1999, by Kurzweil's Age of Spiritual Machines, and Hans Moravec's Robot. He knew Moravec, and that Hans was head of CMU Robotics, and had heard of Ray's speech recognition work and such, so knew these were serious people. He said he felt he'd developed a fine sense for sense vs. nonsense, stimulated by the tons of mail he gets, much of it cracked. But (to insert my own metaphor) these books made his needle oscillate wildly.

He mentioned having been sensitized by his experience with David Cope's EMI (for my other readers: a program which can make new musical works in the style of, say, Chopin, working from Chopin's corpus, using what I recall as cut-and-paste plus some higher-level statistics; it did this disturbingly well, by Doug's lights, though I'm inclined to think it just shows musical syntax isn't as constrained as linguistic syntax). Then came a 1995 Scientific American article with a graph of peak chess performance over time. Humans were a flat line, while programs were a straight but sloped line. Grandmasters predicted the computer line would bend below the human line, but what actually happened was intersection on schedule. Ray was quoted as noting that exponential growth looks small until the last minute -- if lily pads are spreading in a pond, five generations before full coverage they have only 1/32 coverage, but it'd be a mistake to think they'll stay small.
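The lily-pad arithmetic is just repeated doubling; a throwaway sketch of it (my own, not from the talk):

```python
def coverage_before_full(generations: int) -> float:
    """Fraction of the pond covered N doubling-generations before full
    coverage, for lily pads that double their area each generation."""
    return 1 / 2 ** generations

# Five generations out, the pond is only 1/32 covered -- easy to dismiss.
print(coverage_before_full(5))  # 0.03125
```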

One argument against the Singularity case is S-shaped curves, where you have exponential growth for a while but it levels off. Doug showed a graph of Ray's "Law of Accelerating Returns", with a whole series of S-curves, which give overall long-term exponential growth (especially with the curves getting shorter and taller as time goes on). This probably supports Doug's point about having to take Kurzweil seriously: any obvious objection you think of has already been addressed.
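A toy version of that picture (my own parameters, not Ray's actual chart): stack logistic S-curves whose ceilings grow geometrically, and the envelope of the sum looks roughly like long-run exponential growth even though each individual paradigm saturates.

```python
import math

def logistic(t: float, midpoint: float, ceiling: float,
             steepness: float = 1.0) -> float:
    """One S-curve: a paradigm growing exponentially, then leveling off."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

def stacked_s_curves(t: float, paradigms: int = 6) -> float:
    """Sum successive S-curves with geometrically growing ceilings;
    each paradigm takes over roughly as the previous one saturates."""
    return sum(logistic(t, midpoint=10 * k, ceiling=2 ** k)
               for k in range(paradigms))
```

Each curve individually flattens, but the next, taller one is already ramping up, so the total keeps climbing.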

Doug read some pages from Kurzweil on nanotube research, which was more substantial than I'd heard of -- transistors, bulk growth, 2-4 years ago. We saw a MIPS (million instructions per second) per $1000 graph, with "the whole human race" (1e25 MIPS) for $1000 in 2060. Ray's extrapolation was exponential even on a log plot -- i.e., double-exponential growth -- which is just scary. The actual data points didn't deviate much from linear -- but I'd note that a linear extrapolation gives 1e25 MIPS/$1000 in 2120.
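To sanity-check that linear-extrapolation remark: a straight line on log axes is a constant doubling time, and under rough starting figures of my own (not Ray's data) it reaches 1e25 MIPS/$1000 decades after 2060.

```python
import math

def year_reaching(target_mips: float, start_year: int = 2006,
                  start_mips: float = 1e4,
                  doubling_years: float = 1.8) -> float:
    """Year a constant-doubling-time trend in MIPS/$1000 hits target_mips.
    The starting point and doubling time are rough assumptions for
    illustration, not fitted values."""
    doublings = math.log2(target_mips / start_mips)
    return start_year + doublings * doubling_years

# Under these assumptions the trend reaches 1e25 MIPS/$1000 only in the
# early 2130s, in the same ballpark as the 2120 figure above.
```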

Doug showed a bunch of cartoons of the evolution of life, presumably his own, making some points: first of all, that given life arising from dead matter at all ("ribo, ribo") or climbing onto land ("ribbet, ribbet"), Ray's claims of life changing substrate ("robot, robot") aren't that wacky or surprising. The next three cartoons were mocking Ray: a fox (a limit on exponential growth) thinking "rabbit, rabbit"; poking fun at Ray's Fantastic Voyage and its hope for immortality via a first bridge of taking two hundred and fifty pills a day ("Ray bet, Ray bet"); and a crash of cyberheaven ("reboot, reboot"). Three cartoons praising, three mocking -- deliberate ambiguity.

At the Singularity Summit the panelists had been asked to estimate when the Turing test would be passed, and whether this would be good or bad. Ray had said 2029, and good. Doug said he didn't know, and would regret any answer immediately, but since he was forced said in 100 years, then regretted it. He didn't mention good vs. bad, but noted to us that it kind of doesn't matter whether he said that or 500 or 1000 years; if you believe in the Turing test being passed at all, much of the Singularity essence follows from that. (But, I'd note, the near-term timeline and the uploading of human brains are important parts of many versions of the Singularity, including Kurzweil's.)

He belatedly defined what the Singularity *was*, without reference to Vernor Vinge. First reference was to von Neumann, who apparently said something about hitting a technological singularity, in the sense of a generally fast rate of change. I. J. Good got credited with the chain of intelligence increase: that making a machine smarter than humans would be the last invention we'd need to make, and it would spiral on from there.

Q&A session:

Q: Who should we trust? I've been asking around at this conference, and almost everyone is very skeptical. Should we trust the people actually close to the research, or think they're too short-sighted, or too frustrated by their own problems, to see the big picture?

A: Good question, and I don't know. I've asked John Holland, and he's very skeptical. But he didn't actually refute Ray's points, and just made irrelevant comments.

Q: Just because we get all that hardware power, if we do, doesn't mean we'll be able to program it intelligently.

A: Indeed, but Ray has a long chapter on just this, largely on brain modelling. Again, you can't catch him out that easily.

Q: If this Singularity is that close, you'd think we'd see some inkling of AI on supercomputers.

A: [Answer not recorded. But I'd say that the supply of supercomputers is limited, and AFAIK used more for weather or weapons simulations than AI -- we'll get no inkling if they're not being used for AI at all. And IBM's started that cortical simulation project.]

Date: 2006-06-08 18:24 (UTC) From: [identity profile] schenker28.livejournal.com
Thanks for the recap!! Interesting stuff. I haven't read the Singularity book -- have you?

Date: 2006-06-08 18:39 (UTC) From: [identity profile] mindstalk.livejournal.com
No, I haven't read any Kurzweil. I've been on the extropians list since 1993; reading some pop book on the Singularity didn't seem a high priority for me. Coals to Newcastle.

Date: 2006-06-08 22:11 (UTC) From: [identity profile] divineaspect.livejournal.com
It's a pretty good primer, both accessible to people for whom the concepts are new (including myself), and with enough historical evidence to back up the trends he's predicting.

Date: 2006-06-08 21:44 (UTC) From: [identity profile] natowelch.livejournal.com

"Q: If this Singularity is that close, you'd think we'd see some inkling of AI on supercomputers. "

from earlier:

"Ray was quoted as noting that exponential growth looks small until the last minute -- if lily pads are spreading in a pond, five generations before full coverage has only 1/32 coverage, but it'd be a mistake to think they'll stay small."

I don't even have to say anything, do I? ^_^

Date: 2006-06-08 23:09 (UTC) From: [identity profile] mindstalk.livejournal.com
Heh.

Of course, 1/32 is something, and more generally growth trends should be visible; lots of people will say there's been no progress in AI. Which is probably a crock, but I tend not to be as up on the cutting edge as I should be, and resort to mumbling about it being hard to tell the difference between 1e-9 and 1e-6 of a human.
