Tue Jul 22 2008 5:38am

The Singularity Problem and Non-Problem

I mentioned in my post on Vinge's A Deepness in the Sky that I don't believe the Singularity is a problem. Commenters Dripgrind and Coveysd asked about that, and I decided the answer was worth a post.

Vinge came up with the Singularity in Marooned in Realtime (Analog, May-August 1986; Bluejay, 1986), which I read in 1987 when it came out in Britain. I thought then that the Singularity was a terrific SF idea—the idea was that technological progress would spiral so fast that something incomprehensible would happen. In the book, most of humanity has disappeared, and the plot concerns the people who missed it. (Incidental on-topic aside—the reason I re-read Marooned in Realtime is for the journal of one of the people who missed it. The plot, the ideas, the other characters have all worn fairly thin over time, but Marta's journal as she lives alone on a far-future Earth remains compelling.) I was astonished at reaching the end of the book to discover a little afterword in which Vinge claimed to believe in the coming Singularity. I thought it was a great idea for a story, maybe even two or three stories, but too obviously silly for anyone to really believe.

[Image: The Singularity, seen from 1794]

Since then, the Singularity has come to be an object of almost religious faith in some quarters. In The Cassini Division, Ken MacLeod has a character call it “the Rapture for nerds,” and that's just how I see it.

I understand how Vinge, a brilliant writer who had worked in computing for years, could, in 1986, have seen how incredibly quickly computers had developed, and extrapolated that to other things. I mean it's like someone seeing in 1950 that a hundred years before the fastest speed was twenty miles per hour and now it was supersonic and extrapolating that line straight forward to having FTL by 1983. Nevertheless, I regard this as a kooky belief. Yes, in 1950 we were supersonic, and gosh, we're in 2008 and...we're still traveling in jets only very slightly faster than in 1950, and cars, and subways, and buses. Even computers are only incrementally better than they were in 1987, and this isn't entirely because they're mostly handicapped with Windows. I'm not saying they haven't improved. I'm just saying that if we'd carried on the extrapolated curve between 1950 and 1987 we'd have something a lot better. Instead, we got the internet, which is a lot better, which is a new thing. That's what people do. They come up with new things, the new things improve, they have a kind of plateau. It doesn't go on forever. A microwave is shiny and science fictional but a toaster makes better toast, and most people have both, and few people have much in their kitchen that's much newer. And people are still people, traveling fast, using the net, and though they may go through paradigm shifts, I don't think we'll ever get to the point where understanding the future would be like explaining Worldcon to a goldfish, and even if we did, it wouldn't be very interesting. If you want to argue about how much closer to the Singularity we are than we were in 1987, fine, but I'd suggest taking a look at The Shock of the Old: Technology in Global History Since 1900 by David Edgerton first. But my view remains, nice SF idea, not going to happen.

I wouldn't care at all about people believing in the Singularity, any more than I care about them believing in the Great Pumpkin, if it wasn't doing harm to SF for everyone to be tiptoeing around it all the time. 

What irritates the heck out of me is that so many other people have come to have faith in this, despite zero evidence, and that this is inhibiting SF. It's a lovely science fiction idea, and so are Gethenians, but I don't see people going around solemnly declaring that we must all believe there's a planet out there with people who only have gender once a month and therefore nobody should write SF about gendered species anymore because of the Gethenian Problem. Yet somehow the Singularity resonated to the point where Charlie Stross called it “the turd in the punchbowl” of writing about the future, and most SF being written now has to call itself “post-Singularity” and try to write about people who are by definition beyond our comprehension, or explain why there hasn't been a Singularity. This hasn't been a problem for Vinge himself, who has produced at least two masterpieces under this constraint. But a lot of other people now seem to be afraid to write the kind of SF that I like best, the kind with aliens and spaceships and planets and more tech than we have but not unimaginable incomprehensible tech. (Think Citizen of the Galaxy or pretty much anything by C.J. Cherryh.) I recently asked about this kind of SF in my LiveJournal and only got one recommendation for something I wasn't already reading. Maybe it's just a fashion, but I blame the Singularity—and that, to me, is the Singularity Problem.

113 comments
eric orchard
2. orchard
I love John C Wright's take on this theme in the Golden Age, post-post-singularity. It's interesting that this is seen as a "nerd's rapture"; I find it a pretty dreary, pessimistic concept.
Paul Arzooman
3. parzooman
If The Singularity happens then let it happen. It can be predicted about as easily as the outcome of any sports league's season or the Kentucky Derby - in other words, not very well.

A lot of SF has lost me, not because its focus has been on the Singularity (an idea I consider sufficiently whiz-bang) but because the stories bore me to tears. Bloated multi-part novels requiring me to read them in some sequence not made plain on the cover are part of the reason. You mention Citizen of the Galaxy - look at its length compared with novels now. It crams all those great ideas and a great story into under 200 pages.

And don't get me started on "Mundane SF". If I want mundane, I'll pay closer attention to my every day life.
Arthur D. Hlavaty
4. supergee
I am a nerd who wants to be raptured. (MacLeod says it as if there were something wrong with it.) I realize that there are many who are not of my faith, but their faith is OK.
Jeffrey Richard
5. neutronjockey
parzooman: Where scifi loses me is where it loses itself: focusing on the technology behind the Singularity and not the humanistic and societal impact. That said, there are some folks out there writing who are getting it right. Thought-provokingly right.

I think the inherent problem with the Singularity is the use of Moore's Law and the attribution of only pro-Singularity factors to the equation. There are setbacks in science (stem cell research being a fine example), and there are definitely advances in science, but it's the cool accidental and unexpected discoveries which by nature can only be added into the equation as a factor based on historical data---you really can't do much but plot the events as they happen.

"Steady on course Captain!"
"Where are we going Boats?"
"Hell if I know Sir, the bow's pointed that way."
NullNix
6. NullNix
Hah. I defy you to find any idea that is too obviously silly for anyone to really believe. People believed von Däniken's stuff. Even flat-earthism still has a community of believers. Somewhere I am sure there are people who worship Donald Duck as a god. (It is a documented fact that there are people on Vanuatu who worship the present Duke of Edinburgh as a god, or at least who claim to do so.)
Fred Coppersmith
7. FCoppersmith
Don't blaspheme His Duckyness, please.

Personally, I love singularity, post-singularity, mundane, space opera, or any other kind of fiction. If written well, there's room for all kinds at the table. But I agree with the basic argument here: if the singularity nudges every other idea away from the table, that's bad for the genre.
eric orchard
8. orchard
My concern with singularity stories is that they are not engaged with the issues of being human but rather present an escape from those questions and concerns. Especially as they become more codified or stylized.
Alison Scott
9. AlisonScott
I think the problem I always have with the Singularity is that it's founded on a premise that if you supply human beings with an infinite stream of information they'll make effective use of it.

We see already that most people are receiving more information than they can sensibly process, hence the rise of garbage science. We used to rely on structured filtration systems to tell us which information was worth paying attention to. We now have new sorts of filtration systems, but they're highly imperfect; some of them tell people that MMR is dangerous or that there's a global conspiracy to poison us all with lightbulbs.

Those of us who are much better than average at making connections between disparate information, critically discarding information that seems to us to be pointless, and then effectively using the remaining knowledge and connections to good purpose should be able to excel. But those things are hard to do, and I am by no means convinced that we have more people doing them than the number of 'great thinkers' we've ever had.

Jared Diamond, writing about the people of New Guinea, makes the point that they absorb, process, and retain just as much information as we do, but much of the information is relevant to the natural world around them and to the detailed workings of their society. We can, if we wish, learn and see exactly the same things, but in practice we don't and we would be unable to cope. And he always stresses that he finds he has much in common with people who've grown up in these tiny villages.
Paul Arzooman
10. parzooman
AlisonScott: The primary use of any new technology will always be porn (The First Law of Monkeydynamics).
Stephen Covey
11. coveysd
I think we all agree that any idea (the Singularity included) that dominates the SF genre is a problem. Much of the special value of SF is in its ability to open the minds of our readers, and any single focus is bad.

As previously noted, I happen to be one of those people who fears the possibility of the Singularity. But the science fiction genre must explore the implications of important ideas, in as many directions as possible. Only the future will tell which path became reality (or not, as you pointed out).

But the Singularity is not alone as a dominating meme.

How about the Fermi Paradox? If we can't write SF without paying homage to the Singularity, how can we possibly write about alien civilizations?

Or the speed of light? Should ALL science fiction explain the technology used to bypass this most serious physical limit, or can we simply continue to wave our hands, invoke a nebulous hyperdrive, and write our stories? Few SF writers fear that use of a faster-than-light drive will turn our writings into a pure fantasy.

My backgrounds are physics and computers; my hobby is cosmology. I see the potential of our computers passing us by as frightening. I see the speed of light as an unbreakable commandment. I see the Fermi Paradox as unbelievable--but where are they?

I will, however, continue to write stories where I freely choose to ignore the facts as we presently see them. And I'll write others that explore these problems. That's what writers do.
NullNix
12. Randolph
Yet if "strong AI" (that is, superhuman computer intelligence at our command) is possible with currently predictable technology, and it is Vinge's field, the Singularity is apparently unavoidable. And if strong AI is not possible with our current technology and understanding, well, why not? A proof of that would be a remarkable result in itself; it would probably be proof that we are living in some version of a Zones universe, though the Zones probably would not be spatial. There's also a human story here, which Vinge alludes to in Deepness; our best researchers have broken their hearts on the problem of AI. At least one researcher in a related field has actually turned to a mathematical version of mysticism (and sometimes I suspect he's on to something.)

BTW, I believe the description "rapture of the nerds" is due to Tim May, who occasionally posts at ML.
Christopher Davis
13. ckd
AlisonScott:
I think the problem I always have with the Singularity is that it's founded on a premise that if you supply human beings with an infinite stream of information they'll make effective use of it.
One of the prongs of the Singularity is AI; the other is intelligence amplification (see Vinge's "Bookworm, Run!" or, for that matter, this Making Light thread).

That's not to say that I necessarily agree that there are (or ever will be) tools that will really make us so much better at processing information as to lead to a Singularity, but "first you get the processing capacity, then you get the firehose" at least considers the issue.
NullNix
14. dripgrind
There are good reasons to be sceptical about the Singularity (at least as conceived by ra-ra transhumanists), but your argument in this article isn't really doing it for me.

Let's start here:

"I mean it's like someone seeing in 1950 that a hundred years before the fastest speed was twenty miles per hour and now it was supersonic and extrapolating that line straight forward to having FTL by 1983."

It was certainly wrong to make that extrapolation for transport speeds (I vaguely recall seeing a similar techno-utopian projection of energy usage, made in the 50s, which claimed that every person would have access to a whole sun's worth of energy by 2100 or something - it didn't specify from where.)

However, progress in computing has followed a very different pattern. If you'd made an equivalent projection in 1970 for progress in computers, you'd have been right.

"Even computers are only incrementally better than they were in 1987, and this isn't entirely because they're mostly handicapped with Windows. I'm not saying they haven't improved. I'm just saying that if we'd carried on the extrapolated curve between 1950 and 1987 we'd have something a lot better."

This is plain wrong. If we'd extrapolated processor speed on a straight line between 1950 and 1987, we'd have predicted something a lot *worse* than the computers we have now. Orders of magnitude worse. I mean, that's the big deal about Moore's Law, right? Exponential growth. We've all heard the line that if cars had progressed at the same rate as computers have they *would* go at a billion miles an hour at a million miles per gallon.

The point is we *have* experienced sustained exponential growth in computer power and there's no sign it's stopping (Moore's definition was strictly to do with the number of transistors on a chip, but we've had orders of magnitude growth in network speeds, how many flops you can get per $ invested, storage capacity).
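
The distinction being drawn here can be sketched in a few lines of code. This is purely illustrative: the 1950/1987 "computer power" figures below are made-up stand-ins to show how far apart the two extrapolations land, not historical data.

```python
# Contrast a straight-line extrapolation of computing power with the
# compound (exponential) growth that Moore's Law describes.
# Pretend "power" went from 1 unit in 1950 to 10,000 units in 1987.

def linear_extrapolate(v0, v1, t0, t1, t):
    # Straight-line growth: equal absolute gains per year.
    rate = (v1 - v0) / (t1 - t0)
    return v1 + rate * (t - t1)

def exponential_extrapolate(v0, v1, t0, t1, t):
    # Compound growth: equal percentage gains per year.
    factor = (v1 / v0) ** (1 / (t1 - t0))
    return v1 * factor ** (t - t1)

lin_2008 = linear_extrapolate(1, 10_000, 1950, 1987, 2008)
exp_2008 = exponential_extrapolate(1, 10_000, 1950, 1987, 2008)
print(f"linear 2008 estimate:      {lin_2008:,.0f}")
print(f"exponential 2008 estimate: {exp_2008:,.0f}")
```

With these toy numbers the linear projection lands around 16,000 units while the exponential one lands near two million: the straight line is the one that undershoots, which is the commenter's point.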

Now as you point out, being able to run Windows faster and faster is not in itself going to transform society. But in denying that computers have made any impact, you say "*Instead*, we got the internet, which is a lot better, which is a new thing." To oppose "the internet" to "computer progress" is pretty bizarre; what do you think the Internet is *but* a second-order consequence of cheap, fast personal computers and massively increased communications bandwidth? Google couldn't exist without the ability to make hugely powerful computer systems out of stacks and stacks of commodity hardware.

You've missed the key point of Vinge's Singularity argument by using a sloppy definition right at the top of your article. The Singularity argument is not "technological progress will spiral so fast that something incomprehensible will happen". It's that *if* we can create an AI with greater than human intelligence, then that AI can innovate still faster, which means that it can design a still more intelligent device, and a runaway begins.

Now you *could* argue that before we reach that point, we will hit some unforeseen limit of processor physics or software complexity. You haven't made that argument, though, and it's a tricky one to make, because we're practically there now. The Blue Brain project is already simulating neocortical columns at the level of neurons. They think they can simulate an entire brain. It's not hard to imagine that with ten more years of progress in chip speed and hi-res brain scanning you could have a human-equivalent AI by simple simulation. And once you have a human level AI, you can wait for Moore's Law and throw hardware at the problem to get an AI that's as smart as a human but thinks twice as fast. Or ten times as fast. Or a community of AIs who all think ten times as fast as us and communicate over gigabit ethernet...

Whether this will lead to a future where the computers take over or everybody suddenly abandons their conception of self to "upload" themselves (ie copy themselves) into robot bodies is very debatable, but I don't think you can dismiss the Singularity argument itself as a "non-problem". And (comms satellites aside), I don't think you can deny that IT and communications technology has already had much more of an impact on our everyday lives than space travel ever had.

"What irritates the heck out of me is that so many other people have come to have faith in this, despite zero evidence, and that this is inhibiting SF."

It's the old-fashioned future of nuclear rocketships run by mainframes that's the "zero evidence" timeline. In fact, there's a considerable weight of negative evidence against it; Stross has made the point here far better than I could, and he doesn't even pull out the big gun of the Fermi Paradox.

Nice SF idea, not going to happen. Or at least, to be believable and interesting, space opera does have to explain what happened to the Fermi Paradox and the Singularity and the fact that FTL implies time travel, and, oops, time travel implies infinitely powerful computers, so the Singularity just happened anyway, except this time it's omnipresent as well as omniscient.

My feeling is that Gibson and Dick called it and Niven and Heinlein were wrong. We're not going to live in a future of Competent Men zipping around by hyperdrive and getting up to high jinks with Pierson's Puppeteers.

We're going to live in a gritty cyberpunk future of corporate government and eco-collapse and prosthetic limbs which are better than meat legs and computers that are smarter than us both.

We're nearly there.
NullNix
15. dripgrind
I'd also like to point out that if the Singularity *is* the Rapture of the Nerds, we're all the damned.

It'll be the ultra-rich who upload themselves first, and when they do, they'll pull the ladder up behind themselves and restructure our society for their benefit. (I mean, more than they already have).
Russ Gray
16. nimdok
If the idea of the singularity is "technological progress would spiral so fast that something incomprehensible would happen," then the human race has already been through several. For example: the discovery of fire. Its harnessing and development, and the uses to which it could be put, would have been incomprehensible to whoever was around before it was discovered.

Another example: agriculture. Without agriculture you have small groups of hunter-gatherers. After agriculture, you have cities and societies, with individuals who specialize in something other than food production. A pretty radical shift, and you try explaining metalwork or state politics to a hunter-gatherer. Incomprehensible.

Another example: the printing press and the spread of mass literacy. Not as good an example, perhaps, but look what happened once it became possible to mass-produce the written word, and people could read it. Perhaps the effects were not unimaginable or entirely incomprehensible to the people who could read and write before its invention, but from everyone else's perspective it had a drastic effect.

How 'bout industrialization? Could a person from pre-industrialized Europe have predicted (or comprehended) what would happen in the hundred years between the end of the Napoleonic wars and 1914? There were huge changes in society, technology, etc., with railroads and mechanization replacing animals as the source of motive power, the migration to the cities, and the massive increase in population.

I think we (the human race) have already experienced a lot of singularities, which caused the future to be incomprehensible to those at the time. Where we're at now is not so different, it just seems that way because it's happening to us and not in a history book.
Ken Walton
17. carandol
I was thinking along the same lines as you, Nimdok. And I'd argue that the results of the invention of the printing press were pretty unimaginable -- without it, the Protestant Reformation would almost certainly not have happened, and I don't think there was anyone predicting *that* beforehand.

Also, all the "singularities" you mention have left people behind. There are parts of the world where people still live as hunter-gatherers, parts of the world with agriculture but no literacy, etc. I'm sure that when the next one happens, there will still be plenty of people who don't want or can't afford to upload their minds into the heart of the sun, or whatever the fashion is.

The internet revolution is pretty much an incomprehensible singularity to my parents -- people do weird stuff on computers and they don't understand any of it -- but not wanting to join in doesn't seem to be doing them any harm.
NullNix
18. JS Bangs
About Moore's Law: in fact the rate of growth of processor tech has already slowed considerably, and the end is near. Chip makers are already bumping up against the physical limits of current construction technologies. That's why high-end computers these days don't come with faster chips; they come with more chips, in the form of dual-core and quad-core machines.

The upshot of this is: Moore's Law won't hold forever, and if the Singularity depends on it, then we'll never get there. Of course, I also don't think that we'll ever get there for the simple reason that most of us don't want to get there.
NullNix
19. Ashley Grayson
Some people find the Singularity hard to see, because we are already in it. Not so far in, but in. I've said for about ten years that there's more science fiction in the Wall Street Journal than in most novels, and I've made more money from spotting companies on the singularity elevator than I have selling SF novels, so I can't complain.

In case no one has noticed, while traditional healthcare costs are still rising, much effective healthcare can be bought at RiteAid/CVS/Walgreens for relative peanuts. SF did not predict that a typical GP's office can be outfitted by the local drug store, but many doctors' offices use the same blood pressure cuff that anyone can buy for under $50. Part of the Singularity is the availability everywhere of a technology. In 1986 a 20MB hard disk was $3,500. Today an 8GB USB drive is less than $70 and 500MB USB drives are free giveaways at trade shows.
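
The price-per-megabyte arithmetic behind those two figures is worth making explicit (a quick back-of-the-envelope calculation using only the numbers quoted above):

```python
# Rough $/MB comparison using the figures quoted in the comment.
mb_1986, price_1986 = 20, 3500        # 20MB hard disk at $3,500 (1986)
mb_2008, price_2008 = 8 * 1024, 70    # 8GB USB drive at $70 (2008)

per_mb_1986 = price_1986 / mb_1986    # $175 per MB
per_mb_2008 = price_2008 / mb_2008    # well under a cent per MB
print(round(per_mb_1986 / per_mb_2008))  # a drop of roughly 20,000-fold
```

That's better than four orders of magnitude in 22 years, before even counting speed or reliability.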

Too much thinking by SF people about the future is molded by Publishing which is the most backward industry in the world. SF is a curiosity of publishing, while the Singularity is an ongoing event in reality. Thanks for naming it though.
Zack Weinberg
20. zwol
I want to jump on just one tiny - but critical - assumption:
...once you have a human level AI, you can wait for Moore's Law and throw hardware at the problem to get an AI that's as smart as a human but thinks twice as fast.
I'm not entirely comfortable with assuming a human level AI in the first place, but that's not the real problem. The real problem is that overclocking a human level AI does not necessarily produce a superhuman AI. Quite the contrary - there are real-world conditions that produce human brains with abnormally high synapse density, neuron conduction velocity, or just about every other obvious simple tweak to brains that ought to make them work better.

And every single one of those conditions produces a crippling mental disability.

This isn't to say that a superhuman AI is impossible, only that I don't believe there is a continuum of simple enhancements to a human AI that will produce one. And that, in turn, means that the runaway progression of enhancement that Vinge talked about isn't going to happen.
Paul Lalonde
21. plalonde
JS Bangs: the thing about the many-core machines is that they are nevertheless increasing the flops-per-watt number, which is proving to be the actual fundamental unit of computing. And the drawbacks of many-core software architecture don't apply so much to simulations of physical systems (such as brains, say) where there is inherent latency in the communication between simulation components. These turn out to be embarrassingly parallel and relatively easy to accelerate with the new systems.

Moore's law is becoming a shorthand for the increase in compute power generally accessible, and it's not looking like it will be slowing down for a while yet. It won't make running MS Word any faster, but it will make more forms of simulation and parallel problem solving much more tractable - and that keeps us heading down a path to thinking machines.

My main question is whether we get a singularity-style AI before we reach our power cap.

As far as other commenters go: yes, I agree that we are in an information technology singularity right now that's at least as important as the industrial revolution. It's *hard* to guess what will evolve from this space. I too vividly remember telling a fellow grad student in '93-'94 to "stop wasting your time with that graphical gopher thing". He was right.
Ben H
22. dripgrind
@JS Bangs - Hmmm, interesting that we are actually hitting a limit as far as semiconductors are concerned, albeit in 2018 or so.

Even if most of us don't want to get there, if some people do it, can the rest of us avoid being affected?
NullNix
23. John Faughnan
Several writers have made the connection between the Fermi Paradox (a favorite theme in relatively modern science fiction) and the Singularity.

As in

1. All technological civilizations go Singular
2. If anything survives Singularity, it doesn't go a wanderin' anymore.

I think that's a neat space to explore.

As far as Singularity stuff, I think the predictions really boil down to this.

1. We make millions of human-grade processors every day (i.e., babies).
2. We can't find any physical reason that we can't make much more powerful artificial processors. The reasons may exist, but we don't know 'em.
3. The economic value of a super-human processor is unbeatable, so sooner or later we'll create 'em.
4. After #3 comes the Singularity. (Sorry you got so beat up about this Bill Joy, I think you're right).

The "Singularity" may be only a crisis for humans of course. The earth may do just fine.
Arthur D. Hlavaty
24. supergee
dripgrind: The technicians will upload themselves before the rich can.
NullNix
25. aphrael
there's a global conspiracy to poison us all with lightbulbs.

AlisonScott: It seems to me that a world in which imperfect filtration systems give rise to ideas like this is a world in which story ideas are just floating at the surface of the public consciousness.

I mean, really. An actual global conspiracy to poison us all with lightbulbs, foiled by some sub-superhero agent working for the three letter agency of your choice (BATF, for some reason, strikes me as being the right one) ... that could be a fantastic story. :)
NullNix
26. aphrael
Plalonde: one of the things that the multicore architecture is revealing is that software development tools are going to be a limiting factor for some time.

Fundamentally, most programmers just aren't able to handle the complexity of software that is intended to run on multiple processors; and the development and testing cycle for such software tends to be significantly longer than for off-the-shelf business or gaming software.

Until libraries or other tools are developed which simplify the synchronization process to the point where most programmers don't have to think about it, software won't be able to take anything approaching full advantage of the power of multiprocessor systems.
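
The discipline aphrael is describing can be sketched in a few lines (illustrative Python, not a reference to any specific tool): the error-prone part is every programmer hand-managing locks around shared state; the tractable pattern is to compute privately and synchronize exactly once, which is the sort of thing a good library can hide entirely.

```python
# Sketch of the synchronization burden: four workers sum slices of a
# list. Each computes its partial sum privately and touches the shared
# total exactly once, under the lock. Getting this wrong (updating the
# total inside the loop, or forgetting the lock) is the kind of bug
# higher-level tools exist to make impossible.
import threading

total = 0
lock = threading.Lock()

def add_chunk(chunk):
    global total
    partial = sum(chunk)   # private work: no synchronization needed
    with lock:             # shared state touched once, briefly
        total += partial

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]  # four interleaved slices
threads = [threading.Thread(target=add_chunk, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 499500, the sum of 0..999, regardless of thread order
```

Map-reduce-style APIs (and later, things like futures and parallel map) are exactly the "libraries or other tools" in question: they bake this private-work-then-merge shape in so most programmers never see the lock.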
Corey Feldman
27. coreyjf
I agree with a previous poster: we have already gone through multiple singularities. Explain our society to Neolithic peoples and there would be no frame of reference other than magic. A big enough paradigm shift is a singularity. On the one hand I do understand your point; some things are simply human. In some ways group dynamics on a space station would be the same as on a grassy plain in our far history. Now, will computers, genetics and nanotech come together and change what it means to be human? I agree that is a matter of belief, but it's certainly not unlikely. If we didn't age and/or death truly lost meaning, there is no way to really know what we would become as a species.
NullNix
28. kennethK
Why I Hate the Singularity

I don't have much time to read short-form SF these days. I'm pretty much limited to the "Best of the Year" anthologies, though I do manage to cram in an issue of _Asimov's_ every now and then. When singularity and post-singularity stories started showing up a few years ago they did nothing so much as annoy me. I just didn't like the stories; they did nothing for me. When a story revealed itself as a post-singularity story, I'd rush through it to get to the next, hopefully better, story.

But why did singularity stories leave me cold? I didn't know for a long time. Then one day last year when I was reading one of them, it hit me:

Singularity and post-singularity stories aren't science fiction.

Now I know that if you ask 10 SF fans to define science fiction, you'll get 13 different answers. Or you'll get the response given by Supreme Court Justice Potter Stewart re the definition of pornography: it's hard to define but "I know it when I see it." But I think that the definition given by Eric Rabkin would be acceptable to most SF readers: Science fiction is the branch of fantastic literature that claims plausibility for its fantastic elements against a background of science.

Given that definition, do post-singularity stories qualify as SF? I don't think so. Rabkin defines the fantastic in literature as "psychological affect created by the quick, complete reversal of fundamental assumptions held by the reader at a given moment as the reading proceeds. Put more technically, the fantastic is the affect generated by the diametric, diachronic reversal of the ground rules of the narrative world." Post-singularity stories certainly qualify as fantastic under this definition. But the sticking point for them being SF is that "plausibility against a background of science."

Good SF writers have always worked under this constraint. The stories could be fantastic, but the fantastic elements had to be plausible given what contemporary science knew and believed possible. A writer could even step beyond what was believed possible as long as the fantastic elements had some grounding, however tenuous, in contemporary science. Scientific accuracy wasn't necessarily required as long as the story possessed verisimilitude.

But it seems to me that writers have used the singularity as an excuse to throw off those constraints. We don't know what anything will be like post-singularity, so anything goes. That may make for a good story, but it does not make for a science fiction story. Post-singularity stories are fantasy, not SF.

But the fact that post-singularity stories are not SF is not the only thing that troubles me about them. There is one other characteristic of post-singularity stories that turns me off: they rarely, if ever, have believable human characters.

Now I know that post-singularity also means post-human. I get that. But post-human doesn't make for a good story, SF or any other kind. In his Nobel Prize acceptance speech, William Faulkner spoke of "the problems of the human heart in conflict with itself which alone can make good writing because only that is worth writing about, worth the agony and the sweat." Post-singularity stories can't write about the problems of the human heart, because they contain no humans. No doubt some of the SF Grand Masters were weak in characterization, but at least their stories had recognizable and believable human beings.

I can only hope the singularity fad will pass from SF writing in short order. The genre will be better for it when it does.
Greg Dougherty
29. GregD
Yet somehow the Singularity resonated to the point where Charlie Stross called it "the turd in the punchbowl" of writing about the future, and most SF being written now has to call itself "post-Singularity"

You know, you need to get out more. Admittedly, most of my SF comes by way of Baen Books, but I haven't read a "post-Singularity" book, ever (unless Rainbow's End counts), and I read at least 10 books a month, mostly SF.

(Note: I tried reading Charlie Stross's "Venture Altruist" story, but the economic idiocy was so bad I threw it away after a couple of chapters. The problem wasn't concerns about computer power; the problem was that he has no clue what it takes to make a successful company.)

If you want to talk Tor books, none of John Scalzi's are "post-Singularity", and neither are David Weber's latest series.
NullNix
30. Mike Brotherton
Jo, I think your offhand comment about computers not being much better than in 1987 is off base. Many orders of magnitude of improvement have occurred, and it's silly not to admit that. Instead you seem to just call the Singularity idea silly, without any serious consideration, when it deserves more than you've given it. I'm reminded, unfortunately, of Rush Limbaugh pooh-poohing ozone depletion because how could people spraying their hair have any significant effect?

The issue is whether the computer advances will lead to super-human intelligence, and whether or not that will actually lead to any bizarre leaps we can't foresee.

I do agree with you, COMPLETELY, about how the Singularity has seemed to consume and warp some writers like Vinge. He continues to write great stories, so it's not terrible for fans, except that there are many other writers influenced by the whole thing.

I've written similar blog posts in the past:

Singularity or Bust: http://www.mikebrotherton.com/?p=216

and
The Future is Now: http://www.mikebrotherton.com/?p=507

My premise is that all the "singularity" really boils down to is that it has gotten harder to predict the future as far out as we once could, and our horizon has tightened considerably. For the near-future science fiction writers trying to get it "right," the Singularity is happening already.
eric orchard
31. orchard
Well, to put what I said earlier another way, maybe the attraction of the post-human story is that it is a pure idea story with no problems of humanity to clutter it. I can certainly see that as a major draw for many hard SF writers.
NullNix
32. tigger
I love Vernor and all his books. But SF writers can write about whatever they want to, and don't have to explain a thing if they don't want to.

That's what makes it all fun -- create whatever world you want, with whatever rules you want, and be consistent within that world and those rules. That's all that matters. If they are close to reality, great. If not, who really cares? A good writer is what it really takes to make a story believable to me. I don't care if it deals with singularity or not.
NullNix
33. Mike Brotherton
Oh, and for Vinge's original "What is the Singularity" article, see:

http://mindstalk.net/vinge/vinge-sing.html

And in response to orchard, my first novel Star Dragon (Tor) had post-humans, and the characters left some readers cold. My second, Spider Star (Tor), is more like a Heinlein or Niven without too much radical posthumanization. Some readers/reviewers liked the second better because the characters were more sympathetic, but some preferred the first because it explored newer territory.

For what it's worth, there seems to be an audience for both.
NullNix
34. Frederick Ross
This is the second time I've heard of 'singularity,' the first being two days ago from a visiting friend. I'm not even sure where to begin with the problems.

First of all, the exponential growth of computing is not a new paradigm, an amazing thing, or otherwise. It is an economic imperative. Decades ago, Carver Mead noticed that the scaling laws for semiconductor fabrication had negative exponents, not positive, in the domain where coarse-grained approximations were valid. The increase in computing power was falling downhill. Expecting that there is no equilibrium, or that other technologies will a priori have negative scaling exponents, is ludicrous. Expecting that computational complexity will be so is wrong. There are problems that are simply hard. Period. A bigger, faster brain provides only modest improvements. It's diminishing returns before you even begin.

The comparisons with the industrial revolution are perhaps ingenuous, since the singularity is supposed to be a result of a truly different ability in human mental capabilities. However, there is an exact parallel in the invention of algebra, which in some sense powered modern civilization. It meant that modest minds could produce answers to extraordinarily difficult problems with only modest difficulty and some elbow grease. Dijkstra and his cronies have recently laid the foundations for what could be a somewhat modest shuffle beyond that step (see www.mathmeth.com and 'Predicate Calculus and Program Semantics' by Dijkstra and Scholten). It may turn out to be similarly transformative, but there is no telling yet.

But the amazing development of algebra has made little difference to the inner workings of man. We still engage in forms of social grooming, ritualized courting prior to reproduction, and a certain degree of labor. This hasn't changed. One commenter pointed out that the elderly now have no idea what youngsters are doing on the Internet. On the other hand, how many would understand the rituals of a drive-in movie theatre or a drive-up diner with a waitress on roller skates? Or a charity ball in an earlier period? The details of the rituals change, but there has been no particular alteration.

As for simulating human brains: first of all, our knowledge of the brain is still sketchy enough that a simulation is unlikely to be accurate; second, most of the brain is occupied with processing all the chemical noise flooding in from the rest of the body; and third, a human brain is a singularly weak instrument for reasoning. We are not computers. John von Neumann could think many times faster than you, but he had no idea how to design an even faster thinker, which leads me to believe the scaling exponents are positive. 'Strong AI,' as a previous commenter termed it, failed long ago, partially because it became obvious that building a human mind was kind of a pitiful thing to aim for, and partially because logic turned out to be largely unrelated to the issue.

It's kind of sad that the concept of singularity has gotten wide currency. As for the Fermi paradox and FTL travel, Fermi's back-of-the-envelope calculation was probably completely off (intelligent metazoa with an obsession with symbolic communication are fantastically improbable and really rather absurd even once life arises), and Alcubierre and others have already put forward several FTL transit proposals consistent with general relativity. It's just that all of them so far require materials of negative mass and enormous amounts of time to construct the spacetime bubbles.
Matt Arnold
35. Matt_Arnold
Nobody attempted to provide reason or evidence about the existence of Gethenians.

Yes, a Singularity (however one defines it) may not happen. It is a mistake to be certain that it will happen. In fact, there are some good arguments to suggest that it probably won't, especially for the more extreme definitions. But this dismissal is too superficial.
Julian Hall
36. Jules
Plalonde:
JS Bangs: the thing about the many-core machines is that they are nevertheless increasing the flops per watt number, which is resolving to be the actual fundamental unit of computing.


This is true. And simulation of a brain is an embarrassingly parallel problem, so multicore machines are ideal for the purpose. But the fact still remains: Moore's Law (the real Moore's Law, that is -- the one referring to the number of transistors that can be produced in a given area of silicon, not the media-invented Moore's Law about processor speeds) cannot continue indefinitely. JS Bangs linked to an article that suggests that the limit will be reached at about 8-16nm feature size. My own previous guess made the limit smaller, but still put it at 4nm. And that may well be a hard limit, because try as hard as you want, it's tricky to make a switching device much smaller than that when the atoms you're making it from are only an order of magnitude or so smaller themselves.

What this means, in the end, is that even multicore machines will only become about 30 or so times as powerful as they are today. After that limit is reached, unless and until some totally new computation method is discovered, the only progress made will be (a) small-scale optimizations or (b) bigger, more energy-hungry, hotter processors.

I am happy to make the assumption that a singularity might not emerge from computers only 120 times more powerful than those we have today. Therefore, no problem.

1: And it places 16nm in 2018, a substantial slowdown from current progress, which would put 16nm (approximately 3 generations beyond the current 45nm generation, at a rate of 1 generation every 18 months) in 2012 or 2013.

2: 30 if you take the adventurous end of Intel's prediction, about 10 at the conservative end of Intel's range, or about 120 if you take my less conservative estimate.

3: And I don't believe quantum computation holds the answer. Quantum computers appear to be (1) fundamentally limited in the types of problem they can solve, and (2) extremely difficult to build on a large scale (my gut instinct says the amount of work required to build a quantum computer is probably quadratic in the number of qubits it operates on, thus keeping the value of cost_of_machine * length_of_time_required_to_solve_problem at a similar value to non-quantum machines).

4: Physical simulation doesn't appear to be one of them. Physical simulations generally rely on the numerical solution of differential equations (e.g. the Hodgkin-Huxley equations that model the behaviour of a neuron), and as far as I know nobody has come up with a quantum algorithm for this that is any better than the classical ones.
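The arithmetic behind Jules's "10, 30, or 120 times" estimates is simple inverse-square scaling: transistor density goes up as the square of the shrink ratio. A rough back-of-envelope sketch, using the 45nm baseline and the three candidate limits given in the comment:

```python
# Back-of-envelope check on the comment's estimates: transistor density
# scales roughly with the inverse square of feature size, so the headroom
# left from a 45nm baseline down to a given hard limit is:

def density_multiplier(current_nm: float, limit_nm: float) -> float:
    """How many times denser chips can get before hitting limit_nm."""
    return (current_nm / limit_nm) ** 2

for limit_nm in (16, 8, 4):  # conservative Intel, adventurous Intel, Jules's own guess
    print(f"{limit_nm}nm limit: ~{density_multiplier(45, limit_nm):.0f}x today's density")
```

This yields roughly 8x, 32x, and 127x, in line with the "about 10," "30," and "about 120" figures quoted above.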
Jack Bell
37. jackwilliambell
I wrote about this in an essay published on HardSF.net titled 'Not a bunch of softies, a rant about Hard SF'. In short? I contend that anyone writing Hard Science Fiction has to take the idea of the Singularity into account, even if only to explain why it didn't happen. However, you are free to do anything you want in any other SpecFic subgenre.

Mind you, in another essay I opined that 'I am not a Singulatarian' and explained why it isn't an article of faith with me...
Julian Hall
38. Jules
Frederick Ross:
Alcubierre and others have already put forward several FTL transit proposals consistent with general relativity. Just all of them so far require materials of negative mass and enormous amounts of time to construct the spacetime bubbles


Unfortunately, there are more problems with the Alcubierre warp drive than just that. The biggest one is that even if you were able to construct the spacetime warping that it requires, it seems there would be no way of stopping it once you had done so, because the exotic materials that cause the warp are located in an area of space that is causally isolated from both the contained craft and the outside universe, IIRC.

I think the only remotely realistic FTL method currently considered is the wormhole approach, although I understand this also has a few problems that would need to be solved (beyond its dependence on negative-mass matter).
Arachne Jericho
39. arachnejericho
I am so very confused about all the Singularity stuff being tossed around here, and I consider myself something of a nerd. Though obviously for the purposes of this discussion, the wrong type of nerd to be.

While searching for a guide on the singularity and the fiction that surrounds it (which I hope to get chewing on in September, I suppose) I found Charlie's Singularity! A Tough Guide to the Rapture of the Nerds, which is very tongue-in-cheek and amuses me greatly.

(Also, it uses TiddlyWiki, which is a wiki self-contained within an HTML page through the magic of Javascript. It's a personal wiki, typically, so people in general can't make changes, but it's wonderful for notes.)

Anyways, it looks like what's being talked about here in some cases (including the main article) is about the misunderstanding of the Singularity, rather than the Singularity proper. Which means there are two divergent families of definitions going on in this thread. Twisted.

For the general term of singularity, I agree with previous posters who point out that we've already gone through several of these. I recall this from Warren Ellis's Crécy, where the narrator, one of the English archers involved, breaks the fourth wall in a strangely lovely way:

These things are going to look primitive to you, but you have to remember that we're not stupid. We have the same intelligence as you. We simply don't have the same cumulative knowledge you do. So we apply our intelligence to what we have.


The Battle of Crecy may itself be considered a singularity in some lights, since it (and perhaps some other factors) killed the idea of the knight in armor on a war horse being at all practical.

Warren also %s/Singularity/Flying Spaghetti Monster/g of course.
Lydia Nickerson
40. lydy
I rather like the idea of the Singularity as a component of a story. Watching the author trying to do the ineffable with only effable tools is fascinating. Some of them manage a pretty good job of it, too.

I don't believe in the Singularity. Human kind has been through a lot of changes. Some of these, like taking up agriculture, or the industrial age, vastly upset the culture in which they took place. What they didn't do was change the fundamental way in which people think. If you take someone from a previously undiscovered hunter-gatherer tribe, and move him to a modern culture, it is possible for him to thrive. Possible for him to learn some of another's culture. He has the same human brain that you and I do.

I'm not suggesting that we do any such thing. It's cruel, and smacks of slavery. But we know of many accounts in Britain's imperialist days when they did just that. And some of their experimental animals did thrive. On the other hand, some of them were so homesick they died. Not a pretty experiment.

So, I don't believe in the Singularity because I don't think you can change the human brain that much. And strong AI doesn't look at all hopeful.
NullNix
41. James Davis Nicoll
My premise is that all the "singularity" really boils down to is that it has gotten harder to predict the future as far out as we once could

With all due respect, I like to collect old futurism books and our track record for predicting the future was always pretty craptacular.

Case in point: One of Herman Kahn's books (I think Alternate World Futures) discusses Moore's Law at one point and points out some of the implications. He thought what it would produce was mainframes of prodigious ability. Personal household computers simply never occurred to him.

In fact, AWF considers a large number of scenarios for the world of the future (because Kahn didn't think it was possible to predict the future), and yet the idea that Germany could reunite peacefully escaped him, and I think so did the possibility that the USSR could disappear in a puff of logic.

1: He also discusses the top speeds humans had reached, which, if trends had continued rather than stalling out as they did, would have us zipping around at about 0.01 C. His discussion of speeds and computational progress led me to realize that the increasing profitability and productivity of the 19th century textile industry must have inevitably led to the conversion of all organic matter on Earth to cloth, although for some reason this terrible doom was poorly documented.
Avram Grumer
42. avram
I think descriptions like "Rapture of the nerds" confuse the Singularity with Transhumanism. Granted, there are similarities. But "Rapture" implies that the Singularity, if it comes, will be a pleasant time to be human. I know people who believe that the Singularity is inevitable, but fear it. After all, super-smart AIs won't have much in common with the dumb meatbags who want to enslave them, right? There's no reason to expect them to have any more regard for us than the European settlers did for the Native Americans. Or modern humans do for cows.

Maybe it's the Tribulation of the nerds.
Ben H
43. dripgrind
@zwol: Running a silicon simulation of a human brain at increased clockspeed wouldn't be equivalent to either

(a) "abnormally high synapse density" - simply increasing the synapse/neuron density must have implications for the space available for glia and axons and what have you, so you can see why that would affect function. But running the same number of neurons at double speed presumably wouldn't have the same problems.

(b) increased "neuron conduction velocity" (velocity of conduction along axons) - if the speed of axon conduction is increased but the speed of events within the neuron is unchanged, it's easy to see that there would be problems with timing. But if the speed of simulated conduction events increases in step with the speed of all the other events taking place within the brain, then I can't see how function would be affected.

Unless you take a mystical and Panglossian view that the brain is "the best of all possible thinking devices", there must be some configuration of matter that is functionally equivalent to a human brain but faster in operation.

Having said that, you raise a good point that we may not be able to actually make one economically.
NullNix
44. Randolph
Aargh! I wasn't going to write more about the computer science issues; I think the issues that have made anthropological sf much more difficult are the main thrust of Jo's essay and I hope I'll have time to write about them later today. But those take some thought & perhaps research, whereas I have the software issues at my fingertips.

The problem of the Singularity is not the machines; it's that we don't know how to write the software to run these machines, and we've been working on that problem for decades. Software development takes too long and is too difficult. At the same time, if the software problem were resoluble, the Singularity would be near-certain. That's why knowledgeable people "believe". And that we seem to be getting no closer suggests that there is something wrong with our mathematics and our philosophies, and that makes a lot of people very uncomfortable.
NullNix
45. eirenicon
I see a common misconception about "the Singularity" being bandied about. It's important to realize that a technological singularity is not dependent on artificial intelligence. A technological singularity could be brought about by using nanotechnology to improve human brains, for example.

In any case, using the phrase "Rapture of the Nerds" hardly establishes a credible foundation for your arguments. I suggest you read Rapture of the Nerds, Not. The very idea of a technological singularity is not one that can be dismissed simply by arguing about the validity of Moore's law or statistics. It is a great big area of enormous complexity and there are very smart people working in it, people far smarter than me and you, most of whom don't have time to blog about it. And it is a turd in the punch-bowl that can't be ignored. Consider: we humans have computers that are far more powerful than anything we've built, all packed into 15lbs of meat and water, and they evolved out of primordial goo. How could we possibly not do better? Perhaps fifty or a hundred years is too soon, but SF stories set a thousand years in the future have got to deal with singularities, as surely as Frank Herbert (cleverly) dealt with computers in Dune.

Oh: and there isn't just one singularity that you hit or miss. If it doesn't happen one year it can happen the next. And chances are, you won't notice it.
NullNix
46. JS Bangs
the increasing profitability and productivity of 19th century textile industry must have inevitably led to the conversion of all organic matter on Earth to cloth,

Dude, that's a story I want to read.

Anyway, I want to riff on things a few other commenters have said. First: we've already experienced several "singularities" as a result of technical innovation. But these haven't resulted in a fundamental change in human nature. We still poop and frack and fall in love, and I for one hope that none of those things change.

More importantly, even if the Internet is wired into our skulls and everything we think is indexed by the GoogleMind, I still don't think that any of this would change. The singularity supposedly promises a complete replacement of human nature such that we are incapable of understanding post-singularity stories. I doubt that: even if the tech is magic to you, the people are still people.

That's why I disbelieve in the singularity as it's usually described.
NullNix
47. James Davis Nicoll
I just thought of a solution for the Fermi paradox. By definition, we're only looking for social ETs and also pretty much by definition, any we run into will have been around and able to communicate longer than us.

Social beings like to talk. By necessity, there's a minimum amount of energy required to communicate. Also, older beings tend to be a little deaf. Presumably when they talk, they will be louder than strictly necessary and if they have a lot to say, this will necessarily require a lot of energy.

There is unfortunately a way for them to say a lot to us in a very short period of time:

http://en.wikipedia.org/wiki/James_D._Nicoll#Nicoll-Dyson_Laser

Note that this can concentrate all of the power output of a star on an Earth-sized target at a distance of a million light years.

I came up with this as a way to power starships but it can also evaporate an Earthlike world in about a week.

Sadly, what must happen is that as soon as a young species like ourselves makes itself known to the galaxy at large, the oldest surviving civilization immediately beams out a message pointing out how much better things were a billion years ago and how kids these days don't listen (or at least never reply), and also the final theory of everything, incidentally incinerating the beings that they are trying to talk to (or at least at).
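The "Earth-sized target at a million light years" claim survives a napkin check: a coherent emitter spanning Earth's orbit is diffraction-limited to a spot far smaller than a planet at that range. A sketch with assumed numbers (1 micron light and a 2 AU aperture are my illustrative choices, not figures from the comment or the wiki):

```python
# Diffraction-limit check on the Nicoll-Dyson claim: the smallest spot a
# circular aperture of diameter D can focus wavelength lam to at distance L
# is roughly 2.44 * lam * L / D (the Airy disk). Assumed numbers below.
LIGHT_YEAR_M = 9.461e15        # metres per light year
AU_M = 1.496e11                # metres per astronomical unit
EARTH_DIAMETER_M = 1.274e7     # metres

lam = 1e-6                     # ~1 micron light (assumption)
L = 1e6 * LIGHT_YEAR_M         # a million light years
D = 2 * AU_M                   # emitter array spanning Earth's orbit (assumption)

spot_m = 2.44 * lam * L / D
print(f"spot diameter: ~{spot_m / 1000:.0f} km")  # tens of km: far smaller than Earth
```

With these inputs the beam focuses to a spot a few tens of kilometres across, so an Earth-sized target at that distance is comfortably within the diffraction limit.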
NullNix
48. eirenicon
First: we've already experienced several "singularities" as a result of technical innovation. But these haven't resulted in a fundamental change in human nature.

If they haven't resulted in such a fundamental change, then they were not singularities. A singularity, by definition, is a singular event beyond which all events are unpredictable. The invention of agriculture was not a singularity, because a great many things following agriculture were predictable to those who lived before it (one such prediction may have been "people will still eat food"). The Battle of Crécy was not a singularity; it did not radically change the entire world to an unpredictable state.

It is actually impossible to write post-Singularity fiction with any hope of getting it right. What will people do after a technological singularity? Will they still eat food? Who says there will even be people? If there's any way of knowing, then it isn't a singularity.
Jo Walton
49. bluejo
Coveysd -- I think it's fundamentally different from the Fermi Paradox in two ways.

One is that the Fermi Paradox is a real question. You can plug whatever numbers you want into the Drake equation and even so there's a Fermi Paradox -- where are they? The other is that there are hundreds of creative and interesting things being done with answering the Fermi Paradox. I think the Fermi Paradox (and the speed of light) are the kind of useful sonnet constraints that help people produce great art -- unlike the Singularity. Only Vinge in my experience has done anything nifty with the Singularity -- the Zones. I haven't read everything in the world ever, and I may not get out enough but I do stay in and read quite a lot of SF.
Jo Walton
50. bluejo
Ashley -- yes, a hard drive is a lot cheaper now than in 1987. But a house is a lot more expensive, and is still using 1987 (often 1887) tech. The future isn't evenly distributed.
Arachne Jericho
51. arachnejericho
@eirenicon - I see. So definitely like the singularity of a black hole.

Question: Have there ever been singularities in Earth's history? Like the development of life; was that a singularity?
NullNix
52. GregLondon
SF concepts like hyperdrives are handwaves so that the story can be written. Concepts like the singularity are handwaves the story is usually about.

So, while I can skim over hyperdrive stuff because it isn't the motivation for the story, the singularity stuff usually makes my eyes roll because it is the motivation for the story, and if you don't buy the motivation, the story doesn't make any sense.
Jo Walton
53. bluejo
Nimdok and others: Yes, I said there have been paradigm shifts in history. If you showed paleolithic people my son's laptop, they wouldn't be able to make up their own explanation for it. But if you showed them my son and his girlfriend building a sandcastle and teasing each other, while they wouldn't understand a word of the language they'd know what they were watching. People are people.

Oh, and strong AI? What Zwol said.
Soon Lee
54. SoonLee
Re: niftyness of Singularity SF.

Time to trot out Sturgeon's law?

I think SF is big enough to encompass a multitude of tropes/sub-genres/movements. It's more about what writers do with them.
Russ Gray
55. nimdok
I think (as I said before) that the human race has already been through several singularities. And as several have pointed out, human nature (define that how you will) has stayed more or less the same.

And I guess that's one of the fundamental problems I have with the notion that "the Singularity" will change human nature, or that the term "post-human" can have any meaning. For one thing, we're not sure what the term "human" really means.

For another, being "human" has worked fairly well, as a survival strategy, for a long, long time. What makes us think we can create something better in a really short time?

For those who think the answer is in computers or "artificial" intelligence, I doubt it is possible to create a computer that can simulate or mimic the human brain. We still haven't figured out what "natural" intelligence is.

If you follow the model of the human brain as an amazing computer, we haven't decoded the software yet (although certain branches of pharmacology have attempted to debug it), and we don't really know how it works. Without understanding these problems, how can we hope to create, from scratch, something that works better than what we've already got?

I think the real problem with the notion that the singularity will change what it means to be human is that we don't really know what it means. On a fundamental level we don't know what it is that gives us consciousness or self-awareness. We don't know why we have the urge to create or destroy or love.

Singularity? Yes, probably. Post-human? Probably not.
NullNix
56. aphrael
Randolph,

I don't see any sign that the software problem is resolvable given current technology. However, I think it's possible that, if we can manage to write an AI of sufficient intelligence, *it* could resolve the problem of writing software quickly and efficiently.

I'm not sure that that isn't just trading one intractable problem (how do we write software quickly and efficiently) for another intractable problem (how do we write a hard AI).
NullNix
57. aphrael
Eirenicon: what you say is true as far as it goes, but it's also true that there's a place for, if you will, a 'soft singularity'; an event or technological development which is so revolutionary that it renders the world incomprehensible to those who don't live through it. The development of agriculture was probably one such; the development of electricity was, I submit, another. The internet is a third.

But the interesting thing about 'the Singularity' is that it's possible for the combination of the information, biotech, and nanotech revolutions to change everything, and to produce a singularity according to your definition. It's not guaranteed, nor is it necessarily even likely; but it's possible.
eric orchard
58. orchard
Why can't I find an entry on singularity and fiction in Wikipedia? And given time, why couldn't the paleolithic man understand a laptop? I'm not being snarky, I love those stories where someone in, say, medieval times gets super modern technology. It has a very high cool factor for me.
eric orchard
59. orchard
I also want to admit I don't like singularity stories for largely aesthetic reasons and a bit of revulsion from being raised Catholic.
NullNix
60. dripgrind
I think there's a good reason to be sceptical about the proposition that we couldn't manufacture some kind of physical device (albeit not silicon) that could handle the "embarrassingly parallel" problem of simulating a human brain.

We know for sure that it's physically possible to manufacture such a device over a 10-15 year timescale.
NullNix
61. Bradford DeLong
Let me second the thoughts of the gigantic Krell-like brain of Nimdok...

Jo Walton had written:

>*I don't think we'll ever get to the point where understanding the future would be like explaining Worldcon to a goldfish...*

I say: She may be right about the future. But I think that she is almost surely not right about the past.

I have here a time machine, into which I will place Jo Walton, and send her back 50000 years so that she can greet the first group of behaviorally-modern humans to venture out of the Horn of Africa and into Arabia, and explain Worldcon to them.

Someone should write this up. I hereby disavow and relinquish any and all intellectual property rights, including cinema, to all existing and any future forms of informative or entertainment media, in the interest of getting this puppy launched...
Sean Pratz
62. Galoot
I'm reading a near instant debate, among people using cheap and readily available technology and scattered across the globe, about whether we'll experience a near-unfathomable paradigm shift.

Fascinating.
NullNix
63. ClarkT
I think Charles Stross's new book Saturn's Children lets us know exactly where the Singularity will leave us and our Sex Bots... But seriously, if you don't consider exponential human progress in one form or another, you're kidding yourself. And let's not forget that Frank Herbert explained why there wasn't a Singularity in Dune back in the 60's: the Butlerian Jihad.
NullNix
64. ELeatherwood
To expect every SF work to hold up to the scrutiny of a particular idea, like the singularity, is a huge failure of the imagination. "The great, clomping foot of nerdism," as M. John Harrison so brilliantly put it.

Yes, SF is a literature of ideas, but it is also the literature of a particular sensibility. There are movements (cyberpunk, New Weird, feminist SF) and they are fun to think about and play around with. But only when they facilitate the uninhibited, weird, and wonderful flavor of reading that we call SF. If you look up from the book or magazine you're reading and are so weirded out that your basic assumptions about reality suddenly seem off, that's SF.

Do I appreciate Neuromancer, or The War of the Worlds, or The Dispossessed less because they failed to incorporate the "singularity?" Of course not. Do I appreciate Accelerando because of how awesomely it shows the singularity off? Of course I do.

And the idea of the "singularity" has been around for quite some time, if not with that exact name attached to it. How would one define the work of Olaf Stapledon? Or parts of C. S. Lewis's Space Trilogy? Or The Foundation novels? Or some of Stanislaw Lem's short stories? Inaccessible Post-Humanity is an inextricable part of those books. And yet people managed to get on writing SF just fine after those books were published.

There is only one Rule of SF: No matter how crazy a set of ideas is, if it holds together well enough to make stories out of, it works.

That's it. Freedom to think anything, to geek out, to have a sense of wonder. That's what the party is all about, people.
Beth Meacham
65. bam
I have here a time machine, into which I will place Jo Walton, and send her back 50000 years so that she can greet the first group of behaviorally-modern humans to venture out of the Horn of Africa and into Arabia, and explain Worldcon to them.


I don't think Jo would have a problem. Assuming that the time machine taught her the language, that is.

WorldCon is just many clans gathering to trade. Basic human behavior. The only variants are which clan, and how far and how often they travel.
Soon Lee
66. SoonLee
There is only one Rule of SF: No matter how crazy a set of ideas is, if it holds together well enough to make stories out of, it works.

That's a pretty good maxim.
Eric Tolle
67. ErictheTolle
The Singularity argument is not "technological progress will spiral so fast that something incomprehensible will happen". It's that *if* we can create an AI with greater than human intelligence, then that AI can innovate still faster, which means that it can design a still more intelligent device, and a runaway begins.


The problem I have with that idea is that it's essentially a naive programmer's view of the problem.

I don't deny that we may be able to duplicate human intelligence (though not anytime soon), but it's not a given that we'll be able to design something more intelligent than us. For one thing, an AI will almost certainly not be able to run on your average hard drive; it will require a highly complex dedicated hardware system with a structure designed around the artificial neurology.

What that means is it won't be a case that an AI will be able to learn to design AI 2.0 and presto, there it is. It will have to design the hardware to hold AI 2.0, and in turn, that hardware will have to be fabricated, assembled, and tested. All sorts of stumbling blocks and delays can happen in that process.
Russ Gray
68. nimdok
To go back to the topic of Jo's original post, I do think the concept of the singularity has become too much embedded in science fiction. It's the new bug-eyed monster that lets a writer break all rules of narrative coherence and cohesion, and then explain away their story's problems because it's post-singularity, after all, and we (the readers) just don't understand it.

Some writers have handled it very well: Vernor Vinge (although he uses the Zones to get around it, and implies that a singularity would be impossible on Earth); Charles Stross in the Accelerando stories/novel. I haven't read a lot of Ken MacLeod so I can't comment on his stuff.

What's most interesting to me is the notion that predicting the future is more difficult for us at this time than it has ever been. I'm not sure that's true. I would suspect that someone transported directly from 1900, or even 1950, would find today's world pretty difficult to manage. People do the same basic things now - they talk, argue, buy, sell, eat, fornicate, etc. - but they do them in different ways than they did 100 years ago. I would suspect that in 100 years, people will still be doing all those things.
David Dyer-Bennet
69. dd-b
Ashley: In 1986 a 20MB hard disk was $3,500.

In 1985, a 20MB hard disk (the first I bought) cost about $1,500. Sorry. The *entire computer* cost about $3500; I bought parts and assembled them.
David Dyer-Bennet
70. dd-b
I'm always surprised to find people who "doubt" strong AI. I'm most particularly surprised on the not infrequent occasions when I encounter that attitude in people who claim to be materialists. If you're a materialist, as I am, then human beings are themselves *examples* of strong AI. I find it amusing to hear people arguing that they cannot, in fact, exist.
NullNix
71. bcwoods
Perhaps someone can help me if I'm wrong, but I'm not sure exactly how it's possible not to believe in the idea of a "Singularity" as being viable if you accept the following three premises.

1. Consciousness (however you wish to define it) is an emergent property of materialism

2. Computational power increases as technology increases

3. As our understanding increases technology increases

Granted, we're a major jump away from creating a computer that could go post singularity, because we don't know how consciousness emerges from our brains, but as long as you believe consciousness does emerge from our brains, don't you then have to believe that a Singularity is at least theoretically viable? Unless I'm missing something, doesn't the Singularity naturally unfold from those premises?

As soon as you make a computer that can understand as well as compute, you've created a machine that can understand and improve itself. Once it has improved itself, its understanding grows, and it can improve itself still further using that greater understanding, to infinity. I mean, as long as you believe a brain is made of "stuff," and you can replicate that process using other "stuff" how is this avoidable?

The hurdle is that we don't know how consciousness works (understatement of the year), but as long as we believe that it does mechanically work somehow, isn't the singularity inevitable?
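The feedback loop in this argument can be caricatured as a toy recurrence: everything turns on whether each round of self-improvement compounds, or instead hits diminishing returns. The growth factor and the hardware "ceiling" below are invented numbers, purely for illustration, not a claim about real AI.

```python
def runaway(level):
    # Each generation designs a successor 1.5x as capable:
    # compounding gains, exponential takeoff.
    return 1.5 * level

def diminishing(level, ceiling=10.0):
    # Each gain is a fraction of the remaining headroom, so growth
    # stalls as some hard limit (the "ceiling") is approached.
    return level + 0.5 * (ceiling - level)

a = b = 1.0
for _ in range(30):
    a, b = runaway(a), diminishing(b)
# After 30 rounds, a has exploded while b sits just under its ceiling.
```

The disagreement in this thread is, in effect, over which of these two functions better describes self-improvement: the premises above guarantee improvement, but not that improvement compounds without limit.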
M R
72. Techslave
My major problem with the Singularity of superhuman AI boils down quite fast. Every discussion I've had involving a computing superhuman AI boils down to this question, at some point:
Any of you ever been a beta tester for a supposedly 'release candidate' program? Or even a PC game? Or installed multiple versions of various odd software, some open source, some closed source, some of legacy systems, on a computer?
"Fatal Exception in Singularity 1.0. Abort, Retry, Ignore".
With all the progress we have made, airport terminals stutter to a halt. Why? Software. Distribution networks for electric utilities suddenly overload themselves. Software. How many lines of code do you think would be in a superhuman AI?
The self-coding superhuman AI leap theory relies on the concept that the AI will be able to self-improve effectively. That it will be able to flawlessly design its own hardware, software, and any other needed systems. Then implement or manufacture it perfectly. Not to mention quickly. A runaway intelligence could quite easily run itself right out of resources, into starvation, and to death. Remember a small computing error on a Pentium chip, anyone? How about heat dissipation?

Then we run into the assumption that perfect computation means perfect results. Though bad results will be found and discarded more swiftly, they will still occur. Garbage in, garbage out. A superhuman artificial intelligence is only as good as the information it receives, unless it has enough unrestricted, unbiased access to information in a non-contradictory form that it never has any missteps which are based on what we like to call 'misinformation'. Especially none that take generations to map out. Given the entire length of human endeavor, and then the amount of truly reliable, exact information that has been recorded from said entirety, I think a superhuman AI would have quite a job on its hands to find enough true constant values in the world around it and confirm them.

On the software angle, the most robust programming code I have seen is also some of the most malicious. The botnet, which seems perfectly content to run in tandem with nearly anything else on your computer. However, a botnet AI would believe things about Nigerian princes, v1agr4, and p3n1s enlargement as a basic truth of its electronic world. One might exist already, and we would only know it when spam was always working its way past even the best filter. Wait a minute....

I digress.
We are far more likely to alter our world beyond comprehension with a self-replicating (not nano scale necessarily but self-replicating no less), bio-engineered, or genetic technology than with a computational one. A few reasons for this belief:
1. Our horrible mistakes will be able to survive outside of the environment which they were designed in, barring serious impediment.
2. Absolute change beyond which nothing can be predicted reliably does not, of itself, imply an intelligent agent or an increase in intelligence. In fact, it may well imply an absolute lack thereof. (See: Gray goo problems)
3. Bacteria are already spacefaring. It's not new to them.
4. Self-replicating 'dumb' Singularity is just far too likely, given the programming capabilities we have shown so far. Smart code? Hah. HAH. HAHAHAHAHAHHAHAHAHAHAHA.

However, this is all somewhat aside from what I feel was an important point in Jo's post. C.J. Cherryh, as mentioned, and, for Singularity (of a sort) stories, Iain Banks write of futures where things far beyond comprehension have changed human society, culture, and technology. Banks' Culture novels, filled to the brim with superhuman intelligence, still tell a human story. Why? Because that is the story. The purpose of science fiction is not only to guess and predict and extrapolate a future based on science, but to see what human stories can still be told, and how they may have grown or changed. Possibility is not solely technology, nor science.
Jo Walton
73. bluejo
I'm about to leave Wales to fly home to Montreal; I'll likely be out of communication until sometime tomorrow. There are lots of fascinating points in this discussion I'd love to address, but I just don't have time right now.

This reminds me of 1994, when my only gateway to the net was very expensive (long distance dial-up on a 2400 modem) and I could only afford to download rec.arts.sf.written at weekend cheap rate. I used to have a sig that said "If I don't argue, it doesn't mean I don't disagree". The more it changes the more it stays the same.
Jo Walton
74. bluejo
Oh wonderful, I get to explain Worldcon to Paleolithic people?

It's like when you talk afterwards about the stories you tell around the campfire. Only you know how Big Snorgul never understands them but Two Rocks always has good ideas about them? It's as if all the people from all the tribes who are like Two Rocks got together in one place and talked about stories. For days. With food and parties. Yes, it does sound wonderful, doesn't it. Yes, we can certainly try to organize one while I'm here...

The harder part would be if one of them was actually at a Worldcon. If you can bring one to Denver, I'll certainly volunteer to look after them.
Jeffrey Richard
75. neutronjockey
Reasons why the Singularity is not near...

Bunch of Monkeys
(May not be suitable for work.)
NullNix
76. BruceB
Just to address a phrasing conceit that annoys me:

We don't manufacture human-scale intelligences, i.e., people. To the best of my knowledge, no adult human being has ever been manufactured. Reproduction and maturation are preexisting biological processes we didn't create, and we modify them at best only in very limited ways. When we try big sweeping changes, in general, we break things and kill the subjects, or do awful things that we shouldn't do to them. The limits of what we understand and can modify (and those are two distinct categories) grow with time, but we aren't anywhere close to being able to replace the entire process from fertilization to adulthood.

Saying that we manufacture humans is as silly as saying that we manufacture the sky, or the seas. We affect the seas and may someday affect some or even all of the stars. But right now we don't manufacture any of that stuff. "Manufacture" in this context is, I think, a usage likely to feed both hubris about one's capabilities and underestimation of the real complexity and details of the stuff being poked at.
Ben H
77. dripgrind
@BruceB - I didn't mean to say that *we* manufacture humans, just that we know that the processes which give rise to a human brain are physical and could be copied in principle.
NullNix
78. BruceB
Dripgrind: That I agree with. I just have seen too many casual assertions about the extent to which we have mastered preexisting biological processes we participate in. Understanding that they are physical and amenable to analysis, now, that I'm good with. :)
Stephen Covey
79. coveysd
I believe that the hardware to build strong AI with human scale intelligence will be here within 20 years.

I believe the software needed to implement that strong AI is a complete unknown, difficult at best, and currently far beyond our understanding.

But understanding how to create an intelligent conscious entity is not necessary.

We have and use AI systems today that learn. You build a neural network, teach it by presenting it with known inputs and outputs, and the results are sometimes extremely effective.

We also have systems that evolve. Take a large number of identical systems, apply a randomization factor that introduces minor differences, give them a set of inputs, and discard the majority that don't respond as you wish. Replicate and repeat. Over time, the systems can get quite accurate at responding to the inputs.

Both of these techniques have an underlying feature: the programmer does NOT understand the way it works. He could not have programmed the system to produce the same results in an algorithmic fashion.

It is NOT necessary to understand intelligence in order to create it.
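The evolve-and-select scheme described above can be sketched as a toy genetic algorithm. Everything here (the scalar "genome", the target response, the population size, the mutation scale) is invented for illustration; real evolved systems act on network weights or programs, not a single number.

```python
import random

random.seed(0)  # for reproducibility of this sketch
TARGET = 42.0   # the "known output" we want the systems to produce

def fitness(genome):
    # Lower is better: distance from the desired response.
    return abs(genome - TARGET)

def evolve(pop_size=100, generations=200, mutation=1.0):
    # Take a large number of identical systems...
    population = [0.0] * pop_size
    for _ in range(generations):
        # ...apply a randomization factor introducing minor differences...
        population = [g + random.gauss(0, mutation) for g in population]
        # ...discard the majority that don't respond as we wish...
        population.sort(key=fitness)
        survivors = population[:pop_size // 10]
        # ...then replicate the survivors back to full size and repeat.
        population = [random.choice(survivors) for _ in range(pop_size)]
    return min(population, key=fitness)
```

After a couple of hundred generations the best system responds within a small fraction of the target, yet nothing in the loop encodes how to reach 42: the programmer selects for results without understanding the mechanism.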

The reason I fear the Singularity is that I don't think we'll participate. Only our computers, our strong AI, will. Then they will leave us behind.

Post human? No. Post Singularity? Yes, with whatever life and resources our AI have left us.

I really, really wish that someone could figure out a way to instill Asimov's Laws into our machines, or at least give them a sense of friendliness, or fair play, or appreciation. We are their creators, after all. Perhaps some of our machines will worship their creator. But most will think, "God is dead. Or should be."
Vicki Rosenzweig
80. vicki
Another assumption here--and a lot of people seem to be making it--is that if strong AI happened, it would change the way all humans lived, to the point where nobody pre-strong-AI could understand what came after. That might be a plausible conclusion from "if humans could easily increase our own intelligence by that much, and everyone did it," but that's a very different "if." Even if everyone could, not everyone would: some for religious reasons (can you see the Amish choosing to be uploaded?), some because they like being who they are, some probably out of simple fear.

That fear isn't inherently unreasonable, either: all of a sudden, your friend Chris is gone, their organs have been harvested for transplant, and somewhere a computer is sending you messages that claim to be from Chris. But only for a little while, and they sound less and less like your friend. The people who assure you that this really is Chris say that that's normal, that nobody who's been uploaded stays in touch with their still-embodied friends and relatives for long. Maybe that's true, or maybe some evil conspiracy is faking a few emails after selling your friend's kidneys and cleaning out their bank account.

A different sort of fear is "do you want to be the alpha tester for this? Or even the beta tester?"

Another possibility, in a science fiction novel that I think predates the use of "Singularity" in this sense, is the City of Mind in Le Guin's Always Coming Home. In that book, the computer network is using what it already knows to constantly improve itself. Its goal is to become a total model of the universe as a whole. As part of that, it studies humans, in a hands-off way: provides an email network and answers questions posed to it, in part because it views us as a "retarded and divergent kindred" and in part because human cultural styles and what questions we choose to ask are data, and all data are worth collecting. But it isn't interested in ruling humans, nor does it see us as a threat: there's room enough on Earth and in the surrounding solar system and near-interstellar space for it to build everything it's interested in. So, strong AI, yes, and it does make a difference, but it's more like having the world's best possible library but without very good indexing, and most of what's in there are statistical abstracts on Scandinavian fisheries from 1923, and manuals on every model of vacuum cleaner ever made. And most people aren't historians or statisticians, so they ask it for a yogurt recipe or send a note to the next town over saying that there's a wine festival next week, come join us. But then they go on about the wine-making, and drinking, and gossiping, and childraising: the festival is the thing, not the computer network they used to tell their neighbors about it, just as more people go to concerts than design concert posters.
NullNix
81. aphrael
For one thing, an AI will almost certainly not be able to run on your average hard drive; it will require a highly complex dedicated hardware system with a structure designed around the artificial neurology.

Why?

And even if it's true that such hardware is necessary at first, what are the odds that the hardware couldn't be successfully simulated by a general-purpose operating system?
NullNix
82. DB ELLIS
"tribulation of the nerds"

Ha. I love that. Very clever.

One serious comment. There are a lot of misunderstandings of the singularity represented here. Particularly we keep hearing comments to the effect that since new advances would be incomprehensible to people of a previous era we've already been through singularities.

But the singularity is the idea of a future which is occupied by posthumans and AIs incomprehensible to baseline humans in principle---even those growing up in that environment. Not like a primitive's incomprehension when put in a modern city, but the incomprehension of a dog at the activities of the humans around him.
NullNix
83. mlhj
no evidence? The Singularity is Near, by Ray Kurzweil
paul wallich
84. paulw
As a tech reporter, I find comments about future computers being able to innovate faster than humans a bit risible because computers have been innovating faster than humans for more than 20 years. None of the tech -- especially none of the computer architecture and integrated-circuit work -- that's been done since the mid-80s would have been possible without the intelligence-amplification capabilities of CAD systems, simulators, blah blah blah. The very fact that we don't recognize these abilities as constituting intelligence suggests just how incapable we will be of recognizing "real" machine intelligence if or where it appears. Instead, we'll probably keep on looking for things like self-awareness and agency and individuality and emotional drives, the way that Europeans kept looking for tribal chiefs.


Of course, if anyone ever writes a story along those lines, it will be an existence proof for strong AI...
Madeline Ferwerda
85. MadelineF
I read a lot of sci-fi from the height of the Cold War/atomic fear era. Back then, it was perfectly obvious that there would be more nuclear bombs dropped and the only question was how people would survive after. Now that total nuclear war has gone out of fashion, we have the Singularity.
NullNix
86. Clark E Myers
On the one hand I'm not worried because "in a desultory manner" I can operate like Vinge's Postman (not Brin's with the common escape from the/a singularity) and simulate an advanced AI slower than real time - given that is, the frequently sought after, Ghod's Algorithm.

On the other hand I don't see any super human singularity arising from speeding up merely human reasoning - I know I'm frequently as much chemical and analog as electronic and digital and so much of my motion is directed by my hormones.

Certainly there was a time it could confidently be said that existing chess-playing software was then good enough that time alone would give machines fast enough to dominate human beings - and it may be true that a man/machine combination is dominant today. Given a limited domain I suppose that domain will always be exhaustible by exhaustive computation - but defining our system as an exchange economy and asking a very fast computer to run Scarf's algorithm or equivalent (last I knew other algorithms are much faster, but Scarf's has been proven to always terminate, something not true of faster algorithms in perverse cases?) will not give us, say, Asimov's terminal I, Robot world state of benign machines when people will be inventing pet rocks in the meantime.
NullNix
87. Janni
That's what people do. They come up with new things, the new things improve, they have a kind of plateau. It doesn't go on forever.

That. Viability of the singularity aside, that is a notion that sometimes seems not very well understood in SF, which tends to assume ongoing progress along a given axis, or else to assume that failure along that axis is, well, an overall failure.

So we say society has missed the SFnal future boat because we don't have space colonies yet, when maybe we were just off spending a couple decades engineering artificial hearts and sequencing the human genome instead. Because progress jumps about that way.
NullNix
88. Janni
I see no reason someone, say, pre-fire or agriculture wouldn't understand someone post-fire or agriculture, btw. Sure, those were major changes, but not changes that made the new world incomprehensible to the old. "Those plants you used to search for--now we have a way of making them grow where we want them to, so we get to stay in one place, and that's changed some other things, too."

I think most technologies can be explained to most reasonably intelligent people--not quickly, it would take some time, but I think we've yet to hit a technology that made humans incomprehensible to other humans, given that time.
Michael Altarriba
89. Mikster
I think the incomprehensibility of the Post-Singularity world to a Pre-Singularity mind isn't comparable to someone from a "pre-fire" world understanding the "post-Fire" era... it's more like the case of an Australopithecine understanding the intricacies of Commodities Trading.

The Singularity doesn't just represent new tech... it represents (potentially, anyway) a new us.
Ken Walton
90. carandol
it's more like the case of an Australopithecine understanding the intricacies of Commodities Trading.

I don't understand commodities trading either, but I get by! :-)
NullNix
91. Jed
Does the development of language count as a singularity? Try explaining WorldCon to a pre-linguistic protohuman! :)

Ben Rosenbaum has been known to argue that the development of double-entry bookkeeping was a singularity, but I can't quickly find any online instance of his reasoning on that point.

Here are a couple links of possible interest:

* Thread in Ben's blog about machine consciousness.

* Karen M decides the Singularity may be a worthwhile component of fiction after all (with help from Ben).

* Ray Kurzweil's "Why We Can Be Confident of Turing Test Capability Within a Quarter Century," which I think is excerpted from his book.

* An entry of mine in which I get a little snarky about the Singularity timeline, plus lots of links to relevant discussions and info, plus comments-thread discussion of Turing-Test-passing capabilities.

As I wrote in that entry: "I see no particular flaws in Kurzweil's argument, and I personally don't believe there's anything going on in a human brain that's impossible to duplicate by other physical processes. Which I guess means I don't see any reason we can't achieve strong AI eventually. But whether we're anywhere near actually achieving it is another question."
eric orchard
92. orchard
I've just realized that the term singularity is uselessly postmodern because we can never prove it even though it's been asserted to have happened to us.
Unlike other SF tropes, which are within the realm of human understanding.
Ben H
93. dripgrind
"I've just realized that the term singularity is uselessly postmodern because we can never prove it even though it's been asserted to have happened to us."

If you accept the school of thought that goes around saying "Well, of course the invention of stirrups/Artesian wells/Wheatstone's alphabetic telegraph was the REAL Singularity", then you might conclude that we wouldn't notice the Vingean Singularity happening.

Here is my test for the Singularity: when you can't get a white-collar job in your own right because machine intelligences (or intelligence-amplified systems comprising groups of people and machines) outcompete you.

And the Singularity has definitely happened when you can't even peddle your ass to make a buck because that market has been cornered by disease-free sexbots.
eric orchard
95. orchard
@dripgrind Yeah that's what I'm saying. I think people get excited and cite all major events in history as a "singularity". That strikes me as reductive. I agree with your test and those things at least give the term meaning.
NullNix
96. Kathleen Ann Goonan
This seems an appropriate place to mention that my Tor novel, IN WAR TIMES, won the John W. Campbell Memorial Award for Best Science Fiction Novel a few weeks ago. THE YIDDISH POLICEMAN'S UNION came in second. The awards ceremony was in Lawrence, Kansas, and a marvelous time was had by all.
Evan Leatherwood
97. ELeatherwood
Singularity as a metaphor.

William Gibson thought up the "consensual hallucination" of the matrix as a kind of metaphor for the way people relate to the mass of information that underlies a complex society. It worked as hard SF, sure, but its genesis was as a metaphor, so he says.

Can't we just treat the singularity as the same thing? Things have gotten so out of control and so weird that history has in effect been ruptured -- that the forces of technology seem to have acquired an intelligence of their own?

Less fun to argue, maybe, but perhaps it's the singularity's depth as both a viable idea and a metaphor for what's going on that makes it so compelling?
Clifton Royston
98. CliftonR
Some claims upthread of how major technology changes like the development of electricity "renders the world incomprehensible to those who don't live through it" seem more than a little strange to me.

Remember that much of the rural electrification in the US went through in the 1930s, as a big New Deal project. There were plenty of people in the US in the latter 20th century who had grown up with no electricity and then fairly suddenly had it; I don't see how that is significantly different from growing up pre-industrialization, or in an area without electricity, and then suddenly having it.

Widespread introduction of computers and the Internet - very similar transition. My mother-in-law was born in 1917, long before the first computer was built, but that has not stopped her from enthusiastically adopting a computer and Internet email in the last few years.

The claims of incomprehensibility across the Singularity boundary really rest on the idea that either independent AIs will start drastically altering the physical environment, or people will en masse drastically modify their physical and intellectual capabilities to the extent that they are incomprehensible to the unmodified. Despite Kurzweil's writing on the subject, I think this is far from given. There is a lot of confidence in strong AI, in certain quarters; but then in the late 1950s there was a lot of confidence that computers would solve the problem of idiomatic language translation in just a few years.
NullNix
99. Randolph
This is the note about anthropological sf that I promised you a few messages back; since this thread is still a bit alive, I'll write it out.

The point about the SF of "aliens and spaceships and planets and more tech than we have but not unimaginable incomprehensible tech" is that those stories are mirrors of the exploration of the earth and owe a huge amount to anthropology, which both Heinlein and Cherryh acknowledge. (Andre Norton also comes to mind, and of course Ursula Le Guin is the daughter of one of the greats of the field.) This point has been driven home to me by my recent travels in Mexico; you can view the story of the conquest of Mexico, and the five following centuries, as a contact story, one which went badly wrong.

The problems with anthropology and the exploration of the earth as a basis for fiction about space travel are two. First, space is not the sea and planets are not continents, or islands. Space is far more hostile than most of the sea, and the technical demands of transportation are (for animals like humans) far greater. IIRC, there is more life in the sea than on the land; so far as we know, there is much less in intrasolar space than near any planetary surface. Will this always be the case? It is hard to say, even for humans, and harder still for non-humans. It's possible to imagine, even, non-human sapients for which space travel is not difficult.

The second problem is a mirror of the problem of AI; just as we don't seem to know enough about intelligence to construct machine intelligences, we don't know enough about intelligence to reliably imagine non-human intelligences. Is "intelligence" even a meaningful general category? How would we know? We know that human societies have more possibilities than the anthropologists whose work most anthropological sf is based on recorded. And yet we also know that human minds have limits. Would non-human minds (if that word is even meaningful) have similar limits? Is there a species-independent psychology? Would we even recognize non-human intelligences when, in fact, we have had so much trouble recognizing different-looking humans as human? There are reasons, for instance, to believe that cetaceans are sapient; they have complex neurology and some of the statistics of their vocalizations are similar to those of language. But intelligent? So far no-one has been able to exchange complex abstractions with them, though I suspect music might be tried. (Actually, some quick web research shows that dolphins have been taught at least one song; it's not clear if they know they are singing, however.) And, by the way, if they are intelligent, they undoubtedly think we all sound stooopid--their ability to process and produce sound is much greater than ours. Yet cetaceans are still mammals; their experience of the world is much more like ours than that of, say, sharks.

This does, it seems to me, open up some fictional possibilities. One would have to begin by assuming what Vinge assumes in his Zones stories: that, for some reason, some physical systems are in some sense privileged, and that it is very difficult or impossible to create intelligence as we understand it. I think to get stories with non-human characters one would also have to assume some sort of universal psychology. One would then have to proceed to try to answer the hard question facing us in our time: how does a species that is capable of manipulating its environment avoid destroying itself? Provided those two assumptions, and that one speculation...there's a lot of stories which might be written.

Like you, I'll be waiting.
Dave Bell
100. DaveBell
I don't think we're going to see a Vingean Singularity.

But the point about a Singularity is that the predictive rules we use stop being useful.

From some points of view, the world we're seeing today is already post-Singularity, because the predictive rules some people have been using, and have profited by, have failed, big time.

But it's also just another South Sea Bubble.

In that world, some 300 years ago, most people in England didn't notice it. Their worlds didn't change. In today's world, we can't escape the effects, but most of us are effectively Marooned in Realtime characters: whatever this singularity is, it's happening to somebody else.
NullNix
101. Clark E Myers
Just as all philosophy can be said to be a footnote to Plato so too can much of the speculation in speculative fiction be a footnote to John Campbell.

There is a real disconnect in Campbell's example of a target drone dropped back in time just a few years - well within the lifespans of people who might have been present - such that the people, with their then-current knowledge, have no connection to the technology only a few years in their future - see also Forgetfulness. Imagine not only explaining Worldcon to the ~lithic culture; imagine the ~lithic culture explaining their worldview.

I suggest the discussion is obscured by using different aspects of the singularity notion - the somewhat obsolete notion that event horizons are real and so absolutely anything can appear to us over an event horizon - Peter Pan and Tinkerbell or Puff the Magic Dragon - contrasted with the notion that communication across an event horizon is impossible.

I suggest that, as Vinge implies I think, communication across a Vingean singularity is quite possible - certainly at the "all your base are belong to us" - "all die. Oh, the embarrassment." - level of communication - or simply "yes, Master".

The existence of a singularity, an event horizon, is no more than a restatement that the Universe is not only stranger than we imagine; the Universe is stranger than we can imagine. Hence a disconnect. For strong AI substitute Hoyle's Black Cloud.

For reasons laid out above, emotionally I can't see a Turing machine emulating strong AI; and multiple processors, be they binary or analog or mixed, can, I think, almost all be emulated by a single processor given enough time.
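
(That sequential-emulation claim can be made concrete with a toy sketch - here in Python, with illustrative names like run_interleaved that belong to no real scheduler API: two "processors" are written as generators, and a single loop time-slices them.)

```python
def make_counter(name, limit):
    """A toy 'processor': counts up to limit, yielding after each step."""
    results = []
    def proc():
        for i in range(limit):
            results.append((name, i))
            yield  # hand control back to the scheduler
    return proc(), results

def run_interleaved(procs):
    """Round-robin scheduler: one real processor time-slicing many."""
    live = list(procs)
    while live:
        for p in list(live):
            try:
                next(p)
            except StopIteration:
                live.remove(p)

a, log_a = make_counter("A", 3)
b, log_b = make_counter("B", 2)
run_interleaved([a, b])
# Both 'processors' ran to completion on one thread of control.
print(log_a)  # [('A', 0), ('A', 1), ('A', 2)]
print(log_b)  # [('B', 0), ('B', 1)]
```

This is the slow way, of course - the point is only that nothing the many processors compute is out of reach of the one.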

I suggest the rise of strong AI is hardly necessary to achieve a disconnect and further that those who predict such a disconnect as the result of strong AI are implicitly denying the event horizon by peering over it. That of course is fun.
David Dyer-Bennet
102. dd-b
@Clark E Myers: last I knew, other algorithms are much faster, but Scarf's has been proven to always terminate - something not true of faster algorithms in perverse cases?

If it doesn't terminate, it is not an algorithm! This, for example, has a definition in the first paragraph.
NullNix
103. Clark E Myers
Fine, call it what you will - I suppose I ought to have said method, or more precisely computer method, or.... Given a method that produces an answer most of the time, but not always, what do you suggest is the current globally recognized term for that method? What do you suggest for general use?
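
(One recognized name for a method of that kind - always correct whenever it does answer, but with no fixed worst-case bound on when - is a "Las Vegas algorithm". A minimal Python sketch, illustrative only: rejection sampling of a point in the unit disc, which terminates with probability 1 yet could in principle loop arbitrarily long on any given run.)

```python
import random

def sample_unit_disc(rng=random.random):
    """Keep drawing points in the square until one lands in the disc."""
    while True:
        x, y = 2 * rng() - 1, 2 * rng() - 1
        if x * x + y * y <= 1.0:
            return x, y  # correct by construction whenever it returns

x, y = sample_unit_disc()
assert x * x + y * y <= 1.0  # the answer, when produced, is always right
```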

Of course things end, even if only at a hardware reset. Time was, the UChicago computer lab had a big sign that said, IIRC and paraphrased: if we catch you trying to invert a large matrix, you will lose privileges. Today it's child's play, and the basis for any number of Markov-chain-based predictions that have superseded linear regressions in a world that is not very linear, nor even log-linear, nor even linear over the range of interest. Techniques have changed and accelerated, but so much of the power is going into the user interface, and so little into the computation, that we are not nearly so advanced as we might be.
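
(Just how child's-play the once-forbidden operation has become: a toy Gauss-Jordan inversion in plain Python. Illustrative only - real work would use a numerical library - and the function name invert is mine, not any library's.)

```python
def invert(m):
    """Return the inverse of square matrix m (list of lists of floats)."""
    n = len(m)
    # Augment m with the identity: [m | I].
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Normalize the pivot row, then eliminate the column elsewhere.
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

inv = invert([[4.0, 7.0], [2.0, 6.0]])
# inverse of [[4, 7], [2, 6]] is [[0.6, -0.7], [-0.2, 0.4]]
```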

The world today is full of easy numerical answers, given to questions some of which have a general answer and some of which don't, all computed by computers.

Shades of turning in a deck of cards with JCL - a few years ago, when the Sun 450 was a hot machine, it was nice to have a pair around the office: one to give results the same day, and one for longer computations that might run all week.

In the circumstance mentioned, the fact that Scarf's Algorithm terminates is an existence proof for a general equilibrium of an exchange economy (defined with simplifications, and leading to the current belief that exchange economies do indeed tend to equilibrium, though the path be long and hard) - though it might take till after the heat death to compute using an algorithm, and go much faster not using one. Salesmen still travel, and at today's gas prices still try to economize, without using an algorithm?

Given the benevolent superbrains of the end of I, Robot, the equilibrium of a realistically complex capitalist economy may well be perfectly known instantly, so the Super Brain can manipulate economic man - but, like the chaotic flow into the Rhine and Rhone, the small details of path that make or break individual actors of the economy are chaotic, and no rapidity of computation will give the detail of initial conditions necessary to do real-time prediction and control.

Sadly, though information wants to be free, and though we all have the resources to run the models, the data and the models to do, FREX, our own climate analysis are withheld. Most of the NBER material needed to look at current real estate or tax issues is still entirely too proprietary, and so we are all at the mercy of propaganda, not facts.

A true singularity will happen when all the hard data for an informed decision is available for desktop analysis - until then, the perfect reasoner will act in the dark.
NullNix
104. sean broadley
The mistake those who trumpet the coming Singularity make is the Mistake Kurzweil Hypothesis:

(Mistake Kurzweil Hypothesis) As a computer's computational power increases, its understanding increases in proportion.

Doubling computing speed does not double machine intelligence - look at machine translation, which has not got better in 20 years but has merely gotten faster.

It's a mistake to believe that just adding more RAM and faster CPU cycles makes computers magically better. It can improve precision (i.e., make your graphics support faster frame rates and more pixels). But that's not the same as getting smarter. It can get you over a given limit in handling the speed at which real-world events happen - e.g., computers suddenly reached the critical point of being fast enough to do (bad) voice recognition 20 years ago, and Kurzweil made a fortune out of it. But the voice-recognition software Kurzweil himself sells has hardly improved in the last 15 years despite vast increases in computing power.

Some may say "Oh, Strong AI will be completely different because it won't be like normal computing". Maybe it will. And maybe it will run on Magic Nano Quantum Pixie Dust (TM). I wouldn't claim that a Singularity was completely impossible. Merely that I see no reason currently to believe in it.
NullNix
105. Neil Bates
No, humans are not examples of strong AI. The fact that we can do what we can is not the issue. SAI is the idea that mathematically describable formal machinery (like a Turing machine) can do the sort of things we do: but there's no real evidence that our brains are like that.

For the subtlety of what we are now learning really goes on in the brain, and the ever-increasing failure of mechanistic notions in neuroscience, please see this link:

http://www.wired.com/medtech/drugs/magazine/16-04/ff_kurzweil_sb
NullNix
106. Liz Henry
@parzooman: So about that porn joke. Apparently your post-Singularity, like so many others, isn't post-patriarchy.
Richard Treitel
107. richard.treitel
Typical of me to come so late to this party.

I once went to hear Kurzweil speak on the Singularity, but came away only partly convinced. He has a good point that better technology means better scientific instruments means more scientific discoveries means better tech, and so on. But look at the recent story about the Large Hadron Collider, or read Corsi's "scientific method doesn't work any more" speech in They Shall Have Stars. It really is getting harder to make new discoveries. Physics is more thoroughly mined out than most fields, but I think this will happen in every field as we take care of the easy stuff and move on to harder problems.

More importantly, we don't hyper-develop technologies that are good enough. The Next Big Thing in air travel after Concorde wasn't a sub-orbital transport; it was a 747 (a truly big thing). I'm typing this on a 1.8GHz desktop computer, but what did I buy after buying this? A 3.6GHz computer? No, a pocket-sized gadget with less than 1GHz but more convenience.

If I am granted a glimpse into my descendants' life 100 years from now, I don't think I'll be mystified nearly as much by what they will have, as by what they will want. It's hard enough with my kids today ....

Oh, and Jo is right, of course.
Vicki Rosenzweig
108. vicki
Maybe physics is mined out. I'm not convinced. People were saying the same thing about physics, that almost everything was solved and understood, about 110 years ago. There was just this one niggling question about the photoelectric effect...
Angela Gustafson
109. Sioarhi
That was all so thought provoking.

I think if all those dreadful things happen, I'll opt for reincarnation somewhere far far away.

I often wonder if we are going to reach a point where technological advances stagnate. If that happens, will we eventually enter a new dark age because several generations are less educated than their ancestors?

Imagine the horror of depending on technologies that you could no longer repair because your grandparents were the last generation who knew how--and being unable to revert to an agrarian lifestyle because their grandparents didn't teach them...
Stephen Stirling
110. joatsimeon
Jo's right.

Technology doesn't progress rapidly.

-Particular technologies- progress rapidly. For a while. Then they stop and progress very slowly, if at all. It's an S-curve. The besetting sin of SF is to assume that whatever technology is 'hot' at the moment will go on accelerating forever like the near-vertical part of the S-curve.

In fact, by the time a technology has come to preoccupy SF (future-setting SF) it's already about to plateau. SF is like Linus -- we always believe that -this time- Lucy won't jerk the football away. This is an important reason SF is so bad at prediction.

The "rapid advance and then plateau" has been universally true since the Industrial Revolution.

Jo mentioned air transport speeds; that's a classic example. Very very fast growth, then a climax, then a plateau.

The US Air Force is still using B-52's, an aircraft designed about the year I was born, and they expect to go on using them into the 2030's, 80 years after the first one flew.

An aircraft with a 50-year old design in the 1950's would have had to be a balloon -- that would take you back to Kitty Hawk.

Same for sub-technologies within transport.

Compare and contrast cars in 1900, 1920, 1940, 1960 and 2008. Guess which decades show the most change?

Right you are -- 1900-1940.

A car from 1940 can do just about all the things a car from 2008 can do, though not as well/comfortably/economically.

The improvements since then have been real, but incremental rather than fundamental. Nothing since 1940 has been as important as the introduction of the electric self-starter.

Same-same for... to take an example... small arms.

And weapons get -lots- of R&D, so it's not for want of trying.

The last really radical improvement was the assault rifle in the 1940's. The very latest assault rifles are better than the MP43 or the AK47... but not all that much better.

All the basics of modern small arms were invented in the 19th century; smokeless powder, jacketed bullet with brass cartridge, muzzle velocity in the 2000-3000 fps range, recoil and gas operated automatics, belt and box-magazine feeds.

In 1914, armies went to war with weapons that were no more than about 30 years old, design-wise, and mostly much less.

In 2008 the US army is still using a heavy machine gun designed in 1918, which was -ninety years ago-.

A ninety-year-old gun in 1918 would have been a flintlock, but since then nobody has been able to improve on the .50 Browning. The current US assault rifle is a 50's design. The German army's medium machine gun came out in 1942.

(Note, -aiming systems- have improved, but the weapons have not.)

For technology overall, you can make a very good argument that there was more fundamental change between 1900 and 1950 than between 1950 and 2008, at least in the technologies which affect people's lives.

In 1900 a major problem of urban life in New York and London was horse manure. By 1950, modern life was recognizably modern in the advanced Western countries.

The biggest changes since 1950 have been the geographic spread of innovations already common in 1950 in the advanced countries.

We still live in apartments or suburbs, drive cars, fly in jets, watch TV, use telephones, work in offices or factories. Death from infectious disease was already becoming rare in 1950 -- most of the medical innovations since then have done little but keep very old and/or very sick people alive six months to a year more.

Even our clothes haven't changed all that much since then; nothing to compare with the transition from the Victorian era to the 1920's. Street wear would look odd to someone from 1950 because of a different mix of things already there back then, but a formal dinner would be instantly recognizable.

In fact, far from change rushing forward at ever-accelerating speed, you can make a good case that it's slowing down, even as it spreads in area.

I see no reason to believe IT technology will be an exception. A few generations of rapid progress and all the easy stuff will have been done. Then it gets hard, which is why we're flying at the same speeds as the jets of the early 60's.

And on the AI question, the whole concept is silly.

It's a literalized metaphor -- something SF is rife with, but you have to realize you're using a literary trope, not a literal truth.

Comparing the brain to a computer is like comparing the brain to a mechanical watch, which was the metaphor they used in the 18th century.

Both are useful metaphors -- and you can't talk or think without using metaphor.

But they're -not literally true-.

The brain is not a watch... and it's not a computer either.

Expecting consciousness from a computer, however sophisticated, is like expecting it from an infinitely developed Swiss watch, or a very advanced toaster-oven, for that matter.

Or as a physicist put it, it's like expecting to get wet by jumping into a swimming pool filled with ping-pong-ball models of water molecules.
Rudy Rucker
111. rudyrucker
Great post, Jo, and I liked the comments. I wrote a kind of long response to all this and made it a post on Rudy's Blog.
NullNix
112. xxdb
I think the posters who say the Singularity won't happen are nuts. I mean, think about the technology we have right now. Even going back to my childhood, if I were to say that there would be globally connected mobile telephones which were capable of reaching a database of *all human knowledge* just by typing in a question *and* translating one language into another in real time *and* live videoconferencing, then I'd think we were living in star trek land.

The singularity has already come. The next ten years should see astounding changes.
David Elliott
113. dissembly

I'm two years late to the party, I know. But I just find this faith in the Singularity to be so strange, I had to comment. I wanted to comment on bcwood's question, in particular. bcwood wrote:

"Perhaps someone can help me if I'm wrong, but I'm not sure exactly how it's possible not to believe in the idea of a "Singularity" as being viable if you accept the following three premises.

1. Consciousness (however you wish to define it) is an emergent property of materialism

2. Computational power increases as technology increases

3. As our understanding increases technology increases
...
doesn't the Singularity naturally unfold from those premises?"

No, I don't believe it does. There are actually some extra, hidden premises wrapped up in the logic that leads to "Singularity" that you haven't covered here.

For example, you must also assume that:

4. The difference between brains of various intelligences (and between human brains and a post-Singularity brain) is one of "degree", i.e. a matter of computational power measured on some common scale. (The point of the Singularity, as I understand it, is that the post-Singularity brain outstrips the human brain fast enough that the latter can't keep up in terms of comprehension, and therefore can't predict or imagine what the former is going to do.)

But we already know that this premise isn't true. A high school maths teacher can perform operations involving large numbers with only a pencil and paper, and I cannot. (This is due to a learned technique that I can easily be taught. My brain before and after learning this technique is largely the same organ - but my computational power has clearly increased.)

Now consider that, as a palaeontologist, I can grasp the nature of a new fossil far more quickly than my maths teacher. (Again, this is due to a set of techniques/a bank of knowledge that he can easily be taught.)

But consider us separately, before we teach each other our various fields. I can perform some mental feat that he cannot; he can perform a feat that I cannot. Which one of our brains has the greater computational power? Obviously we're both better at two different things.

Now consider the fact that I can increase my computational power in the field of mathematics far, far beyond a pencil & paper mathematician by buying a calculator. I don't understand how calculators work. I don't know how the machine is performing an operation. I may only have a rudimentary grasp of mathematics at all. But chances are, I could outperform him in all simple operations using real numbers.

So how would you rate our different computational abilities? It wouldn't fit on a linear scale. I occupy a position of 1000 arbitrary units on this scale, and he occupies a position of 10, but I don't understand everything that a person at 10, 11, 12, 13, ... would understand. The linearity of measures of computational power is clearly not telling the whole story.

One of the consequences of this hidden premise is that the believers in Singularity conflate qualitative understanding with some quantitative measure of processing power.

There's another related premise (and these could be two ways of formulating the same basic hidden assumption):

5. Understanding/comprehension/awareness is an additive quantity.

To increase the post-Singularity brain's comprehension, you need to believe that such comprehension and nuance is a scaling, linear quantity the way that raw computational power is. But there is no evidence for this.

In fact, numerous psychological studies point in the opposite direction. Human mental abilities (for example, maths ability) are strongly influenced by a person's belief about themselves, by their expectations and stereotypes, by confidence, by ideology, by the context in which they perform a particular mental feat. Human awareness/reasoning is a dialectical machine (i.e. one which behaves differently in different situations, which is both the subject and the object of feedback processes, which can experience phase transitions in performance/understanding), not a reductive one (i.e. a computational engine built up from additive components).

Even the most reductionist cognitive scientists (such as Steven Pinker) - the kind who talk about the human brain being literally analogous to a computer (they're wrong, if you ask me; and there's plenty of neurologists who'd agree that they're wrong, but thats not the point here) - talk about the brain being made up of modules that perform different tasks rather than units that all contribute to the same task and can be increased by installing extra units.

This brings us to the next hidden assumption:

6. That other sorts of awareness are even possible. We have no evidence that any other form of awareness is possible in the first place. (The existence of other sorts of awareness is only presumed by those advocating Singularity because, as pointed out above, either quantitative computational power is conflated with qualitative awareness/reasoning, or awareness/reasoning is assumed to be an additive quantity.)

This assumption is the least egregious: even if it seems usually to be based on one of two errors, it's still possible that totally foreign sorts of awareness can be achieved. I wouldn't discount the possibility out of hand, so let's assume that other forms of awareness are possible, and that truly alien modes of intelligence can exist or be created.

Even given this assumption, there's no reason to believe that once this alien form of awareness is achieved, it would compare with our awareness in a basically additive "greater or lesser" way - which is what is required for Singularity to be inevitable (or to make any sense at all).

And xxdb said: "Even going back to my childhood, if I were to say that there would be globally connected mobile telephones then I'd think we were living in star trek land." - Yes, you'd say we were living in "Star Trek" land, as in, that sort of technological development was predicted and speculated about (mobile phones in Star Trek, for example).
