Journal Entry

Friday, February 24, 2012

More on the Past's Future

So:

Some of the comments and email I got on that blog entry framed the problem in terms of print vs. online magazines. I am partly to blame for this as I did characterize it as "the kind of story that I keep running into -- particularly in the print SF magazines."

But folks, I actually don't think that distinction is all that relevant, for a few reasons.

First, the distinction between print and online magazines is increasingly blurry; magazines that have print runs are increasing their digital sales, and if reading-in-general migrates to the Kindle, magazines of various kinds are just going to migrate with it.

Second, a big reason I see the issue I'm discussing more in print than in online magazines is a self-selection effect. The stories I read online, I read more often than not because someone linked to them or mentioned them in chat or shared them on Facebook. The stories I read in print, I'm reading because I got the magazine in the mail and I'm reading through it sequentially, while sitting on the tram or in the bathtub. The print mags also have to justify print runs of over 10,000 copies, so they need to aggregate several kinds of SF readers into one readership; some online venues can afford to be more specialized. So it's no wonder if the online stories I read appeal to my specific tastes and those of the friends whose word-of-mouth I attend to, while the print stories are often aimed at some other kind of reader.

Third, it's possible that, because relatively antiquated people like myself often prefer reading fiction on paper to reading it on screens, there is some demographic difference between the readers of the print and online genre mags. But it is imprudent to congratulate oneself too swiftly for possessing a virtue as unearned and transient as that of youth.

The question here is whether SF is employing the future merely as a convenient backdrop, or is engaged in actual thinking about the future. But there is somewhat less virtue in managing to write successfully about 2012, if 2012 (and thereabouts) is all you know. We may well find, come 2040, that the authors writing in such quaint organs as "online magazines" are still stuck in 2012, writing stories in which everyone in The Future is twittering and watching reality TV and navigating through forests with their GPS cell phones... or in which the Future is still imagined as an uploaded* extropian Singularity, or a climate-change-ravaged ecodystopia... in defiance of whatever 2040 has, by then, taught us about its future.


I should also say: I'm aware that, in a certain sense, I'm being a curmudgeon. What's wrong with simply not caring about the future, or the coherence of the world implied by a backstory; what's wrong with telling whatever story you want in whatever backdrop or setting it amuses you to tell it in? Why am I being such a scientifictional prude?

In a certain sense, the answer is, as always, "nothing's wrong with it if you pull it off." I like superhero comics, in which the multiple colliding backstories of the characters make a total surrealistic hash of consistent worldbuilding (in fact that might be what I like best about superhero comics). I rather liked "The Time Traveler's Wife", though its interest in time travel is purely as a trope, a plot device unmoored from its speculative origins. Indeed, it worked because it was unmoored: the clumsiest and wince-worthiest part of that book was the brief regrettable excursion into manufacturing a handwavy speculative explanation for the time-hopping ("he's a mutant!"). So what bothers me is not actually stories which don't care about science, or don't care about the future, per se... but rather stories which lure me into thinking that they do care, but actually don't. And that line is obviously very personal.

And to some extent I wonder if what's really at issue here, isn't a kind of battle between two different nostalgias: the nostalgia for Old Timey SF as a set of images (servile humanoid robots and dangerous voiceover-AIs, FTL and "galactic empires", starships crewed by humans wearing uniforms and sitting in chairs and looking at screens, blasters, colony worlds, bump-on-the-forehead aliens who are essentially stand-ins for wise, ecological, doomed Native Americans or for terrifying Mongol hordes...) and Old Timey SF as a process (thinking about a future that "might really happen, given what we know today", and setting a tale there). Maybe the second -- at least as any kind of privileged "heart of the genre" -- is as old-fashioned as the first...


* You do realize that in 2040, the word "uploaded" is going to sound as old-fashioned as "analog" and "hi-fi" do now?

Posted by benrosen at February 24, 2012 03:24 PM
Comments

Suddenly I'm wondering about two things: (1) editors and (2) the lack of a consensus future. Thinking about, say, H. Beam Piper (all in the public domain, and the prose holds up well; give him a whirl) as an example of a sixties writer: the Zarathustra of Little Fuzzy is socially and economically Montana or Colorado in the mid-20th century, but the characters all have videophones (check) and flying cars (check). I think Piper probably put those in unprompted, but I suspect, on no real basis, that a 60s editor would have been more likely to ask "where are the videophones? where are the flying cars?"

Posted by: David Moles at February 24, 2012 04:45 PM

Side note: what's missing from Piper, later, is the ICBM. Atomic air-and-space battles happen at visual range.

Posted by: David Moles at February 24, 2012 04:47 PM

I know from others that there are segments in magazine fiction. Some stories are edgy and modern, some are okay but not really breaking new ground, and some are for the fans who fell in love with the genre 10/30/50 years ago and want more of the same. At least that seems to be what the editors think.

Posted by: Lise A at February 24, 2012 05:44 PM

When you say "uploaded" is going to sound as old-fashioned in 2040 as "hi-fi" does now, do you mean that you think scanning people's brains into computers will be an obsolete technology by then? Or that a different word will be used to describe the process?

Posted by: Ted at February 24, 2012 09:01 PM

I figured Ben meant that the everyday concepts of "uploading" and "downloading" are only meaningful when network bandwidth and latency are constrained and there's a meaningful separation between client and server. Between seamless file synchronization and ubiquitous media streaming, they must already be in less currency than they were five or ten years ago.

Posted by: David Moles at February 25, 2012 05:31 PM

What David said is what I was thinking of: "uploading" is something you do when you have an object on a client and moving it to a server is an atomic action which a) you have to specifically, consciously choose and b) will take some time that you're going to notice. It is bound to a specific configuration of systems which by its nature is probably temporary. Not that I'm necessarily pulling for a cloud-based technical future; I just suspect the word is an artifact of a moment in time. We don't "rewind" CDs, either; "rewinding" was the product of music being on a linear medium that took considerable time to shift.

And I also strongly suspect that the "scanning people's brains into computers" is a trope which is going to mark a story very firmly as coming from a specific time period, much as "huddled group of survivors facing mutants and predators after a global nuclear war" marks a story as having been written 1945-1975. Could you write such a story today? You could, but it would be something of an homage. Is that because nuclear war is impossible? No; the current multipolar detente could collapse, a newly powerful China and a declining US could end up in a Cold War style nuclear showdown by the 2050s, there's no reason your Gamma World campaign couldn't be set then. But our preoccupations have shifted, and with them our handwavery tolerances. In 1962 probably any scientist knew that "dangerous mutant hordes" was an absurdity, but the popular imagination was in a place where it happily accepted the absurdity as a trope, the way we now embrace zombies.

Is the idea of simulating human cognition with silicon, or of increasingly intimate cybernetics, absurd? No, not at all. But the trope of "uploading your brain into a computer" and everything that trope comes with -- I'll be super smart 'cause I'll be in a computer, and I will either transcend physicality or have a lovingly simulated artificial world, etc. -- is, not long from now, going to seem a hilariously dated image for whatever is actually going to happen. The relationship of some future cyborg to the literature of today will be much like the relationship of today's Iraq War vet with a sophisticated leg prosthesis to "The Six Million Dollar Man" -- one of bemused, amused, estranged, partial yes-but recognition.

Posted by: Benjamin Rosenbaum at February 26, 2012 09:58 AM

I agree -- or at least hope -- that "scanning people's brains into computers" will be obsolete as a trope by 2040. Because I am sure tired of it now (as I've noted elsewhere.)

My confusion was that you said "uploaded-Extropian singularity" future, and then noted that "uploaded" was going to sound as old-fashioned as "hi-fi." In the first usage, "uploaded" describes people whose brains have been scanned into computers; in the second, it describes files copied using DropBox or YouSendIt. I think your footnote should have said something like, "uploaded" will sound as old-fashioned as "tractor beam."

Posted by: Ted at February 26, 2012 11:31 PM

Interesting. It seems to me that the "we put our brains in computers" thing is often described by the verb "upload" as shorthand. The "we put our brains in computers" thing is vague and ill-thought-out and tropey to begin with, but in addition, saddling it with the verb "upload" -- a verb taken from copying files -- is going to become increasingly funny as that verb completes the transition from shiny and sexy to quaint and archaic.

Posted by: Benjamin Rosenbaum at February 27, 2012 08:30 AM

Now I'm wondering whether kids still say they dial their phones. I still dial a phone when I punch the numbers, but I remember rotary phones. I was going to say that upload could easily still be in use, having changed its meaning just as dial did (or sending email, I suppose). But I don't know how long dial is really going to hang on, and now I'm all focused on that.

Thanks,
-V.

Posted by: Vardibidian at February 27, 2012 04:05 PM

Phone numbers are for people whose phones don't have address books. I can't remember the last time I heard a verb attached to the use of one. "I'll call Bob. What's his number?"

Posted by: David Moles at February 27, 2012 04:39 PM

On a side note, playing The Old Republic and reading Darths and Droids has made me acutely aware of the social, economic and ethical lacunae in the Star Wars universe and seriously tempted me to write an old-school galactic empire story. But there's no way I could imagine such a story happening in our universe, either in our future or a long time ago in a galaxy far, far away.

Posted by: David Moles at February 27, 2012 04:45 PM

Whoops. iPad offers false dichotomy between typos and auto-correct-os, film at 11.

Posted by: David Moles at February 27, 2012 04:46 PM

I remember that as a kid, I always preferred *Wars to *Trek precisely because *Wars was overtly and obviously fantasy with some wacky spaceyness attached to it, while *Trek had the awkward pretense of being science fiction. I would fume at all the sciencey-sounding technobabble being employed on a show that allowed cross-breeding between humans and random alien species with green blood (yes, I know they eventually retconned this somehow, but please, people...); whereas I would have easily accepted human/alien crossbreeds had there been any in *Wars because it was so clearly set in a universe that had none of our laws of physics or biology (or economics, perhaps, but I wasn't paying attention to that then). By the same token, if *Wars had not already shark-jumped due to the introduction of Jar-Jar, it would have done so as soon as they started going on about midi-chlorians (and doesn't that sound like George Lucas had heard the word "mitochondria" once at a party a few decades before...?), breaking the fantasy-in-space contract with the viewer by putting on sfnal airs...

But of course YMMV. It all depends what game you think the author is playing; hell hath no fury like a fan who expected one set of genre pleasures and got a different one. I began to warm a little to *Trek when the TNG movies came out and I began to allow that they might be executing passable philosophical-conundrum-with-space-backdrop, as opposed to bad speculative fiction. And by the time of the reboot, interestingly, I was willing to regard the absurd scientifictional trappings as simply the weird, arbitrary constraints of an artistic game, and appreciate what they were doing inside of them...

So I should probably make very clear that I'm not trying to be consistent in my aesthetic reactions here! So much hinges on expectations...

Darths and Droids is pretty hilarious!

Posted by: Benjamin Rosenbaum at February 27, 2012 05:10 PM

The first two teenagers I asked say that they do use dial, perhaps in a structure along the lines of "I've got the number (for a business, probably); can you ____ it for me?"

I realize this is not entirely on topic.

Thanks,
-V.

Posted by: Vardibidian at February 28, 2012 11:09 PM

So I guess I should have said, "upload" will be old-fashioned in 2040 -- either obviously so, like "hi-fi", or else upon reflection, because it's so casually ingrained that it's been repurposed to metaphorically represent something totally different from its original technological context, like "dial".

Ted, I was really interested to read your link, that GoH speech you gave about folk biology and brain != computer. Generally I applaud it -- it needs to be said often. A couple of interesting things I'd like to poke at in it.

First: what about neural nets? Unlike a steam engine or a book or an IBM PC or a Cray, a neural net is actually designed to be like the brain, after studying the brain -- does that change the equation? (And neural nets, of course, function very differently from what we think of as the way "computers" work -- they wouldn't suggest most of the misleading inferences you cite.) Obviously neural nets are a very distant abstraction and only a baby step in the direction of the brain, but does it make a difference when we start building things in conscious imitation of the design of the brain, as opposed to just taking our most successful technology and imagining the brain in its terms?
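
(By way of a toy illustration of that difference -- purely my own sketch, with arbitrary made-up sizes, not anyone's actual model -- here is the entire "program" of a tiny feedforward net. All the interesting behavior lives in the weight matrices, which in practice would be learned from examples; there's no stored list of instructions you can point to and call the reasoning:)

    # Toy feedforward net, for illustration only: the sizes and random weights
    # stand in for values that training would normally supply.
    import numpy as np

    rng = np.random.default_rng(0)

    # The net's "knowledge" is nothing but these two arrays of numbers.
    W1 = rng.normal(size=(16, 4))   # input -> hidden connection weights
    W2 = rng.normal(size=(1, 16))   # hidden -> output connection weights

    def forward(x):
        # Each layer is just a weighted sum of everything below it,
        # squashed by a nonlinearity -- no explicit stored-program steps.
        hidden = np.tanh(W1 @ x)
        return np.tanh(W2 @ hidden)

    print(forward(np.array([0.2, -1.0, 0.5, 0.0])))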

Second, where is the line between "is like" and "serves as"? Horses are nothing like internal combustion engines. But despite that, internal combustion engines have replaced horses in 99% of the economic and social roles they served in 1800 -- not just in practical things like transport, travel, and war, but even in subtle things like hallmarks of masculinity -- equestrianism has gone from being part of the fundamental definition of a powerful man to being a Girl Thing; the young wastrels showing off their supposed knowledge of horseflesh in Middlemarch would now be talking about cars. The only niches horses still occupy are the ones where it matters that they are *mammals* -- that they're warm and muscly and furry (think horses-as-pets, therapy modalities and erotic significance) -- or where their associations with the past (and thus with elite traditions) matter.

So there's clearly a separation of internals from function. It's one thing to say that the brain is actually like a computer, and another to say that we might end up using computing for much of what we use brains for. Consider for instance a path to "scanning yourself into a computer" which has less to do with dissecting the actual wetware, and more with training something to imitate you, to the point where your friends, associates and survivors can't notice much difference -- where it passes, if you will, a personalized version of the Turing Test. Under the hood, it need be no more brainlike than a V8 engine is horselike -- indeed many differences might be as overt as the differences between horses and cars -- but we may find ourselves caring as surprisingly little about those differences as we do about the differences between horses and cars.

I'm awfully skeptical that this will happen any time soon, mind you; but I'm playing devil's advocate because it seems a bit too extreme to characterize all the extropian "uploading" literature as being purely a product of a folk biology misunderstanding...

Posted by: Benjamin Rosenbaum at March 1, 2012 07:54 PM

Regarding neural nets: yes, a computer can model a biological brain, because a computer can simulate anything. But the fact that a computer can model a hurricane never leads anyone to suggest that hurricanes are best understood as computers. As I say in the talk, I believe strong AI is possible, and neural nets could be a step in that direction. But when people talk about the brain as a computer, they aren't thinking about neural nets; they're thinking about conventional von Neumann-architecture systems, with RAM and a CPU. (Also, when the phrase "neural net" is used in SF, it's almost always used strictly as a buzzword.)

I agree that we might end up using computing for much of what we currently use brains for, the same way that we rely on books as a substitute for human memory. But that is not what the vast majority of extropian/singularity speculation is about. No one ever argued that the Library of Alexandria would become self-aware once it contained a sufficient number of scrolls, but extropians are doing almost exactly that when they talk about computers waking up once they're capable of performing as many operations per second as a human brain. And they do so, I contend, because of a folk-biology misunderstanding.

Posted by: Ted at March 3, 2012 08:44 PM

I agree that the "pure" variant of "once it has enough connections, it spontaneously wakes up!" is utterly a misunderstanding based on folk biology. I think, though, that the variant "something very complex that we train to emulate most of the activities of consciousness spontaneously develops consciousness" is a somewhat different case. If you put "simply having enough circuits = consciousness" on one end of the spectrum -- a giant, empty Neumann-architecture with blank RAM, you simply flip the switch and it starts talking in Voiceover Of God -- and on the other end of the spectrum you put "The Lifecycle of Software Objects" -- machine consciousness as a painstakingly developed, heartrendingly fragile one-on-one project -- it seems to me most extropian/singularitarian stories are somewhere in the middle.

I think that these guys are excited about the frisson of "emergent consciousness" as an *unintended* consequence, not a planned project, and that therefore there's some aspect of "it wakes up!" But this doesn't have to be wholly naive folk biology. It might be that various elements and capacities of consciousness are developed separately -- even painstakingly as per "Lifecycle" -- and the "surprise" is only when they at some point accidentally combine to make a whole greater than the sum of their parts.

I note that Vinge's original paper, which you and I traditionally like to cite, does not involve any "enough connections and suddenly it wakes up!" element, but rather leaves plenty of room for the "development of superhuman intelligence" to be an arduous and purposeful process. Indeed, there need not be any dramatic "takeover" by the intelligent computer; it seems rather to be implied that the same economic forces which incentivized developing the computer of intelligence N lead to asking it to develop the computer of intelligence N+1.

(I also note that while, in that paper, Vinge leaves us the range 2005-2030 for the Singularity to occur, at one point within it he makes a more casual, offhand bet on next year: "And for all my rampant technological optimism, sometimes I think I'd be more comfortable if I were regarding these transcendental events from one thousand years remove ... instead of twenty." That's 2013! I really hope someone commissions Vinge to write a followup paper in 2030...)

I think the "folk biology" argument you're making is accurate, but I think it's insufficient to just take vague swipes at "extropians" without citing which stories you mean. Because if you're just talking about random internet posters in brain-freezing forums, that's a weaker argument than if you really find this folk biology among the thought leaders: in Kurzweill's essays or in the stories of Vinge, Egan, Stross, Rucker, Reed, et al. My own impression is that you're more likely to find "enough circuits and it wakes up!" in Golden Age sf writers, and that the writers in the Singularity genre generally are talking about neural nets, further breakthroughs in strong AI, programmatic development of machine consciousness, etc. (Maybe the new Kessel & Kelly antho would be a good place for us to look...)

I will allow that there are two stories I can think of where I'm guilty of explicitly referencing Neumann architectures to describe posthuman consciousness -- "True Names" and "The House Beyond Your Sky" -- though in my defense I will say that in the first case I was trying to be funny, and in the second case to explicitly mark the Neumannesque language as being technological metaphors employed in the same way as the technologies "clay hut" and "parakeet cage"...

Posted by: Benjamin Rosenbaum at March 4, 2012 08:42 AM

Okay, some specifics. I consider Egan to be anti-extropian, but among the others:

-- In Vinge's original paper, he explicitly says "Large computer networks ... may 'wake up' as a superhumanly intelligent entity." He also says, "The precipitating event will likely be unexpected -- perhaps even to the researchers involved. ('But all our previous models were catatonic! We were just tweaking some parameters....') If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened."

-- Kurzweil makes very explicit calculations about the processing power of the human brain and the point at which computers will exceed it. In his essay "The Law of Accelerating Returns," he says "We achieve one Human Brain capability (2 * 10^16 cps) for $1,000 around the year 2023."

-- In Charlie Stross's Accelerando, complex entities are constantly becoming self-aware without anyone's help: "It's hard for an upload to stay subsentient these days, even if it's just a crustacean." "Probably another semisapient class-action lawsuit." "The Slug itself is one such self-aware scam, probably a pyramid scheme or even an entire compressed junk bond market in heavy recession."

I think, however, that Charlie has said that he doesn't actually believe in the future he depicted in Accelerando, and that his more recent novel Halting State more closely reflects his views on a realistic future.

Which brings us to the use of tropes. Brain uploading and computers that wake up are becoming stock props; writers use them not because they necessarily believe them, but because they're convenient.* As I said in my original piece, I'm complaining about SF being familiar instead of mind-expanding. So, there's a sense in which my complaint is the same as your complaint.

---
* This doesn't excuse Vinge or Kurzweil in their nonfiction.

Posted by: Ted at March 5, 2012 03:08 AM

Yes, I think they're variants of the same complaint.

Comedy is a different angle; I think I read a lot of the lines you quote in Accelerando as being as much satire of extropianism as earnest extropianism. I mean, they are facts in the world of the story, but the worldbuilding of the story has an over-the-top, edge-of-farce tone.

I'll grant that the Vinge quotes are pretty damning on "it just woke up!" I'm not as convinced that Kurzweil's "Human Brain capability" is damning on the same axis. I think Kurzweil is wildly optimistic about the difficulty of programming machine consciousness, but that's somewhat different from saying that he believes machine consciousness will occur accidentally without anyone programming it on purpose. In the scenarios he gives in his books there are generally clear step-by-step ways we would approach machine consciousness. I think he's saying "we'll have so much processing power we'll be able to program strong AI, and market forces will make it attractive for us to program it, for a variety of applications... memory and cognition prostheses for the brain-damaged, humanlike user interfaces, etc." On the timeline he's suggesting, I think this is silly (his equation of processing speed with human brain power, for one), but it seems to me to be not nearly as silly as "throw enough transistors on the chip and it wakes up!"
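
(To make "his equation of processing speed with human brain power" concrete: the argument rests on an exponential extrapolation of cps per dollar. Here's a deliberately crude back-of-the-envelope version of that arithmetic -- the 2 * 10^16 cps target is the figure Ted quoted above, but the baseline and doubling time are placeholder assumptions of mine, not Kurzweil's actual numbers:)

    # Toy extrapolation of "calculations per second per $1,000" -- the shape of
    # the accelerating-returns argument, not Kurzweil's actual model.
    import math

    target_cps = 2e16      # "one Human Brain capability", per the essay Ted quotes
    baseline_cps = 1e9     # ASSUMED cps per $1,000 in the baseline year
    baseline_year = 2000   # ASSUMED baseline year
    doubling_years = 1.0   # ASSUMED doubling time for cps per dollar

    doublings = math.log2(target_cps / baseline_cps)
    year = baseline_year + doublings * doubling_years
    print(f"{doublings:.1f} doublings needed -> crossed around {year:.0f}")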

Why do you consider Egan anti-extropian? I don't see the "it wakes up!" meme in Egan, but I see (particularly in Diaspora) a great deal of "being programmed posthumans means we can mathematically optimize away mental illness, superstition, meanness, bad manners, and opinions the author finds onerous", which I consider an equally annoying extropian meme.

Posted by: Benjamin Rosenbaum at March 5, 2012 11:04 AM

To clarify: it's not necessary for Kurzweil to think strong AI will happen spontaneously for him to be committing a folk-biology fallacy. The fact that he thinks the salient properties of the human brain can be measured in operations per second is sufficient evidence that he can only see the brain as a computer. The fact that he has estimated that the brain is equivalent to one million lines of source code is further indication that he is wearing very narrow computer-science blinders.

As for Egan, he has mocked the idea of the singularity in a lot of his fiction; see his novel Zendegi as the most recent example. I don't know if the values espoused in Diaspora qualify him as an extropian, but that's largely a separate issue (I never actually used the word "extropian" in my original talk). I think Egan acknowledges the differences between brains and computers more than most SF writers working in that territory.

Posted by: Ted at March 6, 2012 11:49 AM