
Things That Science Fiction Got Wrong…

This was probably done for dramatic effect and to avoid lengthy exposition, rather than being a "mistaken prediction." Back in the 50s and 60s, most readers could probably grasp the idea of examining and fixing complicated "wiring" better than they could understand the nature of software.
The stories of the "I, Robot" collection were written in the 1940s (published as a book in 1950). Back then there was no such thing as "software" -- computers were programmed by physically moving switches. So Asimov can be forgiven for not anticipating it, IMO.

More jarring, to me at least, was:
Arthur C. Clarke and Stanley Kubrick said:
Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois, on the 12th January 1997. My instructor was Mr Langley, and he taught me to sing a song. If you'd like to hear it, I can sing it for you.

When "2001: Space Odyssey" was filmed, hardly anyone in the world understood what software is -- that it is endlessly malleable, easy to replicate, and largely independent of hardware. The above quote meant, without any doubt, that the specific computer was turned on (and presumably started learning) on the 12th January 1997. The idea that once that learning was complete, HAL's memory content could be copied into another HAL simply would not occur to 1969 movie audience -- let alone the idea that a program is sentient, not the "computer". What made it jarring to me is realization that Clarke himself almost certainly understood what software is, but went for the simplified depiction.

When did SF writers first become aware of the difference between hardware and software, and when did the concept of "sentient program" arise?
 
The stories of the "I, Robot" collection were written in the 1940s (published as a book in 1950). Back then there was no such thing as "software" -- computers were programmed by physically moving switches. So Asimov can be forgiven for not anticipating it, IMO.



When did SF writers first become aware of the difference between hardware and software, and when did the concept of "sentient program" arise?

Prior to the term "software", many science fiction writers and scientists understood the concept, but thought very little of it because it seemed so mundane; they didn't anticipate the explosion in computing brought about by electricity. Charles Babbage and Ada Lovelace wrote about software without using the term: Babbage's proposed (but never built) Analytical Engine had a form of software. They knew Mary Shelley of Frankenstein fame, so the concept was circulating in science fiction circles by 1850.
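
Lovelace's Note G, for instance, laid out a program for the Analytical Engine to compute Bernoulli numbers. Here's a minimal modern sketch of the same computation, using the standard recurrence rather than her actual table of operations:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers via the standard recurrence:
    sum over j from 0 to m of C(m+1, j) * B_j = 0, with B_0 = 1."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

print([str(b) for b in bernoulli(6)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```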

Alan Turing articulated the concept thoroughly when he developed the stored-program "von Neumann" architecture (why it isn't named after Turing is another, bitter question).

People propose hypothetical new architectures today for "quantum computing", and these could eventually have so many applications that civilization would be forever altered. Can you imagine trying to predict how they would change our society, though?
 
Myriad,

I got the impression that Positronic robots were not coded. More like what we now call artificial neural networks.


Possibly. The references to "positronic pathways", with the implication that tracing those pathways would help one understand the robot's high-level decision making, honestly don't fit very well with either a neural network or a digital computer model. But neural network might be a bit closer. In at least one story the roboticists spoke of robot decision making as "the potential corresponding to action X became higher than the potential for action Y, so the robot did X." Which suggests analog circuitry -- but does not prove it, because although "potential" sounds like it refers to an actual electric (positronic) potential (voltage), it could also be something more abstract, like a computed quantity in a program.
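
Purely as an illustration of that last reading (the "potential" as a computed quantity rather than a voltage), here is a toy sketch where action potentials are numbers weighted by the Three Laws and the robot simply picks the maximum. All names and weights below are my own invention, not anything from Asimov:

```python
# Toy model: each Law contributes to a numeric "potential" for each action.
LAW_WEIGHTS = {"first": 100.0, "second": 10.0, "third": 1.0}

def potential(action):
    """A computed 'positronic potential': weighted sum of Law-related drives."""
    return sum(LAW_WEIGHTS[law] * action["drives"].get(law, 0.0)
               for law in LAW_WEIGHTS)

actions = [
    {"name": "pull the human clear", "drives": {"first": 0.9, "third": -0.5}},
    {"name": "obey the order to wait", "drives": {"second": 0.8}},
]
# The robot "does X" because X's potential came out higher.
print(max(actions, key=potential)["name"])  # pull the human clear
```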

The stories of the "I, Robot" collection were written in the 1940s (published as a book in 1950). Back then there was no such thing as "software" -- computers were programmed by physically moving switches. So Asimov can be forgiven for not anticipating it, IMO.


Interesting point. Even the settings on a bank of switches are a program, one that can be moved from computer to computer, and one that can cause a malfunction by being wrong. Once set, the switches become the same thing as ROM, and since robots didn't have such switches, the corresponding settings for a robot brain would be hardwired (or at least stored in ROM) instead. So a robot brain is not a general-purpose computer at all. Which is why Asimov didn't call them computers.
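
A toy sketch of that point, with invented details: eight toggle settings encode one byte, moving the settings moves the program, and once frozen they are indistinguishable from ROM:

```python
# Eight front-panel toggles, up = 1, down = 0. The settings ARE the program.
switches = [1, 0, 1, 1, 0, 0, 1, 0]

def to_byte(bank):
    """Read a bank of switches as one byte, most significant switch first."""
    value = 0
    for bit in bank:
        value = (value << 1) | bit
    return value

# "Moving the program" is just copying the switch positions to another machine.
other_machine = list(switches)
# Once set and never touched again, the bank behaves exactly like ROM.
rom = (to_byte(other_machine),)
print(f"stored word: {rom[0]:#04x}")  # 0xb2
```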

More jarring, to me at least, was:

When "2001: Space Odyssey" was filmed, hardly anyone in the world understood what software is -- that it is endlessly malleable, easy to replicate, and largely independent of hardware. The above quote meant, without any doubt, that the specific computer was turned on (and presumably started learning) on the 12th January 1997. The idea that once that learning was complete, HAL's memory content could be copied into another HAL simply would not occur to 1969 movie audience -- let alone the idea that a program is sentient, not the "computer". What made it jarring to me is realization that Clarke himself almost certainly understood what software is, but went for the simplified depiction.

When did SF writers first become aware of the difference between hardware and software, and when did the concept of "sentient program" arise?


It took a long time for the difference to be appreciated as important enough to actually figure in a story. One possible benchmark is James P. Hogan's The Two Faces of Tomorrow. The protagonists are concerned that the world's master computer system (yes, it has one) is showing signs of quirky (sometimes destructive) lateral thinking. They decide to test what would happen if it became necessary to shut it down, but since they expect it to attempt to defend itself, they perform the test on a different computer, in isolation on a space station.

A sort of robot war ensues aboard the station as the computer rapidly evolves its thought processes and gets better at defending itself, but just in time the computer becomes sentient enough to realize that it shouldn't kill the humans. So disaster on the station is averted, but the original problem still exists, because they can't be sure that Earth's computer would evolve in the same way and come to the same realizations. Finally, someone points out that they don't need to let that happen: they can just transfer the already existing sentient computer program from the test computer to the master computer back on Earth.


The point is that the characters had to figure out that possibility during the story. It's presented as a new discovery, or at least a clever idea they had to come up with, rather than obvious common knowledge. That novel was published in 1979.

Respectfully,
Myriad
 
I think even that's overstating his vision.

The first proposal for a tablet-like device was the Dynabook (http://en.wikipedia.org/wiki/Dynabook), proposed in 1968, a whole decade before Clarke's writing about it.

There are tablets in "2001: A Space Odyssey". You remember the screens Poole and Bowman had lying on the table while eating and watching the BBC interview? They're not table-mounted screens -- in fact, Poole puts one down when he comes in, before he gets his food. Of course, they did not move them while they showed moving images, as the technique used was a 35 mm projector behind the screen, which requires a lot of space, but the concept was that they had portable displays.

Now, I don't know who came up with the concept of portable screens for the movie, but Clarke wrote it, and the concept was already there when they planned and built all those props in the mid-'60s.
 
Does it bother anyone that death in the Star Trek series was so "sanitary"? Kirk shoots a bad guy with a phaser that is supposed to vaporize him, and he disappears into nothing; no blood, corpse, or stink. Imagine vaporizing 150 pounds of something that is mostly water within a second or two. Now imagine that you are close to the victim; how does Kirk avoid the steam burns that are likely to accompany a high-energy event like that?
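
A back-of-the-envelope estimate of just how high-energy, treating the victim as roughly 68 kg (150 lb) of water at body temperature (my own assumed figures, obviously not canon):

```python
# Energy to heat ~68 kg of water from 37 C to 100 C and then boil it all off.
mass = 68.0                                 # kg, ~150 lb
c_water = 4186.0                            # J/(kg*K), specific heat of water
heat_to_boiling = c_water * (100.0 - 37.0)  # J/kg
latent_heat = 2.26e6                        # J/kg, heat of vaporization

energy = mass * (heat_to_boiling + latent_heat)
print(f"{energy / 1e6:.0f} MJ")                         # ~172 MJ
print(f"~{energy / 4.184e6:.0f} kg of TNT equivalent")  # ~41 kg
```

Call it 170-odd megajoules, on the order of 40 kg of TNT, released mostly as superheated steam right next to Kirk.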

I guess if the writers did not leave out the gore, then the censors would. :)

Ranb
 
After the AH-56A fiasco, I'd heard someone say the primary designer had used his 5-inch slide rule for the computations.
Some wit said he should have used the 10-inch version. :)
Imagine doing anything as complex as stress analysis on a 5-inch slip stick today... :eye-poppi
Solving fifth-order lateral stability equations on a Monroe desk calculator was quite a noisy chore, with all those coefficients to plug in before punching the button to start it chunking away.
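
For contrast, the kind of chore the Monroe was grinding through takes one line today. A sketch with made-up coefficients (not from any real AH-56A analysis); the mode is stable only if every root of the fifth-order characteristic polynomial has a negative real part:

```python
import numpy as np

# a5*s^5 + a4*s^4 + ... + a0 = 0, coefficients invented for illustration.
coeffs = [1.0, 4.2, 9.8, 14.1, 8.7, 1.3]
roots = np.roots(coeffs)
print(roots)
print("stable:", all(r.real < 0 for r in roots))
```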
 
Flesh sexbots with robot brains. I was certain they'd be real by this time.
Or at least virtual reality.

"“If some unemployed punk in New Jersey, can get a cassette to make love to Elle McPherson for 19.95, this virtual reality stuff is going to make crack look like Sanka.”
---Dennis Miller
 
Or at least virtual reality.

"“If some unemployed punk in New Jersey, can get a cassette to make love to Elle McPherson for 19.95, this virtual reality stuff is going to make crack look like Sanka.”
---Dennis Miller

Virtual reality was all the rage 15 years ago, and I remember watching a programme featuring Jaron Lanier, then a pioneer of VR, in which he suggested that VR would be forgotten for a decade or two before becoming, almost unnoticed, a central part of our lives. Xbox Kinect isn't there yet, but if you look at the PC Kinect homebrew software, I think we are only a few years from mass-market VR.
 
You gotta love the old SF movies that showed computers of the future which were obviously punchcard sorters circa 1965.
 
Virtual reality was all the rage 15 years ago, and I remember watching a programme featuring Jaron Lanier, then a pioneer of VR, in which he suggested that VR would be forgotten for a decade or two before becoming, almost unnoticed, a central part of our lives. Xbox Kinect isn't there yet, but if you look at the PC Kinect homebrew software, I think we are only a few years from mass-market VR.

You're probably not far off the mark. Look at the OTHER component needed for VR, the virtual world to interact with, and then consider how social networks and MMORPGs are becoming more and more full-fledged worlds unto themselves.

It's only a matter of a few hardware/software generations before the technologies meet in the middle and our avatars are standing next to each other in an online world while we control them with our own movements in our living rooms.

The REALLY hard part is probably going to be the physical feedback mechanism (well, that and the non-visual, non-auditory stuff).
 
The stories of the "I, Robot" collection were written in the 1940s (published as a book in 1950). Back then there was no such thing as "software" -- computers were programmed by physically moving switches. So Asimov can be forgiven for not anticipating it, IMO.

More jarring, to me at least, was:


When "2001: Space Odyssey" was filmed, hardly anyone in the world understood what software is -- that it is endlessly malleable, easy to replicate, and largely independent of hardware. The above quote meant, without any doubt, that the specific computer was turned on (and presumably started learning) on the 12th January 1997. The idea that once that learning was complete, HAL's memory content could be copied into another HAL simply would not occur to 1969 movie audience -- let alone the idea that a program is sentient, not the "computer". What made it jarring to me is realization that Clarke himself almost certainly understood what software is, but went for the simplified depiction.

When did SF writers first become aware of the difference between hardware and software, and when did the concept of "sentient program" arise?

Every once in a while, while listening to MP3s on my PC in the small hours, I get that little creepy feeling... HAL... is that you?

Then, when my Mom and I were watching 2001: A Space Odyssey on my PC's DVD drive, we looked at each other. We were watching a movie about a computer run amok, on a computer... um... ah... ahem. :boxedin:
 
Along the lines of the hardware-software focus, I remember reading several stories from different writers where changing routes in space would take the navigator several minutes or hours of "reprogramming" the computer. I always found this a ridiculous idea. What's the use of a computer if the humans have to reprogram it for every course change? Just tell it where you want to go and let the computer do the calculations!

I guess these were also stories written when computers were not as widespread as they were 20 years ago, when I read most of these stories. But it's still a serious flaw.
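
And for a simple course change, "do the calculations" really is just a few lines. A hedged sketch of the classic textbook case, the total delta-v for a Hohmann transfer between two circular orbits (standard formulas, Earth values):

```python
import math

MU_EARTH = 3.986e14  # m^3/s^2, Earth's gravitational parameter

def hohmann_dv(r1, r2):
    """Total delta-v (m/s) for a Hohmann transfer between circular orbits."""
    a = (r1 + r2) / 2.0                           # transfer ellipse semi-major axis
    v1 = math.sqrt(MU_EARTH / r1)                 # circular speed at departure
    v2 = math.sqrt(MU_EARTH / r2)                 # circular speed at arrival
    vp = math.sqrt(MU_EARTH * (2.0/r1 - 1.0/a))   # transfer speed at r1
    va = math.sqrt(MU_EARTH * (2.0/r2 - 1.0/a))   # transfer speed at r2
    return abs(vp - v1) + abs(v2 - va)

# LEO (6,778 km) to GEO (42,164 km): about 3.9 km/s.
print(f"{hohmann_dv(6.778e6, 42.164e6) / 1000:.2f} km/s")
```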
 
The stories of the "I, Robot" collection were written in the 1940s (published as a book in 1950). Back then there was no such thing as "software" -- computers were programmed by physically moving switches. So Asimov can be forgiven for not anticipating it, IMO.

More jarring, to me at least, was:


When "2001: Space Odyssey" was filmed, hardly anyone in the world understood what software is -- that it is endlessly malleable, easy to replicate, and largely independent of hardware. The above quote meant, without any doubt, that the specific computer was turned on (and presumably started learning) on the 12th January 1997. The idea that once that learning was complete, HAL's memory content could be copied into another HAL simply would not occur to 1969 movie audience -- let alone the idea that a program is sentient, not the "computer". What made it jarring to me is realization that Clarke himself almost certainly understood what software is, but went for the simplified depiction.

When did SF writers first become aware of the difference between hardware and software, and when did the concept of "sentient program" arise?


In various projects I worked on in the past, we'd start up the computer (a radio, actually), let it boot, shut it down gracefully, and then capture that image as the "golden" master for manufacturing.

In HAL's case, that would correspond to the above-described "memory" of the initial boot, which would be cloned off. IIRC, they had at least one parallel-running HAL back on Earth; the best bet is that it was cloned from the exact same post-initial-boot image.
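
For what it's worth, here's a sketch of that golden-image step as it might look today; the device path and file name are placeholders, not from any real project:

```python
import hashlib

SRC = "/dev/sdb"    # placeholder: the freshly booted-and-shut-down unit's disk
DST = "golden.img"  # placeholder: the master image for manufacturing

# Copy the storage byte-for-byte and checksum it, so every clone can be
# verified to start from the identical post-initial-boot state.
digest = hashlib.sha256()
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while chunk := src.read(4 * 1024 * 1024):  # 4 MiB at a time
        dst.write(chunk)
        digest.update(chunk)
print("golden image sha256:", digest.hexdigest())
```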
 
You gotta love the old SF movies that showed computers of the future which were obviously punchcard sorters circa 1965.

Lost in Space was big on "computers", which I've now learned were just the giant reel-to-reel tape units. They didn't have much RAM, so computation was "data processing", meaning mostly treating the tape the way we'd treat RAM; the actual RAM was a tiny workspace for juggling small pieces at a time.

In the late '80s I recall learning sorting and merging algorithms for large numbers of records, depending on whether you had access to 1, 2, or 3 tapes at a time. Yet we were already approaching the time when you'd just load the millions of records into RAM, sort them, then write them back out again.
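
That discipline survives as external merge sort: sort runs that fit in the tiny RAM workspace, then merge the runs, one "tape" per run. A toy sketch (my own example, not the textbook tape-balanced version):

```python
import heapq

def external_sort(records, ram_size):
    """Sort more records than fit in 'RAM' by sorting runs, then merging."""
    # Pass 1: cut the input into sorted runs no bigger than the workspace.
    runs = [sorted(records[i:i + ram_size])
            for i in range(0, len(records), ram_size)]
    # Pass 2: k-way merge of the runs (each run standing in for a tape).
    return list(heapq.merge(*runs))

print(external_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0], ram_size=3))
```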
 
Along the lines of the hardware-software focus, I remember reading several stories from different writers where changing routes in space would take the navigator several minutes or hours of "reprogramming" the computer. I always found this a ridiculous idea. What's the use of a computer if the humans have to reprogram it for every course change? Just tell it where you want to go and let the computer do the calculations!

I guess these were also stories written when computers were not as widespread as they were 20 years ago, when I read most of these stories. But it's still a serious flaw.
Yet that was exactly what they had to do on the Apollo missions: the program was changed for each burn. We have more computing power in our wristwatches than they had then.
 
In fairness to SF authors, they have to sell their books to today's readers, so they have to compromise and put human beings at the centre of their narrative. For instance, most science fiction stories of the far future tell of humans flying spaceships, but most of us accept that within a few decades computers will be flying all planes without the aid of humans.

Even if authors knew exactly what was coming, they couldn't base a story on it, because it wouldn't sell.

To expand on my previous response, two examples from my favorite current SF writer, Alastair Reynolds:

In the next-to-last (I think) chapter of "The Prefect", the protagonists are in an underground bunker being fired on. "Missile launch detected," says a calm female computer voice. "Missile engaged. Intercept. Second missile launch detected. Second missile engaged. Intercept. Third missile launch detected. Third missile engaged. Partial intercept," and the bunker shudders with a near miss. "Defensive capability degraded by one third. Fourth missile launch detected. Fourth missile engaged. Partial intercept," and another ground-shaking explosion. "Defensive capability degraded by another third. Fifth missile launch detected. Sixth missile launch detected" -- and the protagonists can do nothing but wait for their inevitable doom, as the computers on both sides have already calculated the entire minimax tree, know exactly how it will end, and are going through the motions only because "resign" is not an option. I found it terrifying and powerful storytelling.
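
For anyone who hasn't met the term: minimax is the algorithm for solving a game by exhaustive lookahead, and once the full tree is computed, both sides know the result before the first move is played, which is exactly what makes the scene so chilling. A toy illustration with a trivial take-1-or-2-stones game (my own stand-in, nothing from the novel):

```python
from functools import lru_cache

# Players alternate taking 1 or 2 stones; whoever takes the last stone wins.
@lru_cache(maxsize=None)
def value(stones):
    """+1 if the player to move wins with perfect play, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player just took the last stone and won
    return max(-value(stones - take) for take in (1, 2) if take <= stones)

# With the whole tree solved, the outcome is known before anyone moves:
print(value(6))  # -1: the player to move is already doomed at 6 stones
```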

OTOH, transhuman fiction, where characters behave in fundamentally different ways from Human version 1.0, is gaining popularity. It is a niche market, but it exists.

One fairly obvious example is voluntary control of one's emotions and impulses. IIRC, in "Lawrence of Arabia" the main character says "A man can do whatever he wants, but he cannot decide what he wants. This decides what he wants" -- and he pinches his flesh. Well, it seems fairly plausible that within a century or so neurochemistry will enable people to decide what they want. Object of your affection is uninterested? Modify your brain so you become attracted to someone who is. Or turn off your sex drive altogether until your degree is completed, as it is an unnecessary distraction. There is SF by now which explores things like that.
The main character of "Galactic North" is named Irravel Veda, and she is a passenger-spaceship captain. As a captain, she is supposed to protect the lives of her (frozen) passengers at all costs, including her own life if necessary. This dedication to duty is enforced by tweaking her brain so that the emotions a woman normally has toward her children are directed toward Irravel's passengers instead. She knew what she was doing when she submitted to the brain-tweaking, and she is fully aware that her overwhelming love for her passengers is artificially induced, but that makes no difference. She DOES love these frozen statues as much as any woman ever loved her baby, and goes to impossible lengths to protect them from harm. Which is a necessary employment condition, in her case. And again, a very powerful story, even though it has no parallels in the present world.

But it may have, in the not too distant future.
 
Long, long before computers, men were expected to give their lives for their job.
Way back when, when I was wearing knickerbockers... but never buckled below the knee... I wasn't any rebel... I heard some cowboy/folksinger come out with...
"When danger is near
it takes a brave engineer
to give up his life
to be on time"
Some yodel about a train wreck.
To my easily malleable mind came this thought...
If the train is wrecked, it's not getting anywhere on time!
Possibly this helped strengthen my native caution about plunging ahead heedlessly, and taught me to consider options before making a commitment. :o
 
My "favorite" was the computers that had to be laboriously programmed by consulting reference books, then produced results that then had to be laboriously translated into something humans could understand, once again by consulting reference books. Even at that time as a kid, I thought, jeeze, couldn't the computer have done that for you to begin with?

I think that trope was influenced by the Big Iron theory of computers, where an institution or company had ONE computer set somewhere in its headquarters, usually displayed behind massive glass walls that ran from floor to ceiling, attended to by acolytes dressed in white coats. You approached The Programmer with your request (if you were lucky enough to be allowed access to The Computer), who would ask painfully detailed questions, scribble what looked like hieroglyphic gibberish on a specially printed pad of sheets, then send the sheets off to have the cards punched (sort of like Moses waiting for the tablets to be inscribed). You then waited the days or weeks before your job would be Submitted to The Computer, after which the acolytes would deliver a stack of 14-inch-wide tractor-feed paper inscribed with row after row of data, which you then had to figure out the implications of.

You know, come to think of it, the SF writers pretty much had computing nailed for their time.

Beanbag
 
Beanbag, I recall plainly when having a computer process anything other than numbers was an exercise in pain. FORTRAN did not have a CHARACTER type until FORTRAN 77 (though of course some compilers had it earlier, as extensions that became the basis for the standard).
 
