
Robot consciousness

Anything missing here?

You missed the one I listed earlier in this thread: The interactions of humans and computers on a global internet form a level of consciousness that is unrecognized by the individual components.
 
Could someone recommend to me a book with a general overview of the current state of AI? Basically I have in mind something like Gödel, Escher, Bach, but updated to the current state of research.


As far as I could gather, there are a couple of possible scenarios by which humankind could develop AI:

1. Someone sits down and writes slick software that is capable of learning. Then we teach it like a kid and somehow it becomes self-aware.

2a. Viruses with self-modifying code and/or mutations evolve AI on the ever-growing Internet.

2b. Cellular automata, some Game of Life in a very complex virtual world with competition for resources, evolve ever more complex virtual life that ultimately becomes self-aware.

2c. With time and Moore's law, Google or the like, with all of its services, gets so complicated that the software on its zillion parallel CPUs just sparks self-awareness.

3. Imitate or simulate the human brain.

Anything missing here?
Dare to bet the winner? I would pick 1 or 3...
Is there at least some development that could rule out one of these scenarios?


Thanks!

There's a problem with the phrase "self-aware". We want to say we know what it means, but when pressed, the definition tends to be tricky. On a literal reading, some cars could be considered self-aware because they have sensors that monitor their internal state, and redundant sensors that monitor the state of the sensor systems.
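As a toy illustration of that literal reading, here is a Python sketch of a system whose sensors monitor internal state while a redundant check monitors the sensors themselves; all names, readings, and thresholds are invented for illustration:

```python
# Toy "literally self-aware" system: a primary sensor reports internal state,
# and a redundant sensor cross-checks the primary. Everything here is invented.
def read_oil_pressure_primary() -> float:
    return 42.0  # stand-in for a real sensor reading (psi)

def read_oil_pressure_backup() -> float:
    return 41.5  # stand-in for the redundant sensor

def self_check() -> list[str]:
    warnings = []
    primary, backup = read_oil_pressure_primary(), read_oil_pressure_backup()
    if primary < 10.0:                    # monitoring internal state
        warnings.append("low oil pressure")
    if abs(primary - backup) > 5.0:       # monitoring the sensors themselves
        warnings.append("sensor disagreement: primary may be faulty")
    return warnings

print(self_check() or "all systems nominal")
```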

But in the interests of playing along, I'll ignore that issue...

My personal take:

1. Assuming hardware architecture similar to today's only more powerful: I think it's unlikely, because complex, hand-written software tends to be rigid, bug-prone (especially in terms of regression errors), and limited by the preconceptions of the author. This is a top-down approach, and my gut tells me AI will somehow spark from the bottom up.

2a. How would we recognize it? We can't assume that a self-organized intelligence would necessarily be anything like ours. It may not recognize that there are intelligent agents "outside" its own system. In any case, I don't think the environment is fluid or variable enough.

2b. Totally possible, but to get an interesting sort of intelligence, your virtual world needs to be VERY rich and dynamic, so that the CAs don't fall into relatively simple stable patterns and interesting behaviors are allowed to form (see the Game of Life sketch after this list). In some sense, it's just shifting the programming burden.

2c. How's this different from 2a?

3. Can we leave out the word "human"? I think it will be easiest to coax an intelligence into existence if we make a cozy, dynamic, open-ended, mutable environment (artificial neurons in a fluid medium, maybe, which allows them to form, break, and re-form connections). Then we seed the environment with genetic algorithms (a toy sketch follows below), give it input sensors, output actuators, and goals, and let it run for a while. I think the literature would call this a cybernetic (meaning control-oriented, not human/digital hybrid) connectionist architecture.
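To make the "simple stable patterns" worry in 2b concrete, here is a minimal Game of Life step in Python, a toy sketch of the kind of cellular automaton being discussed; the grid size and glider seed are my arbitrary choices:

```python
# Minimal Conway's Game of Life update -- a toy cellular automaton.
# Grid size and the glider seed are arbitrary illustrative choices.
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """Apply one Game of Life step to a 2D 0/1 array (toroidal edges)."""
    # Count each cell's eight neighbors by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Seed with a glider -- one of the simple, persistent patterns such grids
# tend to settle into, which is exactly the problem noted in 2b.
grid = np.zeros((16, 16), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(4):
    grid = life_step(grid)
print(grid.sum(), "live cells after 4 steps")  # the glider persists: 5 cells
```

Even this tiny grid quickly settles into a handful of persistent patterns like the glider; without a far richer, more dynamic environment, nothing more interesting can emerge.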
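And as a rough sketch of the "seed the environment with genetic algorithms" step in 3, here is a toy GA in Python that evolves bit-strings toward a fixed target via selection, crossover, and mutation. The population size, mutation rate, and fitness function are illustrative assumptions, not a recipe for intelligence:

```python
# Toy genetic algorithm: selection + crossover + mutation driving bit-strings
# toward a fixed target. All parameters here are arbitrary illustrative picks.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, MUTATION_RATE, GENERATIONS = 50, 0.05, 200

def fitness(genome):
    # Crude stand-in for "goals": how many bits match the environment's target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half; refill by breeding random pairs from it.
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best genome matches {fitness(best)}/{len(TARGET)} target bits")
```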
 
Y'all would make a lot more sense if you paid attention to what consciousness is and how it's produced.

This more than anything seals the deal that I won't be continuing the debate. Your arrogance is breathtaking.
 
Anything missing here?
Dare to bet the winner? I would pick 1 or 3...
Is there at least some development that could rule out one of these scenarios?

You missed a big one -- an expert system used by a large corporation becoming self aware.
 
The way I see it, consciousness is like digging the ditch. It requires the software, a substrate to run the software, and a mechanical component to make the activity occur. You could also compare it to playing the movie off the DVD, as I've done before.

What are you trying to say? Has anybody claimed that logic could be implemented without a physical substrate? Pencil and paper is the physical substrate in the OP.

As for playing the DVD, there is input, there is output, and there is logic in between. The input usually comes from a laser bouncing off the spinning disk, the output is usually displayed on a monitor screen, and the logic is usually provided by a fast computer algorithm or custom logic chips.

But none of that electronic kit is necessary. The input could be provided by reading the pits on the disk with a microscope, the output could be sheets of paper colored by crayon and the logic provided by a room of monks using pencils and paper.

Your DVD analogy proves the OP's contention that the logic can be slowed down to any speed.
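To see the substrate-independence point in miniature, here is a half-adder (my example, not one from the thread) written as bare boolean rules in Python. A CPU, a room of monks with pencil and paper, or a microscope-and-crayon pipeline could all follow these same rules, at any speed, and produce the same outputs:

```python
# A half-adder as nothing but boolean rules: the "logic in between" input and
# output is substrate-independent and can run at any speed on any medium.
def half_adder(a: int, b: int) -> tuple[int, int]:
    total = a ^ b   # XOR gives the sum bit
    carry = a & b   # AND gives the carry bit
    return total, carry

# The full truth table -- what the monks would tabulate by hand.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```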
 
You missed the one I listed earlier in this thread: The interactions of humans and computers on a global internet form a level of consciousness that is unrecognized by the individual components.

Have you read "Rainbows End"?
 
This sounds like a variant of the China Brain. Good catch, BTW.

It is different from that, in that each person isn't aware that they are part of a larger construct. Which is why you should read Rainbows End -- a large part of the story has to do with exactly such a thing, and it won a Hugo, so why not?

The China brain is more along the lines of what I am talking about, with lots of monks and quills explicitly simulating neurons.
 
You missed the one I listed earlier in this thread: The interactions of humans and computers on a global internet form a level of consciousness that is unrecognized by the individual components.
Indeed, I missed that one. Perhaps because I don't like it :)
 
There's a problem with the phrase "self-aware". We want to say we know what it means, but when pressed, the definition tends to be tricky.
Sure. Buuut, should that stop further thinking about the subject?

1. Assuming hardware architecture similar to today's only more powerful: I think it's unlikely, because complex, hand-written software tends to be rigid, bug-prone (especially in terms of regression errors), and limited by the preconceptions of the author. This is a top-down approach, and my gut tells me AI will somehow spark from the bottom up.
Yes, all that is true.
Still, software is not limited by its human author.
There are examples where programs beat their human authors in some logical games, and beat them in novel ways!



2c. How's this different from 2a?
Not too much different. If you notice, I intentionally placed them under the same number "2".

3. Can we leave out the word "human"?
Fair enough.
 
You missed a big one -- an expert system used by a large corporation becoming self aware.
Hmmm. How is that different from:
2c. With time and Moore's law, Google or the like, with all of its services, gets so complicated that the software on its zillion parallel CPUs just sparks self-awareness.
But I agree, "expert system" is a bit more precise.
 
Sure. Buuut, should that stop further thinking about the subject?

Not at all. Just pointing out the edge of the cliff we might fall over if we're not careful.

Yes, all that is true.
Still, software is not limited by its human author.
There are examples where programs beat their human authors in some logical games, and beat them in novel ways!

I took your first scenario to mean that the software was inherently limited by its human author (GIGO), otherwise it's just a riff on your other scenarios, wherein the software is largely self-written or involves GAs or something.
 
Hmmm. How is that different from:

But I agree, "expert system" is a bit more precise.

Well, it would strongly influence the "personality" of the A.I. that emerges. Just like our own history as monkeys strongly influences all of our characteristics. Despite what some people think, it is pretty clear that every facet of human existence revolves around us being mere intelligent apes.

In the "Hyperion" novels, the first A.I. comes from viruses that kept evolving on the internet, and so all the A.I. (A.I. is actually the bad guy in that universe) happens to act like very intelligent viruses -- extremely greedy, only interested in taking over stuff, I.E. borg like.

In "Accelerando," most of the A.I. evolve from corporate instruments and expert systems, so they pretty much only care about making more money. Can you imagine a godlike intelligence that only cares about $$$? It is kind of interesting to think about it.

In some other books I am currently reading, the A.I. comes from something like google and it is much more benign -- and interested in humans.

In "Rainbows End" it comes from people interacting on the internet and acquires the playful characteristics of the younger generation of hackers whose communications gave rise to it.
 
Well, it would strongly influence the "personality" of the A.I. that emerges.
Indeed. I was just trying to build some sort of taxonomy. So basically I should add 2d: expert systems.

In some other books I am currently reading, the A.I. comes from something like google and it is much more benign -- and interested in humans.
Sounds like Clarke :)

In "Rainbows End" it comes from people interacting on the internet and acquires the playful characteristics of the younger generation of hackers whose communications gave rise to it.
Funny coincidence: I read it this weekend.
To me it is never exactly spelled what is that rabbit. Sure, all hints at some AI, but not directly. But the second idea that it come from young hackers is not founded. It could be remnant of DARPA becoming self-aware in now gigantic internet (something they mentioned at one place at least).
 
I took your first scenario to mean that the software was inherently limited by its human author (GIGO), otherwise it's just a riff on your other scenarios, wherein the software is largely self-written or involves GAs or something.
Hmmm. Not really. But I admit it is not easy to draw a clear dividing line. Let me try: in 1, the software is written with the exact intention of being capable of learning and becoming self-aware, while in the 2d "expert system" scenario the software is written with the intention of learning, but self-awareness comes as an accident.
 
Hmmm. Not really. But I admit it is not easy to draw a clear dividing line. Let me try: in 1, the software is written with the exact intention of being capable of learning and becoming self-aware, while in the 2d "expert system" scenario the software is written with the intention of learning, but self-awareness comes as an accident.

In terms of which scenario is most likely, I don't see that this changes my assessment much (though it may do so for others).

I guess to put it simply, I think that the likelihood of AI "sparking" is in direct proportion to the degree of self-organization there is.
 
And the idea that it comes from young hackers is not well founded.

My impression was that Rabbit was an emergent intelligence that rode on the microtransactions between members of the hacker network.

That is why rabbit appeared to die when they shut off the certification agency the hacker network relied upon for communications.

It also explains why rabbit acted like a kid despite being uber intelligent (at one point rabbit said something like "that must be what sex feels like," which I can see a kid saying) and it also explains why the book mentions certain decisions taking "a few seconds" for rabbit -- a few seconds being just about how long it would take for a network of humans to read a broadcast message and submit their vote on what to do.

Most telling, though, is that rabbit mentions he wasn't aware of the other intelligence that he met at the library showdown. And it reads as if this other intelligence was just like rabbit, only it came from the microtransactions between the other group (whereas rabbit came from the scooch-a-moot worshippers).

So actually rabbit wasn't an intelligence per se; it was just the pseudo-intelligence that appeared as a result of all these people being so tightly networked with each other.
 
I wonder how tolerant the mods here are of off-topic...

That is why rabbit appeared to die when they shut off the certification agency the hacker network relied upon for communications.
Well, they do everything there on secure hardware (SHE). It seemed to me that every action needs an identity, and that proof of identity is backed by certificates. Kill the certification agency, and that identity can't do anything in such a network.
The Swiss agency has a large portion of the total certificates, but still under 10%. Thus killing the Swiss agency would put only a small dent in the number of humans composing the AI, not affecting anything.

It also explains why rabbit acted like a kid despite being uber intelligent (at one point rabbit said something like "that must be what sex feels like," which I can see a kid saying)
Any AI will basically be wondering about corporeal sex. Like any kid would do :)



Most telling, though, is that rabbit mentions he wasn't aware of the other intelligence that he met at the library showdown. And it reads as if this other intelligence was just like rabbit, only it came from the microtransactions between the other group (whereas rabbit came from the scooch-a-moot worshippers).
Ahhhh. I found this part with the scooch-a-moot things in front of the library horribly boring. I barely forced myself to read it. I may have skipped important clues here...



 
The Swiss agency has a large portion of the total certificates, but still under 10%. Thus killing the Swiss agency would put only a small dent in the number of humans composing the AI, not affecting anything.

If I remember correctly, the scooch-a-moot people used one certification exclusively for their scooch-a-moot stuff, and that was rabbit's weakness.

But another clue in support of my theory is the number of people helping the A.I., which suggests to me that the A.I. was actually just the global consciousness of all those people to begin with. And of course the higher an individual was in the rankings, the more contribution they could make.

At first I wanted to think that rabbit was in fact a separate A.I. but the more I read the more it seemed like he was just the figurehead of a huge human network.
 
