
Robot consciousness

How on Earth could any result from that study possibly refute any facet of the CM?

It's not that the study refutes any facet of the model, if I understand what y'all are saying about it -- which by this point I think I do.

It's that the study shows -- together with other studies -- the simple fact that consciousness is an action of the brain which is supported by particular bio-physical activities.

I have no doubt that it is possible to build machines that also do this. The brain is an object. It's not magical.

The question is... what happens to that action (consciousness) if we slow down the data stream to a very slow rate? Will it still function?

To answer that question, we have to model the activity using that particular mechanism and see what happens.

The "IP by itself generates consciousness" approach ignores the mundane mechanical details of the system, however.

ETA: In other words, the question in the OP, when applied to the brain, is not like asking if a TM can perform a calculation at any speed, but rather it's like asking if an engine can crank and run at any idle speed. Consciousness is an action supported by specific physical activity, and we can't get an answer to our question using a model that removes those activities from our view.
 
Mind you, I don't find the argument remotely convincing, but I think it goes something like this:

Human consciousness is a "self-sustained reverberant state of coherent activity" (per the study)

The CM doesn't mention a "self-sustained reverberant state of coherent activity" (per CM)

Therefore the CM must be wrong. (per Piggy)

But I think what it really boils down to is this: Piggy is familiar with some neurology, and not familiar with the Computational Model, and therefore has chosen to espouse the former and reject the latter.

Incorrect.

What it boils down to is that the proposals for consciousness at any speed presented here, so far, want to pretend that we can ignore the physical mechanism responsible for consciousness by asserting (incorrectly) that the system's logic itself generates consciousness.
 
I have to make a funny face at that statement, since, if anything, that is exactly what the computational model is about.

As I have always said, this would make a lot more sense to everyone if everyone were a computer scientist.

Y'all would make a lot more sense if you paid attention to what consciousness is and how it's produced.
 
You seem to be under the impression that the Computational Model and the "reverberant coherent state" model are mutually exclusive. They are not. The CM is a description of consciousness at a different level than the hardware level set forth in the study you cited.

For instance, the portion I quoted from the article, about how consciousness has an object, logically implies that consciousness is a form of information processing. The object is the information; the processing is the coherent reverberant state.

As far as proposing methods, we've been doing that all along in this thread. The long and short of it is: it doesn't really matter as far as the question in the OP.

If you slow down some parts of the consciousness-generating system but not others, the system will most likely break, and your "what it's like" is "it's like not being conscious".

If you slow down all the physics of the consciousness-generating system but leave the inputs real-time, then your "what it's like" is probably a jumbled mess--conscious, but confused.

If you slow down the physics of the whole system, including the inputs, then, from the point of view of the consciousness, nothing has changed. From the point of view of an outside observer, the whole thing is just running arbitrarily slowly.

I don't believe we're talking about slowing down the physics of the system, are we?

If I slow down my computer's operating speed, the physics remains unchanged.
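
To make the clock-speed point concrete, here is a minimal Python sketch (a toy example of my own, not anything posted earlier in the thread). Inserting arbitrary pauses between the steps of a deterministic computation slows its execution but cannot change what it computes:

```python
import time

def step(state):
    """One deterministic update step of a toy 'program'."""
    return (state * 31 + 7) % 1000

def run(initial_state, steps, delay=0.0):
    """Run the same sequence of steps, optionally pausing between them."""
    state = initial_state
    for _ in range(steps):
        state = step(state)
        if delay:
            time.sleep(delay)  # slows the 'clock'; the underlying physics is untouched
    return state

fast = run(42, 100)              # full speed
slow = run(42, 100, delay=0.01)  # same steps, executed far more slowly
assert fast == slow              # the result does not depend on the step rate
```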
 
There's no evidence that logic per se can generate the activity of consciousness. It's a claim that's not only unsupported, but downright bizarre, seeing as how we know of no other real-world phenomenon that can be generated ex nihilo by mere "logic".

Except you follow this statement with..

Every other activity in the real world requires some sort of physical mechanism, and one that is physically appropriate to perform the action.

... so my question is why do you think we are suggesting mere logic generates things ex nihilo?

As before, you think logic refers to some magical abstract theory out in the void. It does not. Logic is a way to describe the behavior of physical stuff. When we say logic can generate consciousness, we mean exactly that physical stuff that behaves in a way that can be described by logic can generate consciousness.

It is pretty simple, really -- you are making it out to be more complicated than intended.

The substrate has to be able to do whatever mechanically needs to be done to make the action happen.

Yep.

Comparing the things you mention with what the brain appears to do bio-mechanically when it generates conscious experience (what little we know of it, at least), they do not appear properly equipped to perform the task.

Well, no offense, but that is because you are viewing it from the perspective of a layperson. Obviously, neurons are very different from buckets and pulleys. But are they different in a way that matters for consciousness? The computational model says no. And to understand why, you need to be educated in computation theory. The Church-Turing thesis, for starters.
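
As a toy illustration of the substrate-independence idea behind the Church-Turing thesis (again, a sketch of my own, not something from the study or the thread): the very same function can be realized by mechanisms that have essentially nothing in common physically.

```python
def add_arithmetic(a, b):
    """Addition realized on one 'substrate': the machine's native integers."""
    return a + b

def add_unary(a, b):
    """The same function realized on a very different substrate:
    numbers as strings of tally marks, addition as concatenation."""
    return len("|" * a + "|" * b)

# Two physically unalike mechanisms, one and the same computation.
assert all(add_arithmetic(a, b) == add_unary(a, b)
           for a in range(20) for b in range(20))
```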
 
The question is... what happens to that action (consciousness) if we slow down the data stream to a very slow rate? Will it still function?

To answer that question, we have to model the activity using that particular mechanism and see what happens.

The "IP by itself generates consciousness" approach ignores the mundane mechanical details of the system, however.

ETA: In other words, the question in the OP, when applied to the brain, is not like asking if a TM can perform a calculation at any speed, but rather it's like asking if an engine can crank and run at any idle speed. Consciousness is an action supported by specific physical activity, and we can't get an answer to our question using a model that removes those activities from our view.

But the answers to this question are both trivial and obvious, so I don't know why you are even arguing about it. It is quite simple.

1) If you slow down the entire system, then there is no loss of function. Physics assures us of this.

2) If you slow down only part of the system, then anything can happen. Physics also assures us of this.

Furthermore I don't think anyone is actually suggesting otherwise -- I just think there is a miscommunication here. The people that say you can single step consciousness are obviously -- at least, to me it is obvious -- talking about single stepping the whole system, not just part of it.
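
Here is a small Python sketch of that whole-versus-part distinction; the two coupled "parts" and their update rules are purely illustrative assumptions, not a model of anything in the brain:

```python
import time

def tick(a, b):
    """One synchronous update of two coupled parts of a toy 'system'."""
    return (a + b) % 101, (b + 2 * a) % 101

def run_whole(a, b, ticks, pause=None):
    """Advance both parts together each tick, optionally pausing between ticks
    (i.e. slowing down, or single-stepping, the whole system at once)."""
    states = []
    for _ in range(ticks):
        a, b = tick(a, b)
        states.append((a, b))
        if pause:
            pause()
    return states

def run_part_slowed(a, b, ticks, slowdown=3):
    """Advance part 'a' only every `slowdown` ticks while 'b' runs at full speed."""
    states = []
    for t in range(ticks):
        new_a = (a + b) % 101 if t % slowdown == 0 else a
        b = (b + 2 * a) % 101
        a = new_a
        states.append((a, b))
    return states

# Slowing the whole system leaves its history intact;
# slowing only one coupled part produces a genuinely different history.
assert run_whole(1, 1, 50) == run_whole(1, 1, 50, pause=lambda: time.sleep(0.01))
assert run_whole(1, 1, 50) != run_part_slowed(1, 1, 50)
```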
 
... so my question is why do you think we are suggesting mere logic generates things ex nihilo?

As before, you think logic refers to some magical abstract theory out in the void. It does not. Logic is a way to describe the behavior of physical stuff. When we say logic can generate consciousness, we mean exactly that physical stuff that behaves in a way that can be described by logic can generate consciousness.

It is pretty simple, really -- you are making it out to be more complicated than intended.

Then why on God's green earth do you propose that an abbey full of monks with quills can generate consciousness?
 
Well, no offense, but that is because you are viewing it from the perspective of a layperson. Obviously, neurons are very different from buckets and pulleys. But are they different in a way that matters for consciousness? The computational model says no. And to understand why, you need to be educated in computation theory. The Church-Turing thesis, for starters.

I'm sorry, my friend, but you're the one making the error, because neurons don't generate consciousness.

The activity of neurons is transparent to the higher-level structures which do.

My suggestion is that you need to be more educated in what consciousness is and how it is generated. Then you would be able to take your systems of buckets and pulleys, and your abbey full of monks, and compare it to the systems that, as far as we can tell, actually do the work of generating consciousness and discern whether or not these other configurations are an adequate substrate to perform the actions necessary to the task.

The fact that you can't, or won't, is astounding to me.

If you did, you would see that no system of buckets and pulleys, and no abbey full of monks, is going to generate consciousness, any more than they're going to make a cell divide.
 
But the answers to this question are both trivial and obvious, so I don't know why you are even arguing about it. It is quite simple.

1) If you slow down the entire system, then there is no loss of function. Physics assures us of this.

2) If you slow down only part of the system, then anything can happen. Physics also assures us of this.

Furthermore I don't think anyone is actually suggesting otherwise -- I just think there is a miscommunication here. The people that say you can single step consciousness are obviously -- at least, to me it is obvious -- talking about single stepping the whole system, not just part of it.

What do you count as "the whole system"?

I thought the OP specified slowing down the rate at which the steps of the program are executed relative to real time, in other words, the rate at which calculations are run:

Would the robot be conscious if we ran the computer at a significantly reduced clock speed? What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?

I don't think this implies slowing down physics itself.

That's why I proposed the Glacial Axon Machine, which keeps the body alive and allows synapses to operate normally, but slows signal propagation along axons so that a signal takes, on average, 1 second to travel from end to end.

I think that would be the equivalent of slowing down the speed at which steps in a program are performed while not slowing down time or physics.
 
Well, reports don't matter since we can program a non-conscious robot to report anything.

You would have to have decided that it's a non-conscious robot in the first place.

What criteria do you use to decide if something is conscious, for the clear-cut cases?
 
You would have to have decided that it's a non-conscious robot in the first place.

What criteria do you use to decide if something is conscious, for the clear-cut cases?

It's irrelevant to the OP, actually, b/c a conscious robot is stipulated.

As for how to decide if, say, a cuttlefish is conscious, I don't think there is a good way to decide right now.

Imo, we'll have to figure out much more about how it works, and then look at animals' brains and see if the requisite action is taking place.
 
It's irrelevant to the OP, actually, b/c a conscious robot is stipulated.

As for how to decide if, say, a cuttlefish is conscious, I don't think there is a good way to decide right now.

Imo, we'll have to figure out much more about how it works, and then look at animals' brains and see if the requisite action is taking place.

I'm not asking about other animals, special cases, or using fMRI. I want to know how you decide for yourself what is obviously conscious and what is obviously not.
 
Philosaur, when you say that I'm arguing against a version of the computational model of consciousness that no one actually believes, I think that's not quite accurate. Because it's not my intention to argue against any computational model of consciousness.

Now, if by that you mean that no one's claiming that logic alone generates consciousness, I beg to differ. I think that is what is being argued, for all practical purposes, by some posters on this thread.

Of course the logic can't be run without a substrate, everybody knows that.

But if you say that as long as the substrate runs the logic, then consciousness will emerge, absent any other mechanical component than what is required to run the logic, then we're on shaky ground.

Of course, it's true that a simulation of the brain can be performed as long as the logic is run.

Lots of work is done now using computer simulations of brain activity. But I believe -- I certainly hope -- that we all agree that computer simulations of brain activity won't actually be conscious, just as a computer simulation of an airplane will never actually go flying through the air.

What we're talking about, I think we'll agree, is a stipulated conscious machine.

So we have to ask the question, "What is necessary to make that happen?"

Now, if we were to ask the question, "What is necessary to make a robot that follows commands and goes out and digs ditches?", we'd all agree that it would require a combination of logic and mechanism. It's not going to dig ditches just by running the logic. The logic has to be hooked up to some sort of mechanical arm or other feature that can actually perform the action.

On the other hand, if we want to make a machine that gives us answers to math problems, the only mechanisms we need are a keyboard and a screen or printer, and the hardware to run the logic. As long as we can enter the inputs and print or speak the outputs, then the logic is sufficient to the task we're asking this machine to do.

So here is where we differ about what consciousness is.

If I understand you correctly -- and please correct me if I'm wrong -- consciousness is much like the latter case. As long as the logic runs on a substrate capable of running the logic, as long as the software is humming along, the machine can be conscious, just as it can give answers to math problems.

The way I read the research on the brain, this is a fundamentally incorrect view of consciousness.

The way I see it, consciousness is like digging the ditch. It requires the software, a substrate to run the software, and a mechanical component to make the activity occur. You could also compare it to playing the movie off the DVD, as I've done before.

I believe that this view is consistent with research and experience, and the logic-only view is not.
 
I'm not asking about other animals, special cases, or using fMRI. I want to know how you decide for yourself what is obviously conscious and what is obviously not.

Well, pretty much my best guess at what has a similar brain.

Dogs, pigs, dolphins, elephants, cats, horses, gorillas, chimps, I think all these must be conscious. Seems to me their brains will be similar enough to ours, and will need to do tasks similar enough to what ours have to do, that they're bound to be conscious.

Insects, I don't think so. Consciousness requires a lot of resources, and I don't see them having the necessary structures to make it happen.

Where the line is... damned if I know.
 
Well, pretty much my best guess at what has a similar brain.

Dogs, pigs, dolphins, elephants, cats, horses, gorillas, chimps, I think all these must be conscious. Seems to me their brains will be similar enough to ours, and will need to do tasks similar enough to what ours have to do, that they're bound to be conscious.

Insects, I don't think so. Consciousness requires a lot of resources, and I don't see them having the necessary structures to make it happen.

Where the line is... damned if I know.

I'll try again: how do you decide, in your own mind, which of your own actions count as clearly conscious and which as clearly not?

Without at least that shared criterion, the word "consciousness" is useless.
 
Then why on God's green earth do you propose that an abbey full of monks with quills can generate consciousness?

Because such a system can exhibit the physical behaviors we think are required -- and sufficient -- for consciousness.

Those behaviors being quite simply "computation."
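
For what "computation" amounts to at the level of individual steps, here is a minimal sketch of a toy register machine (my own illustration; the three-instruction set is invented for the example). Every single step is bookkeeping simple enough to carry out by hand with quill and parchment, yet chains of such steps can, per the Church-Turing thesis, compute anything computable:

```python
def run(program, registers):
    """Execute a list of ('inc', r), ('dec', r), or ('jz', r, target) instructions.
    Each step is trivial rule-following -- exactly the kind of thing a person
    (or a monk with a quill) could perform by hand."""
    pc = 0
    while pc < len(program):
        op, reg, *rest = program[pc]
        if op == 'inc':
            registers[reg] += 1
            pc += 1
        elif op == 'dec':
            registers[reg] -= 1
            pc += 1
        elif op == 'jz':                      # jump if the register holds zero
            pc = rest[0] if registers[reg] == 0 else pc + 1
    return registers

# Adds register 0 into register 1 by repeated decrement/increment.
program = [('jz', 0, 4), ('dec', 0), ('inc', 1), ('jz', 2, 0)]  # r2 stays 0: an unconditional jump back
print(run(program, {0: 3, 1: 4, 2: 0}))   # -> {0: 0, 1: 7, 2: 0}
```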
 
If you did, you would see that no system of buckets and pulleys, and no abbey full of monks, is going to generate consciousness, any more than they're going to make a cell divide.

I am going to just ignore the rest of that mishmash of contradictions you call a response and focus on this last statement of yours.

Because I think it illustrates how fundamental your misunderstanding of this issue really is. Why should a system of buckets and pulleys make a cell divide? Really, why? What makes you say something like this?

What makes you think that the failure of buckets and pulleys to make a cell divide -- because obviously they cannot -- means anything at all in the context of this discussion?
 
Let's take the case of a person viewing a subliminal image of an apple, then a non-subliminal image of an apple.

In the former case, input is processed, associated with patterns in memory, and stored in memory.

In the latter case, the same thing happens, but with the additional step that "I'm seeing an apple" is fed into the processors that generate conscious experience, and the viewer has the experience of seeing the apple.

Same data sets, but in the latter case an additional function is being performed.

This poses no problems whatsoever for a computer simulation of the brain. Remember: the whole system is slowed down, so the inputs that simulate visual stimuli are also slowed accordingly. Apparently, processing of the stimuli reaches the stage where they are identified, but they have not yet been sent to the awareness unit before new, conflicting stimuli arrive; those produce another image that is eventually handled by the awareness unit, relegating the first picture to what we call "subliminal".

There is no reason to believe that "awareness" is the only necessary part of a simulation of the brain's functions. We just focus on a distinction between "awareness" and the rest because "awareness" is the only part we can actually notice when we are conscious.
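
Here is a toy Python sketch of that buffering story; the single-slot buffer, the tick counts, and the "awareness unit" reader are illustrative assumptions only, not a proposal about how vision actually works:

```python
def run(stimuli, input_period=1, read_period=1):
    """Single-slot buffer: a new stimulus overwrites it every `input_period` ticks,
    and a (purely hypothetical) 'awareness unit' reads it every `read_period` ticks."""
    ticks = len(stimuli) * input_period
    buffer = None
    noticed = []
    next_stimulus = 0
    for t in range(ticks):
        if t % input_period == 0 and next_stimulus < len(stimuli):
            buffer = stimuli[next_stimulus]   # a new input overwrites whatever was there
            next_stimulus += 1
        if t % read_period == 0 and buffer is not None:
            noticed.append(buffer)            # what actually reaches 'awareness'
    return noticed

stimuli = list("ABCDEFGH")
normal       = run(stimuli, input_period=1, read_period=1)
whole_slowed = run(stimuli, input_period=4, read_period=4)  # everything slowed together
part_slowed  = run(stimuli, input_period=1, read_period=4)  # only the reader is slowed

assert normal == whole_slowed == stimuli  # same sequence noticed, just stretched in time
assert part_slowed != stimuli             # some stimuli are overwritten unnoticed: "subliminal"
```

When everything, inputs included, is slowed together, the same items reach the reader in the same order; only when the reader alone is slowed do stimuli get overwritten before they are ever noticed.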
 
Could someone recommend a book with a general overview of the current state of AI? Basically, I have in mind something like Gödel, Escher, Bach, but updated to the current state of research.


As far as I could gather, there are a couple of possible scenarios by which humankind could develop AI:

1. Someone sits down and writes slick software that is capable of learning. Then we teach it like a kid and somehow it becomes self-aware.

2a. Viruses with self-modifying abilities and/or mutations evolve AI in the ever-growing Internet.

2b. A cellular automaton, some game of life, in a very complex virtual world with competition for resources, evolves ever more complex virtual life and ultimately becomes self-aware.

2c. With time and Moore's law, Google or the like, with all of its services, gets so complicated that the software on its zillion parallel CPUs just sparks self-awareness.

3. Imitate or simulate human brain.

Anything missing here?
Care to bet on the winner? I would pick 1 or 3...
Is there at least some development that could rule out one of these scenarios?


Thanks!
 
Give me a nutshell version, then.

If this model insists that consciousness arises as a result of the logic alone, then you bet I reject it, as it is absurd on its face, and defies everything we know about the world in general and the brain in particular.

We've been trying to give a nutshell version. You keep rejecting the nutshell version because you misunderstand it--which is (forgive me) understandable. The CM (or Computational Theory of Mind, CTM, if you feel like looking it up) is not in itself that hard to understand, but it relies on LOTS of work done before. And it's pretty clear you don't have that foundational knowledge--the Church-Turing Thesis, the concept of "computability" from mathematics, the notions of Multiple Realizability and functionalism from computer science and cognitive science. I doubt anyone here has time to give you a primer course on all of this. I keep hoping you'll get interested enough to look it up for yourself, but it seems you are stuck on your pet theory to the point you won't even entertain other possibilities.

If you said "you know, I've read the literature about CM, and I have a good grasp of what it says, and I'm just not convinced" then I would be thrilled to continue this debate. As it is, this isn't interesting any more.

There are competing theories out there. There are real criticisms of CM that are difficult to meet. But your criticism is not among them, because it essentially amounts to "Nuh-uh, that's wrong." You are merely contradicting the central conclusion of CM, but without finding a weakness in the logic or the premises upon which it rests. That's why so many of us are just stunned at the intellectual arrogance you exhibit when you argue against a theory you obviously do not understand and seem to have no interest in trying to understand.

Until you educate yourself about current AI research (bare neurology doesn't count), arguing with you is futile.
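
For readers wondering what "multiple realizability" and functionalism amount to in practice, here is a toy sketch (my own; the class names are hypothetical): two structurally different mechanisms playing the same functional role are indistinguishable from their behavior alone.

```python
class ListStack:
    """A stack realized on one 'substrate': a contiguous Python list."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class LinkedStack:
    """The same functional role realized on a different substrate: linked cells."""
    def __init__(self):
        self._top = None
    def push(self, x):
        self._top = (x, self._top)
    def pop(self):
        x, self._top = self._top
        return x

def behave(stack):
    """Probe the stack purely through its input/output behavior."""
    for x in [1, 2, 3]:
        stack.push(x)
    return [stack.pop() for _ in range(3)]

# From the outside -- the only view functionalism cares about -- they are identical.
assert behave(ListStack()) == behave(LinkedStack()) == [3, 2, 1]
```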
 
