Has consciousness been fully explained?

Of course, a simulation is ALSO physical, and if consciousness is computation-like interaction, then there's a good chance that a simulation, being a computational interaction itself, can be conscious (assuming it can interact with the outside world).


Would your simulation be aware that it is interacting with the outside world?

For what it's worth, that is contrary to what rocketdodger claims. He equates a simulation's being able to know it's a simulation with the sim possessing universal awareness... that (possible) entity being us, unable to know whether we are a simulation/brain in a vat or exist in the external world.
 
... if the thing being conscious doesn't interact with the outside world in any way, how can it have any thoughts?
Didn't we go through this earlier? The 'outside world' is what provides sensory input. If we're talking about a computational conscious entity, is there any reason - in principle - why, if it isn't feasible to provide direct I/O capabilities, a sufficient subset of 'outside world' sensory input could not be provided via computational means?
 
Didn't we go through this earlier? The 'outside world' is what provides sensory input. If we're talking about a computational conscious entity, is there any reason - in principle - why, if it isn't feasible to provide direct I/O capabilities, a sufficient subset of 'outside world' sensory input could not be provided via computational means?

I'm not following you.

What I meant is that all of our thoughts, and our minds, relate to things experienced that are not us. We get data from "not us" and we can then think about it. I can't imagine how we could think about anything without some sort of input.

Mind you, this input may be part of a simulation, or entirely computational, as well. In any case we also need such an interaction in order to study the thing and determine whether it's conscious or not.
 
What I meant is that all of our thoughts, and our minds, relate to things experienced that are not us. We get data from "not us" and we can then think about it. I can't imagine how we could think about anything without some sort of input.

Mind you, this input may be part of a simulation, or entirely computational, as well. In any case we also need such an interaction in order to study the thing and determine whether it's conscious or not.
Um, yes, this is pretty much what I was saying. If a conscious entity requires I/O from/to an environment external to itself, that I/O can be supplied, whether it is virtual (computational) or via an interface to the real world.
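
To make the point concrete, here is a minimal sketch in Python (all names, such as VirtualSensor and run_entity, are invented for illustration): the entity's update loop receives a stream of numbers either way, and cannot tell whether that stream comes from real hardware or is generated computationally.

```python
import math
import random


class VirtualSensor:
    """Generates 'outside world' input purely computationally."""

    def __init__(self):
        self.t = 0

    def read(self):
        self.t += 1
        # A synthetic signal standing in for light levels, sound, etc.
        return math.sin(self.t / 10.0) + random.gauss(0, 0.1)


class HardwareSensor:
    """Stub for a real-world interface (camera, microphone, ...)."""

    def read(self):
        raise NotImplementedError("would poll an actual device here")


def run_entity(sensor, steps=5):
    """The entity's internals only ever see a stream of numbers."""
    state = 0.0
    for _ in range(steps):
        state = 0.9 * state + 0.1 * sensor.read()  # trivial internal dynamics
        print(f"internal state: {state:+.3f}")


run_entity(VirtualSensor())
```

Swapping the HardwareSensor stub in for the VirtualSensor would change nothing about the entity's internal dynamics -- which is the point being made above.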
 
Hi, all.

Still discussing the simulations, eh?

Well, I've been reading my new cognitive neuroscience books, and although I'm not all the way through (by a long shot), so far the computationalist view -- as it has been presented here -- is mentioned only to point out its fatal flaws.

When I get some time, I'll have to post some choice excerpts.

Suffice it to say that absolutely no one agrees with Pixy Misa that the phenomenon of consciousness is currently understood, and there is no indication that anyone believes that such a behavior can be purely programmed (rather than built) into a machine, and no trace of the tortured interpretations of Church-Turing.

Regarding the potential consciousness of simulations, even Henry Markram, who is working with IBM to reverse-engineer the human brain in simulation, cautions that their project will "not... create a brain or artificial intelligence" but will only "represent the biological system".

I think it's important to note that the lead on a project that's attempting to simulate the brain at the neuronal level makes the same distinction that I and the other physicalists here have made -- they do not expect that even a perfect simulation of the brain would make the machine running the simulation conscious.

Here's Markram citing Jeff Hawkins, btw, on the state of the research: "'The main problem in computational neuroscience is that theoreticians [who] do not have a profound knowledge of the neuroscience build models of the brain.' Current models 'may capture some elements of biological realism, but are generally far from biological.' What the field needs... is 'computational neuroscientists willing to work closely with the neuroscientists to follow faithfully and learn from the biology.'"

According to Hawkins, "if we don't know how humans think, then we can't create a machine that can think like a human", which should, of course, be obvious.

And fooling people into thinking you've done it is not a proper benchmark, just as it would not be for any other scientific or engineering endeavor.

I also like Gazzaniga's reframing of the so-called "hard problem" of consciousness as an "explanatory gap": even if we were to discover tomorrow all of the neural correlates of every possible state of human consciousness (NCCs), we'd still have a problem -- we would have no way of explaining why the NCC associated with, say, seeing a green light is correlated with that particular conscious experience and not some other conscious experience or none at all.

In fact, we don't even have a way of imagining the solution to that problem!

Obviously, there's some conceptual framework we haven't yet figured out.

Gazzaniga offers an analogy: imagine trying to explain to a person in the ancient world how sound, light, and the ripples in a pond are physically similar. It makes sense to us because we understand waves, but without that theoretical framework, you're not going to see the connections.

Somewhere, there's a framework that will allow us to understand how and why a given NCC gives rise to a particular conscious experience, and not some other experience or no experience, but we haven't found it yet.

In any case, I'm finding no support at all for the claim that's been made here that a "computational model of consciousness" is the prevailing one... or for that matter that such a "model" even exists.
 
I also like Gazzaniga's reframing of the so-called "hard problem" of consciousness as an "explanatory gap": even if we were to discover tomorrow all of the neural correlates of every possible state of human consciousness (NCCs), we'd still have a problem -- we would have no way of explaining why the NCC associated with, say, seeing a green light is correlated with that particular conscious experience and not some other conscious experience or none at all.

I'm not sure I want to get back into this, but I might as well try to keep up to speed.
What do you mean by a neural correlate of consciousness?
 
I'm not sure I want to get back into this, but I might as well try to keep up to speed.
What do you mean by a neural correlate of consciousness?

Subjectively speaking, it would be a neural configuration which corresponded to the experience of feeling hungry. Objectively speaking, it would be the neural configuration which corresponded to the physical state of needing food.
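
As a loose illustration of what "correlate" means here -- and nothing more -- a sketch in Python, with regions, numbers, and labels invented for the example: treat a candidate NCC as whatever activation pattern reliably co-occurs with a given reported experience.

```python
# Invented data: (activation pattern, reported experience) pairs.
recordings = [
    ({"hypothalamus": 0.9, "visual_cortex": 0.1}, "hungry"),
    ({"hypothalamus": 0.8, "visual_cortex": 0.2}, "hungry"),
    ({"hypothalamus": 0.1, "visual_cortex": 0.9}, "seeing green"),
]


def candidate_correlate(recordings, experience):
    """Average activation across all recordings tagged with that experience."""
    matches = [pattern for pattern, label in recordings if label == experience]
    regions = matches[0].keys()
    return {r: sum(m[r] for m in matches) / len(matches) for r in regions}


print(candidate_correlate(recordings, "hungry"))
# -> roughly {'hypothalamus': 0.85, 'visual_cortex': 0.15}: the "correlate" of hunger
```

Note that this captures only co-occurrence; nothing in it explains why that pattern should feel like hunger, which is precisely the explanatory gap discussed above.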
 
Subjectively speaking, it would be a neural configuration which corresponded to the experience of feeling hungry. Objectively speaking, it would be the neural configuration which corresponded to the physical state of needing food.

I'm still confused. Since most posters seem to agree that consciousness isn't a unitary thing, and is certainly not static in time, I'm not sure there would even be such a thing.

Secondly, if we did understand exactly which sequence of brain states led to every conceivable conscious feeling, then of course we'd understand consciousness in exactly the same way we understand everything else. Or am I missing something?
 
Hi, all.

Still discussing the simulations, eh?

Well, I've been reading my new cognitive neuroscience books, and although I'm not all the way through (by a long shot), so far the computationalist view -- as it has been presented here -- is mentioned only to point out its fatal flaws.

When I get some time, I'll have to post some choice excerpts.

Suffice it to say that absolutely no one agrees with Pixy Misa that the phenomenon of consciousness is currently understood

I don't think Pixy believes that the phenomenon is understood, not completely, anyway. I think that his claim is that the basic mechanism is simple to describe.

Usually, the problem in consciousness discussions begins when philosophers try to inject some nonsense into the debate, in the form of "but consciousness is the BASIS of observation", and so on.
 
I don't think Pixy believes that the phenomenon is understood, not completely, anyway. I think that his claim is that the basic mechanism is simple to describe.

I know that Pixy's claims are so outlandish that there's a tendency to assume that he must mean something different, but we can only go by what he says. If he's ever said that consciousness is not completely understood, I must have missed it.
 
In any case, I'm finding no support at all for the claim that's been made here that a "computational model of consciousness" is the prevailing one... or for that matter that such a "model" even exists.

That's because you don't know what you're looking for.

The "computational model" is merely the idea that consciousness comes from the behavior of our neurons, and the behavior alone.

People call it the "computational" model because it turns out the behavior of our neurons reduces to computation.

So yeah, everything you have been reading (that is credible) is support for the computational model.
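
For a flavor of what "reduces to computation" means, here is a toy leaky integrate-and-fire neuron, one standard textbook way of modeling neuron behavior computationally. The parameters are illustrative, not biologically calibrated, and this is a sketch of the general idea rather than anyone's specific model.

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Integrate weighted input, leak over time, fire past threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes


# A steady input current periodically drives the neuron over threshold.
print(lif_neuron([0.3] * 10))
# -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```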
 
Regarding the potential consciousness of simulations, even Henry Markram, who is working with IBM to reverse-engineer the human brain in simulation, cautions that their project will "not... create a brain or artificial intelligence" but will only "represent the biological system".

I could simulate the physical characteristics of an automobile in order to study the effects of impact on new designs. But my simulation would give no indication of how the car drives. Similarly, it sounds as though the IBM project is not *meant* to create AI, so it's not surprising that it won't. In any event, Markram's comments about *this* particular simulation say nothing about the possibility of creating a thinking machine.

I think it's important to note that the lead on a project that's attempting to simulate the brain at the neuronal level makes the same distinction that I and the other physicalists here have made -- they do not expect that even a perfect simulation of the brain would make the machine running the simulation conscious.

It may say that elsewhere in the book you are reading, but this quote you've posted says nothing of the sort. It only says that the simulation IBM is working on will not create consciousness.

Here's Markram citing Jeff Hawkins, btw, on the state of the research: "'The main problem in computational neuroscience is that theoreticians [who] do not have a profound knowledge of the neuroscience build models of the brain.' Current models 'may capture some elements of biological realism, but are generally far from biological.' What the field needs... is 'computational neuroscientists willing to work closely with the neuroscientists to follow faithfully and learn from the biology.'"

According to Hawkins, "if we don't know how humans think, then we can't create a machine that can think like a human", which should, of course, be obvious.

This is all perfectly reasonable, and neither surprising nor damning to the computationalists' case.

I also like Gazzaniga's reframing of the so-called "hard problem" of consciousness as an "explanatory gap": even if we were to discover tomorrow all of the neural correlates of every possible state of human consciousness (NCCs), we'd still have a problem -- we would have no way of explaining why the NCC associated with, say, seeing a green light is correlated with that particular conscious experience and not some other conscious experience or none at all.

In fact, we don't even have a way of imagining the solution to that problem!

Gazzaniga isn't adding anything new to the discussion. It's always been an explanatory gap. My personal take on it is that the gap may remain forevermore. I can live with that. To me, that gap doesn't change the possibility of machine consciousness.

Obviously, there's some conceptual framework we haven't yet figured out.

Gazzaniga offers an analogy: imagine trying to explain to a person in the ancient world how sound, light, and the ripples in a pond are physically similar. It makes sense to us because we understand waves, but without that theoretical framework, you're not going to see the connections.

Somewhere, there's a framework that will allow us to understand how and why a given NCC gives rise to a particular conscious experience, and not some other experience or no experience, but we haven't found it yet.

But this analogy holds for *any* phenomenon we don't understand. What's it adding to the discussion?

In any case, I'm finding no support at all for the claim that's been made here that a "computational model of consciousness" is the prevailing one... or for that matter that such a "model" even exists.

Not that it's the prevailing model, but there *is* a model:
http://en.wikipedia.org/wiki/Computational_theory_of_mind
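
For a flavor of what "symbol manipulator" means in that article, here is a toy sketch: a lookup table of rules mapping symbolic internal states and percepts to symbolic actions. The rules and symbols are invented purely for illustration; actual computational theories of mind are far richer than a lookup table.

```python
# Invented rules: (internal state, percept) -> action, all symbolic.
rules = {
    ("hungry", "food_visible"): "reach_for_food",
    ("hungry", "no_food"): "search_for_food",
    ("sated", "food_visible"): "ignore_food",
}


def step(internal_state, percept):
    """One 'cognitive' step: rule-governed transformation of symbols."""
    return rules.get((internal_state, percept), "do_nothing")


print(step("hungry", "food_visible"))  # -> reach_for_food
```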
 
from wiki

The computational theory of mind is a philosophical concept that the mind functions as a computer or symbol manipulator.
And note we are in R & P, not science. :)
 