
Has consciousness been fully explained?

Even if thermostats and computers were conscious (in some rudimentary sense), we would probably consider them not to be conscious anyway.

We consider things conscious according to their resemblance to ourselves - in appearance, or behaviour, or in the way they operate. It's not a stretch to consider the person sitting opposite us to be conscious. For a good while, we haven't considered the Moon to be conscious. Without compelling evidence that the mechanism and operation of the computer is comparable to that of a human mind, there's no reason to consider it conscious.
 
I don't think this is established at this point. If we knew what was different, we could include it in the definition of consciousness. At this point, all we can do is point at SRIPs that have consciousness (animals) and SRIPs that don't (thermostats and computers) and say that they are different.
That makes no sense. If you don't know what the difference is, how do you know there's a difference? How do you know that animals are conscious and computers aren't?
 
That is the general consensus at the moment. Do you consider them to be conscious? If so, why?

I don’t think they are conscious due to the general consensus. Ultimately I really don’t know. I’m simply pointing out that even if they were conscious, without us knowing, most of us would not accept them to be conscious – that’s the dilemma with using a simple “have/have-not” dichotomy regarding this issue.

I wonder what minimal level of structural and behavioral complexity a system must display before we (the general populace) would consider it to be conscious (or to have conscious aspects)? Or should we start to consider the issue in terms of scale, rather than as a matter of have/have-not altogether?
 
Okay; that means we're starting from the same position but you would include something more in the definition. That's perfectly reasonable.

What else would you include?

I think the definition needs to include the functions of consciousness. The problem is that that will make the definition complicated, but I'll start with an inadequate version that will need to be expanded.

"A SRIP system that functions as a self-aware, future anticipating, sensation producing, decision making program (for lack of a better word)."

I also think that there are levels of consciousness so that animals which lack some of the functionality that we have would still be considered conscious. And I don't think that there is any clear line between conscious and not conscious.


...and why?

Because it needs to be distinguished from other SRIPs.

And what is different about the self-referential information processing which leads to consciousness, compared to that which doesn't?

Layers of complexity that give the system added functionality. Photoshop and Hello World are both series of 1's and 0's, but Hello World is not able to edit bitmap graphics. Photoshop has significantly greater levels of complexity that give it novel functionalities.
 
I think the definition needs to include the functions of consciousness. The problem is that that will make the definition complicated, but I'll start with an inadequate version that will need to be expanded.

"A SRIP system that functions as a self-aware, future anticipating, sensation producing, decision making program (for lack of a better word)."
Okay.

All self-referencing information processing systems make decisions, have sensations, and are self-aware.

Anticipation, though, is not implicit in self-reference; that's something new. It's not complex in itself, but it is another layer on top of reference and self-reference.
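
To make that layering concrete, here's a toy Python sketch (purely my own illustration; the class names and the naive extrapolation rule are invented for the example). The first class only refers back to its own prior states; the second adds one extra step that acts on a state which hasn't happened yet.

```python
# Toy illustration: self-reference, plus a simple anticipation layer on top.
# Everything here (names, the linear extrapolation rule) is invented for the
# example; it is not a model of any real system.

class SelfReferencingProcess:
    def __init__(self):
        self.history = []              # the system's record of its own states

    def step(self, reading):
        self.history.append(reading)   # refers back to its own prior state

    def last_state(self):
        return self.history[-1] if self.history else None


class AnticipatingProcess(SelfReferencingProcess):
    def anticipate(self):
        # The "new layer": extrapolate the next reading from the last two,
        # i.e. act on a state that hasn't happened yet.
        if len(self.history) < 2:
            return self.last_state()
        return self.history[-1] + (self.history[-1] - self.history[-2])


p = AnticipatingProcess()
for r in (18.0, 18.5, 19.0):
    p.step(r)
print(p.last_state())   # 19.0 -- self-reference
print(p.anticipate())   # 19.5 -- anticipation, the extra layer
```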

I also think that there are levels of consciousness so that animals which lack some of the functionality that we have would still be considered conscious. And I don't think that there is any clear line between conscious and not conscious.
There's certainly a continuum of complexity in conscious systems. But I think it's perfectly valid to draw a line at the lower end, and say, if it doesn't at least have these features, it's not conscious. Is a bee conscious? Maybe, depends on the precise definition we settle on. Is a rock conscious? No. Any definition of consciousness that included rocks would be useless.

Because it needs to be distinguished from other SRIPs.
Well, I'd ask why again, but you've given me one example that I agree is valid - anticipation. We can argue about whether or not to include anticipation in our definition of consciousness, but it is something concrete, something that really happens in brains, something that could validly form part of such a definition.

In fact, I quite like it. It will need some careful thought, but this is exactly the type of non-handwaving response I've been asking for. :)

Layers of complexity that give the system added functionality. Photoshop and Hello World are both series of 1's and 0's, but Hello World is not able to edit bitmap graphics. Photoshop has significantly greater levels of complexity that give it novel functionalities.
Yes. And there is a lower bound of functionality required for any conscious system; I'm trying to establish that lower bound.
 
I don’t think they are conscious due to the general consensus. Ultimately I really don’t know. I’m simply pointing out that even if they were conscious, without us knowing, most of us would not accept them to be conscious – that’s the dilemma with using a simple “have/have-not” dichotomy regarding this issue.

I agree that it is tough to nail down a distinction, but we have an extremely good idea of exactly how thermostats and computers work, and they do not have any components that could give them the functionalities we associate with consciousness. It would have to be true that consciousness precedes brains (that it is a fundamental force of nature) in order for something like a thermostat to be conscious.
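
For what it's worth, the entire "mechanism" of a simple thermostat fits in a few lines. Here's a toy Python version (the setpoint and hysteresis band are made-up numbers, just for illustration) to show how little there is in it that could carry the functions we've been listing:

```python
# Toy thermostat: the whole control law in a few lines.
# The setpoint and hysteresis band are arbitrary values chosen for illustration.

def thermostat(temperature, heater_on, setpoint=20.0, band=0.5):
    """Return the heater state for the next instant."""
    if temperature < setpoint - band:
        return True        # too cold: switch the heater on
    if temperature > setpoint + band:
        return False       # too warm: switch it off
    return heater_on       # inside the band: keep doing whatever it was doing

heater = False
for temp in (18.0, 19.4, 20.6, 20.2):
    heater = thermostat(temp, heater)
    print(temp, heater)
```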

I wonder what minimal level of structural and behavioral complexity a system must display before we (the general populace) would consider it to be conscious (or to have conscious aspects)? Or should we start to consider the issue in terms of scale, rather than as a matter of have/have-not altogether?

Yes, I think a scale would be more appropriate, but I would still say that one end of the scale is entirely non-conscious.
 
Okay.

All self-referencing information processing systems make decisions, have sensations, and are self-aware.
We agree on 'make decisions'. You have a ways to go to demonstrate to most people the 'have sensations' and 'are self-aware' parts for anything not currently accepted as 'being alive'.
 
Yes, I think a scale would be more appropriate, but I would still say that one end of the scale is entirely non-conscious.

And we don't know whether consciousness is a matter of degree at all. It could be an either/or thing, with some things being entirely conscious, and others entirely not. Maybe animals aren't conscious, or babies.
 
Because of all the other human behaviours for which the simulation is not the same as the activity itself. Eating lunch. Dancing. Giving birth. Getting angry.

Why are those requisites for consciousness? Are you saying conscious beings *must* be able to eat lunch, dance, give birth, etc? And getting angry is obviously in the category of things that, when done in a simulation, are actually done.


Also because computers can't do these things. Until they can, we don't know if it's actually possible.

Blah blah blah of course, but that isn't your position, is it? Your position is that we should consider these things impossible until proven otherwise.

If that is not your position -- if you aren't leaning one way or the other -- then just say it, westprog. C'mon, just say it.
 
I find this to be the most profound point made in this entire thread.

Yet, it hasn't been brought up in a post since.

I wonder why?

Because it is very thought-provoking, and I have been mulling it over since I first read it. For me it explains one of the issues I have long had with the definition that Pixy uses; it would appear he was right and I was wrong.
 
I agree that it is tough to nail down a distinction, but we have an extremely good idea of exactly how thermostats and computers work, and they do not have any components that could give them the functionalities we associate with consciousness. It would have to be true that consciousness precedes brains (that it is a fundamental force of nature) in order for something like a thermostat to be conscious.

Yes, I think you’re right: there ought to be a minimal requirement, in terms of structural and functional complexity, before a system can display the kind of behavioral complexity we’re looking for (i.e. what we would consider a system displaying consciousness).
 
And we don't know whether consciousness is a matter of degree at all. It could be an either/or thing, with some things being entirely conscious, and others entirely not. Maybe animals aren't conscious, or babies.

Well, if we’re looking at the human system as a whole, then it seems to be a matter of degree from at least the following perspective: we know of processes happening in the body that never enter the “conscious arena” as such (numerous critical control functions that take place all the time). Yet we talk about the whole system as being conscious, of course (even though only aspects of it appear to be that way).

What are the things that in principle could be entirely conscious?
 
Why are those requisites for consciousness? Are you saying conscious beings *must* be able to eat lunch, dance, give birth, etc? And getting angry is obviously in the category of things that, when done in a simulation, are actually done.

We don't know what is necessary for consciousness. Until we do, we can't say that consciousness can be emulated by simulation, since there are many human behaviours which involve consciousness which are not equivalent to their simulation. Merely selecting a subset of such behaviours which a computer simulation might be able to do (though it currently cannot) does not demonstrate that simulation of human behaviour is necessarily the same as emulation.



Blah blah blah of course, but that isn't your position, is it? Your position is that we should consider these things impossible until proven otherwise.

If that is not your position -- if you aren't leaning one way or the other -- then just say it, westprog. C'mon, just say it.

Oh, you don't worm my double secret agenda out of me that easily. You'll just have to pretend that what I post is what I really mean.
 
I find this to be the most profound point made in this entire thread.
It might be, if you could determine whether consciousness was present in the machines performing the computations that produced symphonies or sonnets, or provided scientific analysis. It's likely that machines without consciousness can be programmed by conscious humans to arrive at the results mentioned.
 
That makes no sense. If you don't know what the difference is, how do you know there's a difference? How do you know that animals are conscious and computers aren't?

I can identify different outcomes without necessarily being able to identify what is different about the processes that led to the different outcomes. Just because I can categorize things as conscious and non-conscious, it doesn't mean I can identify what is different about the self-referential information processing which leads to consciousness, compared to that which doesn't.
 
We agree on 'make decisions'.
Good!

You have a ways to go to demonstrate to most people the 'have sensations' and 'are self-aware' parts for anything not currently accepted as 'being alive'.
Well, no. I don't know about "most people", and don't care, but these points have been clearly established.

If these self-referencing information processing systems don't have sensations, what is it that they are acting and reporting on? What is it that leads to them making a decision one way rather than another?

As for self-aware, that's bleedin' obvious. They're self-referential, they process information, they can examine their own internal state and processes. That's what self-awareness is.
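
In the weak sense I'm using, "examining its own internal state" needs no more than something like this toy Python sketch (all names invented for the example): the system's output is a report about the system itself.

```python
# Toy illustration of the weak sense of "self-aware" described above:
# a process that inspects and reports on its own internal state.
# All names are invented for the example.

class MonitoredProcess:
    def __init__(self):
        self.inputs_seen = 0
        self.last_input = None

    def process(self, value):
        self.inputs_seen += 1
        self.last_input = value

    def report_on_self(self):
        # The output is about the system itself, not about the world.
        return {"inputs_seen": self.inputs_seen, "last_input": self.last_input}


m = MonitoredProcess()
m.process("hello")
m.process(42)
print(m.report_on_self())   # {'inputs_seen': 2, 'last_input': 42}
```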
 
I think it's fair to say that science has a long way to go before it understands enough about consciousness to be able to have a view on what it may be.
 
Because of all the other human behaviours for which the simulation is not the same as the activity itself. Eating lunch. Dancing. Giving birth. Getting angry.
Some of these things are not like the others.

And why do you think that, for example, eating lunch is more similar to the process of consciousness than is the composition of a sonnet? That would seem, both prima facie and upon deep consideration, to be utterly absurd.

Also because computers can't do these things.
Why do you say that?

Until they can, we don't know if it's actually possible.
And why do you say that?

And, ultimately, why should we consider this objection anything other than a red herring thrown up to protect a failed worldview?
 
Okay.

All self-referencing information processing systems make decisions, have sensations, and are self-aware.

You're right about decision making (although the kind of decision making we typically associate with consciousness is significantly more complex than that of rudimentary SRIPs).

I disagree that all SRIPs have self-awareness, though. I actually can't think of any SRIP besides a brain that has a mechanism for self-awareness (I am not really sure that all brains even have such a mechanism). Unless you are equating self-aware with self-referential.

Sensation, on the other hand, is tricky. Yes, all SRIPs have some mechanism for taking in information from the environment, so maybe that could be said to be equivalent to a sense organ, but do all SRIPs feel the sensation? Take a Roomba, for instance. If a Roomba could talk, could it tell you what it feels like to sense and avoid objects in its environment? Is it like anything for it to sense and avoid objects in its environment? I don't think it is, because I can't think of any mechanism that it possesses that would enable this. The Roomba just seems to go from sense to action without any need to experience, and without anything like a neural network that could give rise to experiencing the sensation.
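
Roughly what I mean, as a toy Python sketch (the sensor names and rules are made up for illustration): the whole path from sensing to acting is a single lookup, with no intermediate state that the system itself ever examines.

```python
# Toy sketch of a purely reactive sense-to-action mapping, roughly the point
# being made about the Roomba: sensing feeds straight into action, with no
# intermediate state the system itself examines. Sensor names and rules are
# invented for illustration.

def reactive_step(bumper_pressed, cliff_detected):
    if cliff_detected:
        return "reverse"
    if bumper_pressed:
        return "turn"
    return "forward"

print(reactive_step(bumper_pressed=False, cliff_detected=False))  # forward
print(reactive_step(bumper_pressed=True, cliff_detected=False))   # turn
```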

There's certainly a continuum of complexity in conscious systems. But I think it's perfectly valid to draw a line at the lower end, and say, if it doesn't at least have these features, it's not conscious.
Is a bee conscious? Maybe, depends on the precise definition we settle on. Is a rock conscious? No. Any definition of consciousness that included rocks would be useless.

Completely agree.

Well, I'd ask why again, but you've given me one example that I agree is valid - anticipation. We can argue about whether or not to include anticipation in our definition of consciousness, but it is something concrete, something that really happens in brains, something that could validly form part of such a definition.

In fact, I quite like it. It will need some careful thought, but this is exactly the type of non-handwaving response I've been asking for. :)

One out of four ain't bad. :) Let's see if I can't convince you of a few others. ;)


Yes. And there is a lower bound of functionality required for any conscious system; I'm trying to establish that lower bound.

I think that's really tough to nail down. The line will undoubtedly be blurry, but I agree that it must be there even if it is not very precise.
 