
Has consciousness been fully explained?

How would you feel about the several varieties of computational creativity that have been explored? Can you conceive of a computational system that combines such approaches to produce a level of creativity comparable with that of an invertebrate? Or a small mammal, e.g. a mouse? Or a larger mammal? Where would you draw the line?

If the 'creativity' in question is algorithmically based, the act of creativity stops at the creation of said algorithm. Anything produced by such a process is no more creative than calculating pi to some arbitrary digit. Sure, it may not be generally known what the Xth digit of pi is before the calculation is carried out, but the information provided is not 'created'; it's simply 'revealed' from what is already encoded in the initial state of the calculation process.
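To make the pi example concrete, here's a minimal sketch (assuming the mpmath arbitrary-precision library) that 'reveals' a digit of pi nobody may have looked at before. Nothing is created; the answer was fixed by the definition of pi all along:

```python
# 'Revealed, not created': the value of any digit of pi is fixed by the
# definition of pi before we ever compute it. Requires mpmath.
from mpmath import mp

def nth_decimal_of_pi(n):
    """Return the nth decimal digit of pi (n=1 -> the '1' in 3.14...)."""
    mp.dps = n + 20              # extra guard digits so rounding can't touch digit n
    s = mp.nstr(mp.pi, n + 10)   # "3." followed by at least n+9 decimals
    return int(s[n + 1])         # index 0 is '3', index 1 is '.', digit n is at n+1

print(nth_decimal_of_pi(100))    # -> 9: unknown to us beforehand, but never 'created'
```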


What kind of pro-active behaviour do bacteria show that is not 'programmed into' them? Do you feel we could not code a system that would show all the various behaviours of a bacterium (including 'pro-active') - without explicitly programming those behaviours?

The explicit details of its motile trajectory through whatever medium it happens to be living in would be one example. The genes of a bacterium -- as of any other organism -- simply provide the templates coding for particular proteins; when and how those genes are expressed is determined by factors beyond the genome itself.

Of course, since they are relatively simple creatures, it should be easier in principle for us to predict their general behavior than, say, a mouse's. Even so, they will probably exhibit a greater level of indeterminacy in their behavior than an inanimate system of comparable size.
 
If the 'creativity' in question is algorithmically based, the act of creativity stops at the creation of said algorithm. Anything produced by such a process is no more creative than calculating pi to some arbitrary digit. Sure, it may not be generally known what the Xth digit of pi is before the calculation is carried out, but the information provided is not 'created'; it's simply 'revealed' from what is already encoded in the initial state of the calculation process.
Consider an algorithm that uses chaotic functions to generate its creative results from real-world input data. Tiny variations in the initial conditions (data) would produce very different results.

Some promising hypotheses about the means by which the brain generates its output (esp. creativity) involve chaotic behaviour, particularly as it has already been established that chaotic behaviour is a feature of brain function on various levels.
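For a concrete toy example of that sensitivity, here's the logistic map in its chaotic regime -- a standard illustration of chaos, not a model of the brain. Two inputs differing by one part in a billion produce completely unrelated output sequences:

```python
# Logistic map x -> r*x*(1-x) with r = 4 (chaotic regime): tiny variations
# in the initial condition produce very different trajectories.
def logistic_trajectory(x0, steps=60, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.123456789)
b = logistic_trajectory(0.123456790)      # initial condition shifted by 1e-9

for i, (x, y) in enumerate(zip(a, b)):
    if abs(x - y) > 0.5:                  # trajectories now visibly unrelated
        print(f"diverged beyond 0.5 by step {i}")
        break
```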

The explicit details of its motile trajectory through whatever medium it happens to be living in would be one example.
The explicit details of its path through the medium are not details of its behaviour - its path is the result of its behaviours interacting with the environment. Those behaviours are relatively simple, despite producing complex outcomes such as some particular path. An analogy would be the simple rules followed by birds in a flock or fish in a shoal, which produce complex and apparently sophisticated coordinated group movements.
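Here's a toy sketch in that spirit -- a 'run and tumble' walker loosely inspired by bacterial chemotaxis (the rules and numbers are illustrative, not biological data). Two simple rules produce a complex, unrepeatable path that depends entirely on the environment and chance:

```python
# Two rules: keep going if the attractant is increasing, otherwise turn in a
# random direction. The path that results is never specified anywhere in the
# 'program' -- it emerges from the rules, the environment, and chance.
import math, random

def attractant(x, y):
    return -math.hypot(x - 50.0, y - 50.0)    # concentration peaks at (50, 50)

x = y = 0.0
heading = random.uniform(0.0, 2.0 * math.pi)
last = attractant(x, y)

for _ in range(500):
    x += math.cos(heading)                    # 'run' one step along the heading
    y += math.sin(heading)
    now = attractant(x, y)
    if now <= last:                           # not improving: 'tumble' randomly
        heading = random.uniform(0.0, 2.0 * math.pi)
    last = now

print(f"ended near ({x:.1f}, {y:.1f})")       # reliably climbs the gradient, by a path never twice the same
```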

The genes of a bacterium -- as of any other organism -- simply provide the templates coding for particular proteins; when and how those genes are expressed is determined by factors beyond the genome itself.
Well sure, the environment plays a big role in which genes are expressed and to what degree. Do you think we couldn't emulate that computationally?
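For instance, here's a minimal sketch of environment-dependent expression, using the standard Hill-function form for induction (the parameters and signal values are illustrative, not measurements):

```python
# Same 'genome' (same function), different environments (signal levels),
# different expression: a toy Hill-function model of induction.
def expression_level(signal, k=1.0, n=2, max_rate=100.0):
    """Transcription rate as a Hill function of an environmental signal."""
    return max_rate * signal**n / (k**n + signal**n)

for signal in (0.0, 0.5, 1.0, 2.0, 10.0):
    print(f"signal={signal:>4}: rate={expression_level(signal):6.1f}")
```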
 
But in all seriousness, I'm not counting intellectual capacity in evaluating how "bright" or "dim" a conscious entity is. There are small children who have a 'brighter' consciousness than many PhDs.
I'm getting the picture now: your AMM-consciousness is *your* subjective rating of the person, a sort of cut-of-their-jib-ness.

Not really. Our machines play a completely passive role throughout the entire process of replication. As of now, they do not replicate; they are replicated.
From your point of view, yes.

How high would you rate the ability to see things from another's viewpoint, especially of those with quite different world views?
 
I would question whether that necessarily requires consciousness. Though perhaps it can play a role in determining the contextual appropriateness...


That's where it definitely needs help. Only certain types of behavioral impulses -- such as perceptions, feelings, and decisions -- are part of consciousness.

Other kinds, such as motivations, can be completely subconscious.



There must be some sort of criterion for conscious evaluation. How about novelty, threat, food, or sex?
 
Consider an algorithm that uses chaotic functions to generate its creative results from real-world input data. Tiny variations in the initial conditions (data) would produce very different results.

Some promising hypotheses about the means by which the brain generates its output (esp. creativity) involve chaotic behaviour, particularly as it has already been established that chaotic behaviour is a feature of brain function on various levels.

Hmm... That's interesting. If that's the kind of creativity you mean, then I can definitely see it working to generate novel stuff. I'm guessing that biological consciousness pulls off something similar by producing outputs based upon subjective factors like feelings and such.

Do you think that those algorithmically based creative programs will be able to consistently produce results that are meaningful or evocative for humans? I think it would be better to just produce a system that can support consciousness and then train it to be able to communicate its experiences to us. The computational aspects are something we've already got a pretty good grasp on -- it's that subjective stuff where we are still kinda lost.


The explicit details of its path through the medium are not details of its behaviour - its path is the result of its behaviours interacting with the environment. Those behaviours are relatively simple, despite resulting in complex results such as some particular path. An analogy would be the simple rules followed by birds in a flock or fish in a shoal, which produce complex and apparently sophisticated coordinated group movements.

I'd say that the environment of an organism includes its 'interior' subjective interpretation as much as the 'external' objective factors -- if not more so in some instances. Of course, assuming that a bacterium even has such an 'interior' life, it's likely to be relatively simple compared to that of multicellular organisms possessing nervous systems. Even so, I think such factors must be taken into account in our scientific understanding of how life operates. We need a model of how psychology relates to the physics of biology. I really think that finding the relationship between qualia and quanta is the best way to go about doing this.

Well sure, the environment plays a big role in which genes are expressed and to what degree. Do you think we couldn't emulate that computationally?

Well I'm under the impression that our goal is to understand actual consciousness well enough to reproduce it synthetically rather than simply simulate byproducts of it. We need a solid scientific theory of consciousness that meets the criteria I posted some time ago. If we don't have such a theory that can adequately answer those questions, then we do not have a scientific theory of consciousness sufficient for us to speak of simulating it.
 
I'm getting the picture now: your AMM-consciousness is *your* subjective rating of the person, a sort of cut-of-their-jib-ness.

I reckon so. I'm actually defining a 'person' *AS* their consciousness; everything else is just appearance ;)


From your point of view, yes.

How high would you rate the ability to see things from another's viewpoint, especially of those with quite different world views?

When our machines start having their own point of view, that's when I'll consider them conscious :D
 
dlorde said:
... You feel that the 'illusion of consciousness' is the illusion that we can control ourselves via our conscious thoughts. You don't feel that is actually what happens. Have I understood you correctly?
Pretty much, yes - although 'conscious thoughts' is not how I'd put it. Our conscious awareness is not as volitional as it seems; it is more a reflective extension or enhancement of the continuous sense of identity provided by long-term memory.

I think you are saying that consciousness is only the visible portion of the iceberg? If so, I think we are in agreement there.
Why do you think that consciousness is not the ‘executive control system’?
As Pixy suggests, there is evidence that points that way - Libet's experiments (I've seen other supporting evidence; don't have links at present).
According to the link: "Libet's results thus cannot be interpreted to provide empirical evidence in favour of agency reductionism, since non-reductionist theories, even including dualist interactionism, would predict the very same experimental results."
Well, we can always compare the behavior of conscious individuals with unconscious individuals. I think that gives us some indication of the influence of consciousness.
I think Pixy covered this.
PixyMisa said:
That certainly gives you an indication of the trouble you get into if you don't define your terms consistently. An unconscious person is not just someone with their self-awareness turned off - a whole lot of sensory and motor processing, which we would consider unconscious, is also turned off.

What processes are you referring to? I am not speaking about simple self-awareness. I'm talking about the difference between a person who is awake and one who is asleep or in a coma. I do assume that the processes that are always turned off when someone is unconscious are part of consciousness. Perhaps you could delineate what you mean by conscious and unconscious in regards to the processes that shut down when an animal is unconscious and why you feel that some of those processes should not be considered part of consciousness.
 
That's where it definitely needs help. Only certain types of behavioral impulses -- such as perceptions, feelings, and decisions -- are part of consciousness.

Other kinds, such as motivations, can be completely subconscious.

But that's just the thing. An individual can explore their own subconscious -- there's no sharp line separating the conscious mind from the subconscious contents.

Consciousness just refers to the areas of one's psyche that are 'lit up' by one's awareness. Through deep introspection one can begin to explore the 'darker' areas by turning one's awareness more inward :)
 
When our machines start having their own point of view, that's when I'll consider them conscious :D
How do you know they don't already have one? Stretch your imagination. We would have a tough time designing and debugging systems if we couldn't put ourselves in their "shoes".
 
How do you know they don't already have one? Stretch your imagination. We would have a tough time designing and debugging systems if we couldn't put ourselves in their "shoes".

That's one thing I always wondered about. Assuming they do have some kind of subjective perspective (hehe), does what they experience in any way correspond to what we input into them? Like in your example where you input '6' into your computer and got a '4', would the computer perceive them as numbers, or do they just feel like vague blips, or something entirely different?

I think that if they somehow already have some subjectivity, what they experience probably wouldn't resemble what we interpret their inputs and outputs to be :-X
 
That's one thing I always wondered about. Assuming they do have some kind of subjective perspective (hehe), does what they experience in any way correspond to what we input into them? Like in your example where you input '6' into your computer and got a '4', would the computer perceive them as numbers, or do they just feel like vague blips, or something entirely different?

I think that if they somehow already have some subjectivity, what they experience probably wouldn't resemble what we interpret their inputs and outputs to be :-X

A good question (and thanks for asking it -- it gets me thinking about this in more detail!)

First consider what perceiving a '4' means to you: a whole constellation of learned recall and motor behaviors, one of which may be selected by an association with the current context. For example, if you have just asked someone the age of their kid, '4' would likely cause you to recall a few characteristics of four-year-olds. That would give it a quite different "feel" to you than if you were copying down a phone number and the next digit was a '4'.

Likewise for a computer, context must determine its "feel". To get a better idea of what one such "feel" might be like, I would look for particular meanings that we can share (to some extent) with a computer.

One that comes to mind is simple addition of two numbers. A child will start out not understanding what the teacher means when she asks him to add two numbers. The teacher would determine this by asking the child to add the numbers and not getting the expected response.

The teacher can then teach the child the procedure to follow and again ask the child to add the numbers and check the result. At some point the teacher will be satisfied that the child shares her understanding of the meaning of the command "add".

Now the teacher can do the same with a computer: teach it the steps required so that it responds with the same correct answers the child did. Though the language used would be different, it would have corresponding features. At this point I would say the computer also understands the meaning of the "add" command to the same degree the child does.

Now we can ask how this "add" command feels to the child and to the computer. Though they will have many other differences, it would induce the same core behavior in both of them. So we must say there's a certain amount of resemblance.
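To make the teaching loop concrete, here's a toy sketch -- the 'counting-on' procedure and the lesson set are just illustrative choices, not a claim about how addition must be taught:

```python
# The 'procedure' the teacher conveys is counting-on (increment one operand
# b times); the teacher's test is the same one applied to the child: ask,
# then check the answers.
def taught_add(a, b):
    """Add by the counted-steps procedure a teacher might demonstrate."""
    result = a
    for _ in range(b):
        result += 1          # 'count on' one step at a time
    return result

# the teacher's check: pose problems and compare against expected answers
lessons = [(2, 3, 5), (7, 4, 11), (0, 9, 9)]
if all(taught_add(a, b) == expected for a, b, expected in lessons):
    print("the pupil responds with the expected answers: 'add' is understood")
```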
 
But that's just the thing. An individual can explore their own subconscious -- there's no sharp line separating the conscious mind from the subconscious contents.

Consciousness just refers to the areas of one's psyche that are 'lit up' by one's awareness. Through deep introspection one can begin to explore the 'darker' areas by turning one's awareness more inward :)


An interesting issue, but I think that refers more to becoming conscious of something or other, not what consciousness is. There has to be a reason why we look at subconscious motivations, and that reason would be that they have risen to a threshold so that they might push behavior in some way. Directing attention to the subconscious requires a behavioral push so that attention can be pointed in that direction in the first place.

In other words I think you are referring more to the contents of awareness than what awareness might be.


While there is overlap amongst all of these usages, we use the word consciousness to refer to several different processes including awakeness, alertness, awareness, and experiencing. I was working on experiencing, which obviously overlaps with but is not the same as awareness.
 
Do you think that those algorithmically based creative programs will be able to consistently produce results that are meaningful or evocative for humans?
I don't see why not - if the creativity 'engine' can integrate disparate areas to generate a selection of novel 'ideas' or hypotheses that have a logical structure, they can be assessed and filtered at a more abstract level for real-world meaning (as in filtering out nonsense). We can't expect to match the brain for creativity, but I don't see why we couldn't produce reasonable results. Human creativity itself has a tendency to produce a lot of rubbish with few gems, so the success to failure ratio of the results is not critical.
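As a toy sketch of that generate-then-filter structure (the fragments and the 'plausibility' whitelist here are crude stand-ins for whatever the real engine and abstract-level assessment would be):

```python
# Recombine fragments freely (most results are rubbish), then filter at a
# more abstract level. The whitelist stands in for the real-world-meaning
# filter; the low gem-to-candidate ratio is the expected outcome.
import random

subjects = ["a clock", "a river", "an argument", "a city"]
verbs = ["melts", "remembers", "negotiates", "folds"]

plausible = {("a river", "remembers"), ("a city", "negotiates"),
             ("an argument", "folds")}               # the abstract-level filter

candidates = [(random.choice(subjects), random.choice(verbs)) for _ in range(50)]
gems = [f"{s} {v}" for s, v in candidates if (s, v) in plausible]
print(f"{len(gems)} gems out of {len(candidates)} candidates: {set(gems)}")
```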

I think it would be better to just produce a system that can support consciousness and then train it to be able to communicate its experiences to us. The computational aspects are something we've already got a pretty good grasp on -- it's that subjective stuff where we are still kinda lost.
If by consciousness you mean conscious self-awareness, I honestly don't think it's structurally/physically that special. I don't really know quite what you mean by difficulty with 'subjective stuff'.

As far as producing a system to support consciousness, it seems to me that as you assemble the various subsystems to create a functioning brain-like system, and get them communicating with each other in appropriate ways, it will become easier to see where the self-referential part(s) that may provide self-awareness can be integrated. It's a very recent development in evolutionary terms, achieved more through enhancement and refinement than through novel structure, so there appears to be nothing extraordinary about it.

I really think that finding the relationship between qualia and quanta is the best way to go about doing this.
Why do you think there might be any useful relationship between qualia and quanta? A quantum is a minimum physical quantity and qualia a debatable metaphysical concept... :confused:
 
According to the link: Libet's results thus cannot be interpreted to provide empirical evidence in favour of agency reductionism, since non-reductionist theories, even including dualist interactionism, would predict the very same experimental results.
Not sure about that criticism - I discount dualism, and see Libet's results as indications that subconscious processes rather than conscious ones drive volition (or at the very least do not contradict that). There are other examples indicating temporal compensation to bind events to awareness, such as this. The ubiquity of confabulation being discovered in various fields (e.g. marketing research ...) suggests to me that the narrative generator with its confabulatory facility may well be fundamental to conscious awareness. I have read research where students were given subconscious hints to solve puzzles in various situations, and when asked to explain how they arrived at the answers, almost all confabulated more-or-less plausible explanations that were demonstrably untrue or actually impossible in the circumstances. Unfortunately, I can't find details of it. I'll keep looking.

What processes are you referring to? I am not speaking about simple self-awareness. I'm talking about the difference between a person who is awake and one who is asleep or in a coma. I do assume that the processes that are always turned off when someone is unconscious are part of consciousness.
OK, I think we were at cross-purposes to some degree. If you assume that the processes that are always turned off when someone is unconscious are part of consciousness, then I think you may be including too much in your definition of consciousness. My point was that consciousness requires the processes that are always turned off when someone is unconscious, but that those processes are not necessarily part of consciousness. Also, there is a problem with the definition and identification of unconsciousness. It's generally characterised by a limp and unresponsive individual, but one can be conscious while limp and unresponsive (e.g. paralysis). I often feel I'm conscious while asleep (in ordinary dreams or lucid dreams). It may be a limited form of consciousness, but where does it fit in? Is a sleepwalker conscious?

The problem seems to be that we don't have sufficiently precise terminology - consciousness, unconsciousness, awareness, and self all seem to have multiple common context-dependent meanings and usages, quite apart from our own personal interpretations and usage.
 
...
Now the teacher can do the same with a computer: teach it the steps required so that it responds with the same correct answers the child did. Though the language used would be different, it would have corresponding features. At this point I would say the computer also understands the meaning of the "add" command to the same degree the child does.
This would appear to require a learning algorithm, to allow the teacher to teach and the computer to learn (no problem, there are such algorithms, but just clarifying).

Once the steps to add were learnt, would the computer know that it understood the add procedure? Would it know it had learnt it? I suggest it would require some form of reflective introspection for this. I did see a video of a computer that could manipulate geometric blocks and explain what it had done. It could also verbalise the structure of the blocks in its environment (e.g. "the red pyramid is on top of the blue block, next to the green pyramid", etc).
 
Now we can ask how this "add" command feels to the child and to the computer. Though they will have many other differences, it would induce the same core behavior in both of them. So we must say there's a certain amount of resemblance.

I'm intensely curious to understand what it would be like to be a computer, or some other non-human entity. It's just kinda frustrating because right now our understanding of consciousness is such that we cannot even definitively say whether such-n-such a thing is conscious -- and if so, what physically makes it conscious, and in what way?

Then, when thinking about it, it opens up other questions for me like...

...Okay, so an individual is conscious. Is it meaningful to speak of it being localized in some way? What does a thought "look like"? Is a feeling something externally identifiable regardless of the medium it works through? So many questions! >_<
 
An interesting issue, but I think that refers more to becoming conscious of something or other, not what consciousness is. There has to be a reason why we look at subconscious motivations, and that reason would be that they have risen to a threshold so that they might push behavior in some way. Directing attention to the subconscious requires a behavioral push so that attention can be pointed in that direction in the first place.

In other words I think you are referring more to the contents of awareness than what awareness might be.


While there is overlap amongst all of these usages, we use the word consciousness to refer to several different processes including awakeness, alertness, awareness, and experiencing. I was working on experiencing, which obviously overlaps with but is not the same as awareness.

I kinda cobbled together a conceptual scheme where 'awareness' is defined as the focus/extent of conscious activity in a mental space, and 'experiencing' as what goes on within one's awareness. The nature of that experience depends upon the quality of one's consciousness and the makeup of the mental objects it 'passes through'. [I suppose 'awakeness' and 'alertness' would kinda be subsumed under the label of 'lucidity' in my scheme, though there's definitely some room for refining it :)]

To be sure, there's a lot of content and activity in one's mind that lies outside of one's awareness. However, what I'm curious about is what those mental elements are in physical terms [for instance, what is the physical identity of a meme?] and what is the physical identity of the consciousness that experiences those mental objects [i.e. the qualia part of the whole equation]. What I'm really wondering about is whether or not those entities can be described in terms of the physical model we already have or if we'll have to extend the model to accommodate them.
 
I don't see why not - if the creativity 'engine' can integrate disparate areas to generate a selection of novel 'ideas' or hypotheses that have a logical structure, they can be assessed and filtered at a more abstract level for real-world meaning (as in filtering out nonsense). We can't expect to match the brain for creativity, but I don't see why we couldn't produce reasonable results. Human creativity itself has a tendency to produce a lot of rubbish with few gems, so the success to failure ratio of the results is not critical.

I'll suspend judgment on this one till I get a better sense of the topic.


If by consciousness you mean conscious self-awareness, I honestly don't think it's structurally/physically that special. I don't really know quite what you mean by difficulty with 'subjective stuff'.

When I speak of consciousness or 'subjective stuff' I don't necessarily mean reflexive self-awareness. I mean simply the raw experience of anything -- just what the heck is it? Doesn't that question even give you any pause?

As far as producing a system to support consciousness, it seems to me that as you assemble the various subsystems to create a functioning brain-like system, and get them communicating with each other in appropriate ways, it will become easier to see where the self-referential part(s) that may provide self-awareness can be integrated. It's a very recent development in evolutionary terms, achieved more through enhancement and refinement than through novel structure, so there appears to be nothing extraordinary about it.

That's just what I'm talkin' about. I hear a lot of talk in these discussions about the computational architecture of the brain, but whenever the topic of the subjective aspect of the whole enterprise is broached, there's just a few head-scratches and someone tries to sweep the issue under the rug. Knowing the functional details of the brain is all well n' good, but it tells us nothing of the physics of conscious experience IAOI.


Why do you think there might be any useful relationship between qualia and quanta? A quantum is a minimum physical quantity and qualia a debatable metaphysical concept... :confused:

Because the 'qualia' I'm referring to aren't just some metaphysical abstraction but a label for something I live every moment of my waking (and dreaming) life. They are the raw 'stuff' our experiences are composed of. They are what all our scientific observations are 'made of'. They are what's real beyond all doubt. If we can't fully and meaningfully integrate them with our physical model of what's 'outside' [and I don't mean simply a hand-waving assumption that it's just in the model somewhere], then we have a huge scientific as well as philosophical problem on our hands.
 
This would appear to require a learning algorithm, to allow the teacher to teach and the computer to learn (no problem, there are such algorithms, but just clarifying).

Once the steps to add were learnt, would the computer know that it understood the add procedure? Would it know it had learnt it? I suggest it would require some form of reflective introspection for this. I did see a video of a computer that could manipulate geometric blocks and explain what it had done. It could also verbalise the structure of the blocks in its environment (e.g. "the red pyramid is on top of the blue block, next to the green pyramid", etc).
This reflective introspection ability is built into many higher-level computer languages. Your example appears to be from the classic SHRDLU program -- its claim to fame is understanding natural language. I see that as useful in many contexts but usually quite cumbersome. And certainly not fundamental, as long as desired meanings are being communicated.
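For instance, here's a minimal sketch of that kind of introspection in Python. Whether such a self-report amounts to the computer knowing it has learnt is, of course, the question under discussion:

```python
# After 'learning' a procedure at runtime, the program can inspect itself
# and report that it now has it -- mechanical introspection, built into
# many higher-level languages.
class Pupil:
    def learn(self, name, procedure):
        setattr(self, name, procedure)       # acquire a new skill at runtime

    def knows(self, name):
        return callable(getattr(self, name, None))

pupil = Pupil()
print(pupil.knows("add"))                    # False: not yet taught
pupil.learn("add", lambda a, b: a + b)
print(pupil.knows("add"), pupil.add(3, 4))   # True 7: it can report, and use, what it learnt
```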
 