Explain consciousness to the layman.

Status
Not open for further replies.
I once concocted what we can call "Mr. Scott's three laws of computability." They are summarized as follows:

Any computer (the standard model, like the one I'm typing this on, that computer science can distill to the universal Turing Machine) can do ANY TASK if:

1) Programmers have the will and talent to program the task.
2) There is enough time and memory to execute the program.
3) Appropriate input/output devices are supplied.
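Law 2) is the one doing the real work here, and it can be made concrete with a toy sketch. The machine and simulator below are purely hypothetical illustrations (nothing from this thread): a single-tape Turing machine run under an explicit step budget. Starve it of time and it fails to finish; give it enough and it halts.

```python
# Toy illustration of law #2: a tiny Turing machine run under an explicit
# time budget. With too few steps it fails; with enough, it halts.
# (Hypothetical machine, purely for illustration.)

def run_tm(tape, transitions, start, accept, max_steps):
    """Simulate a single-tape Turing machine for at most max_steps steps.

    transitions: (state, symbol) -> (new_state, new_symbol, move)
    Returns (halted, tape) -- halted is False if the step budget ran out.
    """
    state, head = start, 0
    tape = dict(enumerate(tape))           # sparse tape: "enough memory"
    for _ in range(max_steps):             # "enough time"
        if state == accept:
            return True, tape
        sym = tape.get(head, "_")          # "_" is the blank symbol
        if (state, sym) not in transitions:
            return True, tape              # no applicable rule: halt
        state, tape[head], move = transitions[(state, sym)]
        head += 1 if move == "R" else -1
    return False, tape                     # out of time

# A machine that walks right, flipping 0 <-> 1 until it hits a blank.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}

halted, _ = run_tm(list("0110"), flip, "q0", "halt", max_steps=3)
print(halted)   # False -- not enough time
halted, tape = run_tm(list("0110"), flip, "q0", "halt", max_steps=10)
print(halted)   # True
print("".join(tape[i] for i in range(4)))  # 1001
```

The same simulator runs any machine you can encode in the transition table, which is the universality intuition behind the "laws."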

When asked if a computer could be conscious, the consensus of our software team was that it was an output device problem. This intuition, which I'm practically certain is incorrect, is where dualist assumptions come from. There's something about the internal experience of consciousness that makes us feel it's an output function of the brain, like sugar is an output function of chloroplasts. There isn't the tiniest shred of evidence that this is the case. The dualist conclusion merely speaks to the power of the internal subjective experience. Consciousness is not an output in and of itself. It's a process.

PS: When I asked, "Can we make a computer that would die, then be reborn as a beautiful butterfly?" it immediately invoked the obvious answer, "output device issue."
 
I think that's the essence of what makes us wonder about consciousness. I doubt that lower mammals do this, but I think they are nevertheless unambiguously conscious.
Sorry, I think I misread you - where you said "include the conscious process" I was thinking "include the mental process", which you covered in point 3 of the original group.
 
I once concocted what we can call "Mr. Scott's three laws of computability." They are summarized as follows:

Any computer (the standard model, like the one I'm typing this on, that computer science can distill to the universal Turing Machine) can do ANY TASK if:

1) Programmers have the will and talent to program the task.
2) There is enough time and memory to execute the program.
3) Appropriate input/output devices are supplied.

When asked if a computer could be conscious, the consensus of our software team was that it was an output device problem. This intuition, which I'm practically certain is incorrect, is where dualist assumptions come from. There's something about the internal experience of consciousness that makes us feel it's an output function of the brain, like sugar is an output function of chloroplasts. There isn't the tiniest shred of evidence that this is the case. The dualist conclusion merely speaks to the power of the internal subjective experience. Consciousness is not an output in and of itself. It's a process.

PS: When I asked, "Can we make a computer that would die, then be reborn as a beautiful butterfly?" it immediately invoked the obvious answer, "output device issue."

I would not regard the above intuition as being without the "tiniest shred of evidence". The brain is not a device that cogitates over a set of data, and then returns an answer when it has been processed. It is tightly coupled to the nervous system, and its primary role is in responding to stimuli. Timing and output are not something bolted onto the Turing machine functionality - they are the fundamental role of the brain.

I don't think that there is anything remotely dualist in considering the brain as a monitoring and control device rather than a data processing device. A computer that performs any kind of real time interactive function cannot be distilled into the UTM. The UTM is a closed system, it is deterministic, and its functionality is not affected by timing issues.
 
When I was concocting 1-2-3, I was wondering if #4, anticipating the future, was an essential feature of consciousness. I concluded it was not. It's certainly an essential feature of survival, and of the brain, but that's a different question.

The question your point invokes: If we can't picture the future, are we not conscious?

Sorry, you misunderstand or I didn't explain it properly.

Of course we can anticipate/picture the future, that isn't a question.

I am speaking of "experiencing" the future. In other words, as world time goes from t - 1 to t, simultaneously being aware of a change in input from t - 1 to t.

This is impossible according to computer science, and indeed all science -- by the time information about world state t reaches whatever systems the mind is composed of, world time has already advanced past t -- but many people don't grasp this.

In other words, logic dictates that if our mind is physical we are always conscious only of the past with respect to the world around us. We can predict the future, and think about the future, but we can never be actually aware of the future -- only the past.
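The size of that lag can be roughed out from textbook figures. The numbers below are ballpark illustrations I've chosen for the sketch, not measurements:

```python
# Rough estimate of how far "in the past" conscious perception runs.
# Numbers are ballpark textbook figures, used purely for illustration.

nerve_velocity = 100.0      # m/s, fast myelinated fibres (rough figure)
path_length = 1.0           # m, e.g. fingertip to brain (rough figure)
conduction_delay = path_length / nerve_velocity          # ~0.01 s

cortical_processing = 0.1   # s, common estimate for a conscious percept

total_lag = conduction_delay + cortical_processing
print(f"awareness of a touch lags the event by ~{total_lag*1000:.0f} ms")
```

On these assumptions the mind is always working with a world state roughly a tenth of a second old - exactly the "conscious only of the past" point above.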

I was wondering if you had meditated on this. When you think about it, it is kind of common sense: since the future hasn't happened yet, it is impossible to be aware of it. However, one implication of this fact is that it would be fine to have a copy-destroy cycle, because it is logically equivalent to normal existence, and many people don't agree with the triviality of a copy-destroy cycle.
 
I would not regard the above intuition as being without the "tiniest shred of evidence". The brain is not a device that cogitates over a set of data, and then returns an answer when it has been processed. It is tightly coupled to the nervous system, and its primary role is in responding to stimuli. Timing and output are not something bolted onto the Turing machine functionality - they are the fundamental role of the brain.

I don't think that there is anything remotely dualist in considering the brain as a monitoring and control device rather than a data processing device. A computer that performs any kind of real time interactive function cannot be distilled into the UTM. The UTM is a closed system, it is deterministic, and its functionality is not affected by timing issues.

The gaping hole in your theory is that people can close their eyes and still be conscious.

Try again?
 
Did you ask how/why they came up with that answer?

I had some woo-woo ideas at the time so I was a dualist and agreed that consciousness needed some mysterious output device.
 
Thanks!

Sure, and a module may fire randomly for no meaningful reason, but that's not what I mean by a random module. I mean a module that provides a miscellaneous embellishment that we might mistakenly consider to be essential to consciousness. For example, a module whose* job is to cause boredom appropriately, not randomly.

*Intentionally playful anthropomorphism.

When you hypothesize such things, always keep in mind EVOLUTION.

So how would such "embellishment modules" come to evolve?

Now we might speculate that modules that serve a primary evolutionary function may also have a side-effect of "embellishing", but I doubt there are modules that are specifically for "embellishment".
 
I would not regard the above intuition as being without the "tiniest shred of evidence". The brain is not a device that cogitates over a set of data, and then returns an answer when it has been processed. It is tightly coupled to the nervous system, and its primary role is in responding to stimuli. Timing and output are not something bolted onto the Turing machine functionality - they are the fundamental role of the brain.
Which is entirely irrelevant to whether brain function is computable.

I don't think that there is anything remotely dualist in considering the brain as a monitoring and control device rather than a data processing device. A computer that performs any kind of real time interactive function cannot be distilled into the UTM. The UTM is a closed system, it is deterministic, and its functionality is not affected by timing issues.
You are, as always, confusing the model with its implementation. I don't know whether you are doing this deliberately or if you really don't understand that you are posting abject nonsense, but the end result is the same.
 
I had some woo-woo ideas at the time so I was a dualist and agreed that consciousness needed some mysterious output device.
Right. I've had that argument here many a time. When you try to get someone to put their finger on what it is that's required, you either get an answer that's just another level of woo ("we know there's something special going on because psychic powers duh") or a lot of squirming.
 
When I was concocting 1-2-3, I was wondering if #4, anticipating the future, was an essential feature of consciousness. I concluded it was not, but it's certainly an essential feature of survival, and an essential feature of the brain, but that's a different question.

But is anticipating the future an essential feature of survival? It doesn't matter if you have an actual expectation - just have the right sorts of reactions to stimuli. An organism that didn't have the right sorts of reactions wouldn't live to reproduce.
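In that spirit, a purely reactive agent can be nothing more than a fixed stimulus-to-response table, with no representation of the future at all. A toy sketch (the stimuli and responses are invented for the example):

```python
# A purely reactive agent: a fixed stimulus -> response table.
# It never represents or anticipates the future, yet it can still
# produce the "right sorts of reactions" that survival requires.
# (Stimuli and responses are invented for the example.)

REACTIONS = {
    "looming_shadow": "freeze",
    "sudden_heat":    "withdraw",
    "food_scent":     "approach",
}

def react(stimulus):
    # No prediction, no expectation: just a table lookup with a default.
    return REACTIONS.get(stimulus, "ignore")

for s in ["looming_shadow", "food_scent", "birdsong"]:
    print(s, "->", react(s))
```

Whether such a table counts as "anticipating" anything, or merely as having been shaped by selection to behave as if it did, is exactly the question at issue.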
 
But is anticipating the future an essential feature of survival? It doesn't matter if you have an actual expectation - just have the right sorts of reactions to stimuli. An organism that didn't have the right sorts of reactions wouldn't live to reproduce.

There's a very good argument to be made that it's an essential feature of thought.
 
There's a very good argument to be made that it's an essential feature of thought.

I've always thought that the reason for the terrible twos is that children have learned to anticipate the future, and when it doesn't match up for some reason, they experience a feeling of dislocation and confusion.
 
Right. I've had that argument here many a time. When you try to get someone to put their finger on what it is that's required, you either get an answer that's just another level of woo ("we know there's something special going on because psychic powers duh") or a lot of squirming.

Looks like this is a popular argument.

We know consciousness is special because we can't understand it and we can't understand it because it's special.
 
I've always thought that the reason for the terrible twos is that children have learned to anticipate the future, and when it doesn't match up for some reason, they experience a feeling of dislocation and confusion.

In my experience the terrible twos is an expression of the frustration at not yet having the ability to talk, while being able to understand what people are saying all around you.

I've observed this at close quarters at least ten times now.
 
Right. I've had that argument here many a time. When you try to get someone to put their finger on what it is that's required, you either get an answer that's just another level of woo ("we know there's something special going on because psychic powers duh") or a lot of squirming.

You do a little squirming yourself on occasion. ;)

What's "special" about consciousness as experienced by humanity is that it is a product of biological life.

Do your algorithms factor in biology in the hardware?
 
You do a little squirming yourself on occasion;)

What's "special" about consciousness as experienced by humanity is that it is a product of biological life.

Do your algorithms factor in biology in the hardware?

That's some "special" special pleading.


You might want to look up what an algorithm is.
 
You do a little squirming yourself on occasion;)

What's "special" about consciousness as experienced by humanity is that it is a product of biological life.

Do your algorithms factor in biology in the hardware?

What's "special" about running as experienced by humanity is that it is a product of biological life.
 