Explain consciousness to the layman.

Remember when I said you're having trouble following the thread? Maybe this is one of those times, no? ;)

Yes, maybe it is. However, I note that your post is just another one of many that avoids actually demonstrating that consciousness is computational, and decides that the problem is mine for not paying attention.

Since the computational nature of consciousness is a central premise, one might hope that a permanent link to a FAQ would be in order.
 
Yes, maybe it is. However, I note that your post is just another one of many that avoids actually demonstrating that consciousness is computational, and decides that the problem is mine for not paying attention.

Oh, sorry. I'll let other posters with better knowledge of consciousness do that for you, while I point out that since the possibility I just mentioned happens regularly, I can provisionally accept it as the cause of your claim that it hasn't been demonstrated.
 
Maybe Peter Wegner and Dina Goldin's article Computation Beyond Turing Machines ( http://oldblog.computationalcomplexity.org/media/Wegner-Goldin.pdf ) can be used for a better understanding of consciousness in terms of computation:

"Though mathematics was adopted as a goal for modeling computers in the 1960s by analogy with models of physics, Gödel had shown in 1931 that logic cannot model mathematics [Go31] and Turing showed that neither logic nor algorithms can completely model computing and human thought.

In addition to interaction, other ways to extend computation beyond Turing machines have been considered, such as computing with real numbers.

However, the assumption that all of computation can be algorithmically specified is still widely accepted. Interaction machines have been criticized as an unnecessary Kuhnian paradigm shift. But Gödel, Church, Turing, and more recently Milner, Wegner and Van Leeuwen have argued that this is not the case."


Also please look at Dina Goldin and Peter Wegner's work The Interactive Nature of Computing: Refuting the Strong Church-Turing Thesis. Here is a quote taken from this work:

"According to the interactive view of computation, interaction (com-
munication with the outside world) happens during the computation,
not before or after it. Hence, computation is an ongoing process rather
than a function-based transformation of an input to an output. The
interactive approach represents a paradigm shift that redefines the
nature of computer science, by changing our understanding of what
computation is and how it is modeled. This view of computation is not
modeled by TMs, which capture only the computation of functions;
alternative models are needed."

But this is just more nonsense of the type westprog is selling.

Look -- the tape of a Turing machine is infinite.

If you want to have an "ongoing process", it just means that the input is huge, and the output is huge, and you are looking at the steps of the algorithm as the algorithm is carried out rather than after it is complete.

For example, it is easy to model a robot using an ideal Turing machine -- you just include the entire environment as part of the input. If the robot never reaches parts of the environment, then the machine (or rather, the algorithm) will never touch those parts of the tape. Not all input needs to be evaluated to give a certain output.

Suggesting otherwise is just an utter failure to understand the nature of reality.

Furthermore, the suggestion that people actually think in terms of idealized TM tapes is just absurd. Everyone who programs thinks in terms of an "ongoing process", because nearly all of computation is so heavily based on continuous input these days. That doesn't invalidate the established fact that ALL computation CAN be reduced to a series of instructions on the tape of an idealized Turing machine.
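To make the lazy-input point concrete, here is a minimal sketch in Python (toy names and numbers, not anyone's actual model). The controller gets the same answer whether its tape is built in full beforehand or generated one cell at a time while it runs, because it only ever touches the cells it reaches.

[code]
# Toy illustration: an "ongoing process" as lazy reading of a Turing-style tape.
# The controller only touches the cells it actually reaches, so it makes no
# difference whether the input exists up front or is produced on demand.

def environment():
    """Unbounded 'tape' of readings, generated only when the algorithm asks."""
    t = 0
    while True:
        yield t % 3                      # stand-in for whatever the environment supplies
        t += 1

def controller(tape, steps):
    """A fixed update rule applied to however much of the tape is reached."""
    state = 0
    for _, cell in zip(range(steps), tape):
        state = (state + cell) % 7       # the function-based step
    return state

# Same result whether the input is streamed or fully built in advance:
print(controller(environment(), steps=10))
print(controller(iter([t % 3 for t in range(10)]), steps=10))
[/code]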
 
However, it seems clear that computation and robotic control are different in what they do.

To you, perhaps.

But I would remind you that a Neanderthal looking at a fox and a tree would reach the conclusion that they are quite different in what they do.

Fast forward a few tens of thousands of years, and it is quite clear to modern humans (well, some of us) that since both are made of living cells that exhibit the same fundamental behavior, a fox and a tree are actually not very different in what they do.

So if we are talking about what "seems clear", I would say it "seems clear" that the level of understanding one has regarding "computation" could drastically alter one's perception of how similar two things are.

In this case, your perception is grossly incorrect.
 
Oh, sorry. I'll let other posters with better knowledge of consciousness do that for you, while I point out that since the possibility I just mentioned happens regularly, I can provisionally accept it as the cause of your claim that it hasn't been demonstrated.

Of course you can. I hope you feel that your faith in the "posters with better knowledge" has been reaffirmed. Maybe they can list their qualifications, and assure you that I know nothing at all about the subject (whatever the subject might be). They can explain that my refusal to accept the obvious is due to my agenda, which no doubt includes restricting contraception and closing shops on Sundays.

Meanwhile, I'll wait for a non-circular explanation of why consciousness is computational.
 
A computation is something well defined by the Turing model. According to that model, a computation has a number of properties, viz:

  • The computation consists of the code and the data, which are a fixed quantity.
  • The outcome of the computation is determined by the code and data, and nothing else.
  • The result of any computation is fixed and independent of the speed at which the computation is performed.
  • Even if independent portions of the computation are performed in different orders, or at the same time, this will not affect the outcome of the computation.

This is given in my own words, and is not intended to be rigorous, but I think it's a fair summary of what computation is.

If we deal with robotic control we are faced with the following properties:

  • Robotic control is extremely time dependent.
  • Even though operations might be independent, they might need to be performed simultaneously in order to produce a correct outcome.
  • The data for the robotic control program is neither fixed nor known, and involves interaction with the environment.

Again, this is written in my words, and doesn't claim to be entirely precise or complete. However, it seems clear that computation and robotic control are different in what they do.
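A rough sketch of that contrast, in illustrative Python only (the functions, sensor and numbers are all invented for the example):

[code]
# Rough sketch of the two lists above (invented functions and numbers).
import random
import time

def pure_computation(code_and_data):
    """Turing-style: fixed input, result determined by nothing else, and the
    answer is the same however fast, or in whatever order, the steps run."""
    return sum(code_and_data) % 255

def robot_control_loop(read_sensor, actuate, steps=5):
    """Control-style: behaviour depends on *when* each reading is taken and
    on an environment that is neither fixed nor known in advance."""
    for _ in range(steps):
        reading = read_sensor()          # value depends on the moment of the call
        actuate("brake" if reading > 0.5 else "cruise")
        time.sleep(0.01)                 # timing is part of correct behaviour

print(pure_computation([3, 1, 4, 1, 5]))                  # always 14, whenever it runs
robot_control_loop(read_sensor=random.random, actuate=print)
[/code]

The first call returns the same value no matter when or how fast it runs; the second depends on the moment each reading is taken and on an environment it does not fix in advance.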

I see. So what is one to make of Honda's 'Asimo', or Boston Dynamics' 'Big Dog', or any of the hundreds of other robots controlled by computational processing?
 
But this is just more nonsense of the type westprog is selling.

Look -- the tape of a Turing machine is infinite.

If you want to have an "ongoing process", it just means that the input is huge, and the output is huge, and you are looking at the steps of the algorithm as the algorithm is carried out rather than after it is complete.

For example, it is easy to model a robot using an ideal Turing machine -- you just include the entire environment as part of the input. If the robot never reaches parts of the environment, then the machine (or rather, the algorithm) will never touch those parts of the tape. Not all input needs to be evaluated to give a certain output.

Suggesting otherwise is just an utter failure to understand the nature of reality.

Furthermore, the suggestion that people actually think in terms of idealized TM tapes is just absurd. Everyone who programs thinks in terms of an "ongoing process", because nearly all of computation is so heavily based on continuous input these days. That doesn't invalidate the established fact that ALL computation CAN be reduced to a series of instructions on the tape of an idealized Turing machine.

Hi rocketdodger,

I think that you have missed the idea of the paradigm shift in this case, which is:

Input arrives during the "ongoing process", which is different from the attempt to include the entire environment as input all at once, before acting.

With this approach, the system is able to change its decisions and actions according to real-time changes in the environment.

In other words, the system is open to real-time changes and can be effective without waiting for an infinite amount of input before it starts to act.

This time please try to get the difference between the interactive view of computation (which is the new paradigm) and a function-based transformation of an input to an output (which is the old paradigm):

"According to the interactive view of computation, interaction (com-
munication with the outside world) happens during the computation,
not before or after it. Hence, computation is an ongoing process rather
than a function-based transformation of an input to an output."
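As a small illustration only (a Python toy with invented names, not a formal model): the batch version maps one finished input to one finished output, while the interactive version accepts each observation during the run and can act on it before the next one even exists.

[code]
# Toy contrast between the two paradigms; a Python generator stands in for
# the "ongoing process".

def batch_style(whole_input):
    """Old view: collect everything first, then map input -> output once."""
    return ["act" if x > 0 else "wait" for x in whole_input]

def interactive_style():
    """Interactive view: a process that accepts input *during* the run and
    decides about each observation before the next one exists."""
    decision = None
    while True:
        observation = yield decision     # new input arrives mid-computation
        decision = "act" if observation > 0 else "wait"

print(batch_style([1, -2, 3]))           # needs the whole input up front

proc = interactive_style()
next(proc)                               # start the ongoing process
for obs in (1, -2, 3):                   # e.g. readings arriving in real time
    print(proc.send(obs))                # a decision is available immediately
[/code]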
 
Input arrives during the "ongoing process", which is different from the attempt to include the entire environment as input all at once, before acting.


Would this be related to brain plasticity? i.e. the brain regions form / adapt in an interdependent development with inputs. It's not just a case of rigid architecture (brain/ computer) interpreting rigid data (coded inputs). In a sense, the brain actually codifies its own inputs - or the inputs structure the brain. One example would be how blind people adapt areas of the brain to increase the haptic sense rather than visual. Is this computational? It certainly would suggest a different relationship between processor / data. Perhaps even the dualism has to go. A processor can also be data and data can also be a processor.
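One loose way to picture that last point in code (a Python toy with an invented update rule, nothing like a real synapse) is a set of weights that both do the processing and are rewritten by what they process:

[code]
# Very loose toy: the same stored numbers act as "processor" when they shape
# the output and as "data" when the incoming signals rewrite them.

weights = [0.1, 0.1, 0.1]                # the processing structure, and also just data

def process(inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def adapt(inputs, rate=0.05):
    """Each input nudges the weights, so all future processing is different."""
    response = process(inputs)
    for i, x in enumerate(inputs):
        weights[i] += rate * x * response

for sample in ([1, 0, 0], [1, 0, 0], [0, 0, 1]):   # repeated use strengthens a channel
    adapt(sample)

print(weights)                           # reshaped by its own inputs
[/code]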
 
Would this be related to brain plasticity? i.e. the brain regions form / adapt in an interdependent development with inputs. It's not just a case of rigid architecture (brain/ computer) interpreting rigid data (coded inputs). In a sense, the brain actually codifies its own inputs - or the inputs structure the brain. One example would be how blind people adapt areas of the brain to increase the haptic sense rather than visual. Is this computational? It certainly would suggest a different relationship between processor / data. Perhaps even the dualism has to go. A processor can also be data and data can also be a processor.
The interesting thing is the ability to act under uncertain conditions, such that uncertainty is a challenge for further development and not only a problem that has to be completely reduced before conclusion and action (as is done by a function-based transformation of an input to an output).

An interactive view of computation does not need a complete reduction of uncertainty before reaching conclusions that are used for actions.

In my opinion, Intuition, Creativity and Interactive reasoning are actual properties that at least help us to survive non-trivial conditions, in addition to function-based transformation of an input to an output.
 
Hi rocketdodger,

I think that you have missed the idea of the paradigm shift in this case, which is:

Input arrives during the "ongoing process", which is different from the attempt to include the entire environment as input all at once, before acting.

This time please try to get the difference between the interactive view of computation (which is the new paradigm) and a function-based transformation of an input to an output (which is the old paradigm):

No, I didn't miss it. I am saying that was never the paradigm in the first place.

The only time "computation" is considered in its entirety as a "function based transformation of an input to an output" is in university courses on automata and the associated textbooks.

And even in that case it is fuzzy -- if a Turing machine starts a run, and doesn't make it to a part of the tape by a certain step, then the algorithm doesn't care if you load up the whole input at once or wait until it is needed. This isn't quite "interactive" in the sense that the algorithm doesn't "wait" for input, but it also isn't quite the opposite either, because the input wasn't there when the algorithm started.

In reality, everyone who works with computers treats computation as an interactive process. Yes, the atomic instructions of computation are function-based transformations, but then again so is every particle interaction in the universe, so that is a moot point. For everything larger than those smallest steps, the current paradigm *is* interactivity.

In fact it is so natural for humans to view code as interactive that we have to try hard to keep programs functional when they should be. So I don't disagree with you that the interactive paradigm is important; I disagree with your suggestion that it isn't already in vogue.
 
In my opinion, Intuition, Creativity and Interactive reasoning are actual properties that at least help us to survive non-trivial conditions, in addition to function-based transformation of an input to an output.

Not sure why you feel the need to even discuss this. I think you misunderstand the computational model of consciousness.

The computational model simply accepts that the action of an individual neuron is a function-based transformation of an input to an output, and that by doing that constantly in an interactive sequence you get creativity and interactive reasoning.

If you want to suggest that the action of an individual neuron is *not* a function based transformation from input to output, then ... you are wrong, because we know for a scientific fact that it is.
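As a minimal sketch of what that means (a Python toy with arbitrary weights and threshold, not a model of any real neuron):

[code]
# Toy neuron: a function-based transformation from an input pattern to an
# output "spike".

def neuron(inputs, weights, threshold=1.0):
    """Weighted sum of the inputs; fire (1) if it reaches the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

print(neuron(inputs=[1, 0, 1], weights=[0.6, 0.9, 0.5]))   # -> 1 (0.6 + 0.5 >= 1.0)
print(neuron(inputs=[0, 1, 0], weights=[0.6, 0.9, 0.5]))   # -> 0 (0.9 < 1.0)
[/code]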
 
Not sure why you feel the need to even discuss this. I think you misunderstand the computational model of consciousness.

The computational model simply accepts that the action of an individual neuron is a function-based transformation of an input to an output, and that by doing that constantly in an interactive sequence you get creativity and interactive reasoning.

If you want to suggest that the action of an individual neuron is *not* a function based transformation from input to output, then ... you are wrong, because we know for a scientific fact that it is.
Do you think that the only way to define a computational model of consciousness is based on the action of an individual neuron?
 
Do you think that the only way to define a computational model of consciousness is based on the action of an individual neuron?

It's not a definition, it's a description, an explanation. If you connect a large number of computational units so that they work together, the output(s) is/are the result of computation - because the system is computational, and the brain can be seen as such: an information-processing, connectionist (networked) computational system.

Some contributors may be querying whether, in a computational system that, for example, emits a hum of varying frequency when in operation, that hum should be considered computational; IOW, if consciousness is a non-computational side-effect of computational brain activity, perhaps analogous to resonance, we should probably not expect to reproduce it by emulating only the computations that gave rise to it.

My response would be that despite extensive study of brain function on many scales, there is no physical evidence of such an effect, nor any likely physical mechanism for it in that context. The brain appears to be constructed of a large number of computational elements that are fairly well understood - to the extent of functional emulation of significant networks of them; consciousness itself is fairly resilient to minor physical damage, and its impairments when damaged are typically directly related to the damage in the relevant functional areas of the brain. IOW, it doesn't behave like a serendipitous non-computational side-effect. But it remains an unevidenced possibility.

It seems to me that no-one will be able to demonstrate that consciousness is computational until a computational machine is built that all parties agree exhibits consciousness - and judging by the debate in this thread, that agreement seems unlikely, even if the machine were to pen a heart-rending suicide note and creatively destroy its own power supply out of sheer boredom.

In the meantime, as far as I know, we have no evidence that the brain doesn't function in a computational way, and a great deal of evidence that it does. Consciousness is, beyond reasonable doubt, a result of the normal functioning of the brain, and if that functioning is computational, then it seems reasonable to assume (with the side-effect caveat above) that consciousness is computational - until we have evidence to the contrary. YMMV ;)
 
Do you think that the only way to define a computational model of consciousness is based on the action of an individual neuron?

No, you can define it at various higher levels if you wish. For example, some researchers don't bother with neurons; they deal with other structures and algorithms that behave in ways similar to how entire groups of neurons behave.

However, the thing that brings all the various definitions together is the logical conclusion that since the action of an individual neuron is computational in nature, any combination of computations is also computational in nature, the brain is a combination of neurons, and consciousness arises from/in the brain, then fundamentally, whatever consciousness is, it is almost certainly computational in nature.

The fact that it would be an enormous number of computations, happening not only in sequence but in parallel, repeatedly and constantly, and interacting with the external environment, doesn't change the fact that it is still computational in nature.
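A toy sketch of that composition (illustrative Python, with invented wiring and numbers): the same function-based unit used many times over, on input that keeps arriving, with its own output fed back in.

[code]
# Toy sketch: many copies of the same function-based unit, run step after step
# on fresh external input, with the output fed back in; still just computation.

def neuron(inputs, weights, threshold=1.0):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

layer_weights = [[0.7, 0.4], [0.2, 0.9]]     # two neurons reading the "senses"
output_weights = [0.8, 0.8]                  # one neuron reading those two

state = 0
for sense in ([1, 0], [1, 1], [0, 1]):       # fresh external input each step
    hidden = [neuron(sense + [state], w + [0.3]) for w in layer_weights]
    state = neuron(hidden, output_weights)   # fed back on the next step
    print(hidden, state)
[/code]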
 
rocketdodger and dlorde, do you think that "computational" means that our brain is, after all, predictable?

Furthermore, do you think that our brain is part of a deterministic reality, such that uncertainty, creativity, free will etc. are no more than illusions?

For example, let's assume that the activity or inactivity of a given neuron is insignificant: is there a clear way to predict when that given neuron will change its current state?
 