Explain consciousness to the layman.

:rolleyes:

There are states in your brain that you are not aware of. Anyway, we were talking about making the subjective objective. Please don't try to make me believe that you forgot that.

Yes, I'm aware of that. I was pointing out that the objective change of state of the brain in connection with subjective experience was already known. It is now known in more detail. It does not imply that subjective experience has been solved. The most we can say is that if subjective experience can be solved by studying the brain, then this is one of many steps in that direction.
 
Even if you could identify such an activity, even that activity could be simulated in a computer.

Let's say, just to throw out a random example, that the physical act of swallowing food is necessary for consciousness to emerge. Of course, this particular example is not really likely, but, for argument's sake, I hope we can indulge in it...

...Who says that such an act can't also be modeled and replicated in a virtual manner, inside a computer?

Such a model might well tell us something about the way that swallowing food takes place. However, such a model will not provide nourishment, or in any way replace or duplicate the physical act of swallowing.

The only bodily function where it is claimed that a simulation of the process is equivalent to the process is in the function of the brain. This claim rests on the assumption that the process of the brain is essentially a computational process - in the same way that swallowing involves moving food from the mouth to the stomach.

This has not been demonstrated.

Comparing the brain to a computer, like all analogies, is not going to be perfect. It doesn't have to be. As long as the general principles of the argument are able to be communicated, we can still use the comparison to that limited degree.

There is always a danger in taking any analogy too far. If a businessman took the words "That tornado was like a freight train running through the town!" too seriously, he might be inclined to try to ship his products via tornadoes.

In the case of how the brain operates: it might be through 0s and 1s. But non-binary digits can still be represented as binary digits. Neural networks use decimal numbers for "weights" between neurons, to model the essential elements of how the mind works. Yet, those decimal numbers are ultimately stored in memory as a block of 1s and 0s.
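
To make that concrete, here is a minimal Python sketch (standard library only; the weight value 0.37 is just an illustrative number) showing that a decimal "weight" is, underneath, a fixed block of bits:

Code:
import struct

weight = 0.37                                 # an illustrative neural-network weight
packed = struct.pack('>f', weight)            # 4 bytes of IEEE-754 single precision
bits = ''.join(f'{byte:08b}' for byte in packed)
print(bits)                                   # the 32-bit pattern of 0s and 1s that stores 0.37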

If neural networks turn out to be equivalent in functionality to the brain, then perhaps that analogy will turn out to be the most useful. We don't yet know enough for that.
 
Yet, those decimal numbers are ultimately stored in memory as a block of 1s and 0s.

More to the point, it is pretty much universally agreed that the essential function of neurons is to take a seemingly continuous spectrum of input and convert it to something like a digital signal.

It isn't a coincidence that the things we consider "able to compute" all feature similar behavior, that behavior being the ability to map a given set of inputs to a much smaller set of outputs.
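
As a rough illustration of that mapping (a toy Python sketch, not a model of any real neuron; the threshold and stimulus values are made up), a unit that collapses a continuum of inputs onto a two-element output set looks like this:

Code:
def fires(total_input, threshold=1.0):
    """Map any real-valued input onto the two-element output set {0, 1}."""
    return 1 if total_input >= threshold else 0

for stimulus in (0.2, 0.73, 0.999, 1.0, 3.6):
    print(stimulus, '->', fires(stimulus))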
 
If neural networks turn out to be equivalent in functionality to the brain, then perhaps that analogy will turn out to be the most useful. We don't yet know enough for that.

It is indicative of how much your view is biased by ulterior motives that you question whether computer models of biological neural networks will be the most useful "analogy" for understanding the behavior of the brain.

What on Earth *else* could be more useful than a model based on neural networks? The brain is composed of neurons. You know that, right?

Honestly -- what else? A model based on what else?
 
More to the point, it is pretty much universally agreed that the essential function of neurons is to take a seemingly continuous spectrum of input and convert it to something like a digital signal.

In the case of how the brain operates: it might be through 0s and 1s. But non-binary digits can still be represented as binary digits. Neural networks use decimal numbers for "weights" between neurons, to model the essential elements of how the mind works. Yet, those decimal numbers are ultimately stored in memory as a block of 1s and 0s.

I suppose my main point is that data transmission from A->B is one-dimensional in the computational world. A specific piece of information can be located at a specific place at any given time. The ultimate question, and I am going out on a limb here, is whether a conscious idea can be traced back to a certain activation pattern of neurons at a given moment. What I am suggesting is that a piece of information is at the same time part of A->B but also in a wave function of the same or a different network state, which it affects at the very same time.

Although I am not sure myself if I outlined that in an understandable way... I might have to work on that more :)
 
Such a model might well tell us something about the way that swallowing food takes place. However, such a model will not provide nourishment, or in any way replace or duplicate the physical act of swallowing.
A virtual reality version of swallowing, complete with information about possible nourishment, etc., could provide sufficient input for whatever part of consciousness requires it. (In this example.)

A physical, robotic construct could do the same. Granted, it might not be a "Turing Machine" under your usage of that term. But, it would still be a computing machine.

Either way, there is no reason why conscious awareness can't be constructed and emulated in a computing system.

This claim rests on the assumption that the process of the brain is essentially a computational process - in the same way that swallowing involves moving food from the mouth to the stomach.

This has not been demonstrated.
What is the magical realm of reality you are thinking of that would be exempt from being emulated in a computing system?


If neural networks turn out to be equivalent in functionality to the brain, then perhaps that analogy will turn out to be the most useful. We don't yet know enough for that.
Neural networks were designed to emulate how the brain functions, in a relatively abstract, high-level manner.

Neuroscience already knows many of the differences between how neural networks work and how the brain actually works. (For example: neural networks often use back-propagation through the same virtual neurons that ran the forward process. In biology, neurons generally process things in one direction, and hormones generally take up the other direction, though reality is even a bit more complicated than that.)
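
For anyone unfamiliar with that difference, here is a tiny, hedged Python sketch (two made-up weights, one made-up training example; purely illustrative, not a claim about how any real network or brain is wired) showing back-propagation reusing, on the way back, the very same connection weight the signal used on the way forward:

Code:
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# two chained "neurons": x --w1--> h --w2--> y
w1, w2 = 0.4, 0.8          # made-up starting weights
x, target = 1.0, 0.0       # one made-up training example
lr = 0.5                   # learning rate

for step in range(3):
    # forward pass: the signal flows left to right through w1, then w2
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    # backward pass: the error flows right to left through the SAME w2
    dy = (y - target) * y * (1 - y)
    dw2 = dy * h
    dh = dy * w2           # w2 is reused to carry the error backwards
    dw1 = dh * h * (1 - h) * x
    w1 -= lr * dw1
    w2 -= lr * dw2
    print(f"step {step}: y={y:.3f}  w1={w1:.3f}  w2={w2:.3f}")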

The problem is NOT that "we don't know enough". We know a FREAKIN' LOT about how the brain works these days.

The problem is that we can't figure out why there should be inherent limits that the more basic neural networks would have, compared to the more complex actual brains. The ultimate results can, potentially, be the same, even if the proximate details are not.

Your argument would require knowledge about what those inherent limits would be. Do you have any insights to offer about that?


More to the point, it is pretty much universally agreed that the essential function of neurons is to take a seemingly continuous spectrum of input and convert it to something like a digital signal.

It isn't a coincidence that the things we consider "able to compute" all feature similar behavior, that behavior being the ability to map a given set of inputs to a much smaller set of outputs.

I agree with that, in a very general sense. But, I don't think it's as relevant to the point as you think.

I think the most important property of neurons, for the discussion of consciousness, is that they report on the states of other cells. This makes them uniquely qualified to build and compute models of the self, when bundled the right way.
That they do so by acting as analog-to-digital converters is only circumstantial.
 
I agree with that, in a very general sense. But, I don't think it's as relevant to the point as you think.

I think the most important property of neurons, for the discussion of consciousness, is that they report on the states of other cells. This makes them uniquely qualified to build and compute models of the self, when bundled the right way.
That they do so by acting as analog-to-digital converters is only circumstantial.

It is only circumstantial if your goal is to discuss consciousness in a constructive manner.

However if someone else's goal is to show that digital logic could never give rise to consciousness like neurons can, then I think it is extremely relevant to show them that the reason neurons are able to give rise to consciousness is their digital properties.
 
I want to add on this one, because I have also often had a problem with the brain being compared to a computer. I find it highly misleading.
Perhaps your definition of a computer is narrower than ours.

One example is that we have not yet established that neuronal firing relates to an on-and-off state of activity, like in electronics. A transmission might have much more to it than just 0 or 1.
Nobody is suggesting that the brain is just like a binary digital computer. We know neurons are not simple switches; the firing of neurons is fairly well understood.

It might have been addressed already in this thread, as I am new here and do not have the time to go through the thread, but how do you define computational activity?
With reference to the brain, I see it as the activity of neurons participating in neuronal circuits. The claim is that the functions of these circuits are Turing-equivalent - i.e. for given inputs, the outputs can be generated (computed) by a Turing machine. The available evidence strongly suggests this is a valid model. The suggestion is that, as the brain is composed of a very large number of these circuits connected together, it is therefore, as a whole, Turing equivalent. This doesn't mean it functions like a current electronic digital computer, but it does suggest computational equivalence (IBM are taking some interesting steps to potentially bridge this divide with their neural processor).

Plausible alternative models of brain function would be welcome.
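
To illustrate the sense of "computed by a Turing machine" being used here, a toy Python sketch (the weights and threshold are arbitrary illustrative numbers, and a threshold unit is only a crude stand-in for a neuron): a tiny circuit of threshold units maps given inputs to definite outputs, and since NAND gates alone suffice to build any Boolean function, circuits of such units are computationally universal in the relevant sense.

Code:
def unit(inputs, weights, threshold):
    """Fire (1) if the weighted input reaches the threshold, else stay silent (0)."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def nand(a, b):
    # weights of -1 and a threshold of -1.5 turn a threshold unit into a NAND gate
    return unit((a, b), (-1, -1), -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', nand(a, b))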
 
A virtual reality version of swallowing, complete with information about possible nourishment, etc., could provide sufficient input for whatever part of consciousness requires it. (In this example.)

A physical, robotic construct could do the same. Granted, it might not be a "Turing Machine" under your usage of that term. But, it would still be a computing machine.

Either way, there is no reason why conscious awareness can't be constructed and emulated in a computing system.

What is the magical realm of reality you are thinking of that would be exempt from being emulated in a computing system?

Is there any process in the human body which can be both simulated and emulated in a computer system? I can't think of a single one.

Generally speaking, an ability to distinguish between reality and the computer simulation of reality is a useful way to get through life. Reality is not a "magical realm". It's where we live. We don't live in a computer simulation (as far as we know).

If there's a belief that the computer simulation of a human brain will produce the emulation of a human brain, then that's a claim that requires more than simple assertion.

Neural networks were designed to emulate how the brain functions, in a relatively abstract, high-level manner.

Neuroscience already knows many of the differences between how neural networks work and how the brain actually works. (For example: neural networks often use back-propagation through the same virtual neurons that ran the forward process. In biology, neurons generally process things in one direction, and hormones generally take up the other direction, though reality is even a bit more complicated than that.)

The problem is NOT that "we don't know enough". We know a FREAKIN' LOT about how the brain works these days.

The problem is that we can't figure out why there should be inherent limits that the more basic neural networks would have, compared to the more complex actual brains. The ultimate results can, potentially, be the same, even if the proximate details are not.

Your argument would require knowledge about what those inherent limits would be. Do you have any insights to offer about that?

The limits depend on what aspects of the function of the brain are inherently linked to its physical processes, and how much of it can be abstracted. It might be that the characteristics of the brain can be duplicated relatively easily. We can replace the heart with a pump. Perhaps we can replace the brain with a neural network without anything being lost. Perhaps not.

I agree with that, in a very general sense. But, I don't think it's as relevant to the point as you think.

I think the most important property of neurons, for the discussion of consciousness, is that they report on the states of other cells. This makes them uniquely qualified to build and compute models of the self, when bundled the right way.
That they do so by acting as analog-to-digital converters is only circumstantial.

That the brain works in a precisely digital way is far from certain.
 
I want to add on this one, because I have also often had a problem with the brain being compared to a computer. I find it highly misleading. One example is that we have not yet established that neuronal firing relates to an on-and-off state of activity, like in electronics. A transmission might have much more to it than just 0 or 1.

It might have been addressed already in this thread, as I am new here and do not have the time to go through the thread, but how do you define computational activity?


Welcome to the forum.

You are correct: the brain can be SIMULATED or maybe even EMULATED using a computational algorithmic model, but it is not itself following an algorithm. Much as we can describe the movement of the planets and galaxies using mathematical formulas, the planets themselves do not perform any computation to decide where they ought to be at the next time increment.
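
As a rough illustration of that analogy, here is a hedged Python sketch (simplified units and starting values, a crude Euler integrator, roughly one circular orbit at 1 AU; purely illustrative): we can compute where the planet will be next, but the planet itself runs no such calculation.

Code:
import math

G_M_SUN = 4 * math.pi ** 2        # gravitational parameter in AU^3 / year^2
x, y = 1.0, 0.0                   # start the planet 1 AU from the Sun
vx, vy = 0.0, 2 * math.pi         # roughly circular orbital speed, AU / year
dt = 0.001                        # time step in years

for _ in range(int(1.0 / dt)):    # step through one year of motion
    r = math.hypot(x, y)
    ax, ay = -G_M_SUN * x / r ** 3, -G_M_SUN * y / r ** 3
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"after one simulated year: x = {x:.3f} AU, y = {y:.3f} AU")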

Have a look at these posts
and these if you want to waste more time
 
However if someone else's goal is to show that digital logic could never give rise to consciousness like neurons can, then I think it is extremely relevant to show them that the reason neurons are able to give rise to consciousness is their digital properties.
I can see how demonstrating the digital properties of neurons can go far in arguing that digital systems can give rise to consciousness.

However: suppose it didn't do that. Suppose much of mental processing remained analog the whole time. (Say the intensity of electric neurological transmission were analog instead of taking discrete levels.) What then? I still think it would be able to be emulated on a computer, though in "sampled" form.
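
Here is a small, hedged Python sketch of the "sampled form" idea (the signal, sample rate, and number of quantisation levels are all made-up illustrative values): a continuously varying intensity can be captured as a finite list of discrete, finite-precision samples, which is all a digital emulation would have to work with.

Code:
import math

def analog_signal(t):
    """Stand-in for a continuously varying transmission intensity (0..1)."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * 3 * t)

sample_rate = 100     # samples per second (made-up)
levels = 16           # quantisation levels (made-up)

samples = [round(analog_signal(n / sample_rate) * (levels - 1))
           for n in range(10)]
print(samples)        # ten discrete, finite-precision samples of the "analog" signal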

Generally speaking, an ability to distinguish between reality and the computer simulation of reality is a useful way to get through life. Reality is not a "magical realm". It's where we live. We don't live in a computer simulation (as far as we know).
I know reality is not a magical realm. Do you? It sounds like you are trying to invoke some sort of "magical" aspect of reality that cannot be emulated on a computer system.


If there's a belief that the computer simulation of a human brain will produce the emulation of a human brain, then that's a claim that requires more than simple assertion.
There is no reason, without asserting some "magical" property of reality that can't be computed, that a human brain can't be emulated.

It might take us some time to get there, though.

The limits depend on what aspects of the function of the brain are inherently linked to its physical processes, and how much of it can be abstracted. It might be that the characteristics of the brain can be duplicated relatively easily. We can replace the heart with a pump. Perhaps we can replace the brain with a neural network without anything being lost. Perhaps not.
Name something that cannot.


That the brain works in a precisely digital way is far from certain.
It might not be binary digital all the time. But neuroscience is indicating that almost all of it is in digital form. Neurons fire at discrete levels, not with analog waves of intensity.
 
Yes, that's the evaluation that I said the objective robot would probably provide.
And that's what I'm claiming you should check your warranty for. Certainly I have no control over what a hypothetical entity derives, especially if I have no say in the algorithm. But it would have to mess up to reach this kind of conclusion.
I don't know what you mean by "include too many things".
It's because we're doing different things. You're focused on the definition; I'm focused on the concept.
How can it be in a better position
...
In what way does the objective robot have an advantage when we have full access to objective reasoning?
You're misinterpreting me--I'm not comparing the objective robot that did all of that hard work to what we could do if we did the same hard work. I'm comparing it to what we could do with introspection alone. All I'm saying here is that subjectivity per se doesn't actually give us insight into what subjectivity is--that in order to do that, we have to peer beyond the veil, because the very thing that describes what it is exactly--the "how" to the way that feeling of red is produced--is the very thing we're not privy to in a subjective manner.
I fail to see how your example of the penny differs in any fundamental respect to any physical interaction between any physical objects.
It doesn't. But neither does analyzing the "how" behind the way that feeling of red gets there. It's the same sort of analysis that's needed.

You simply require a measuring device for subjectivity to help you, that's all. And we are that measuring device. And this is not an exceptional thing--we need measuring devices to help us figure out other phenomena as well.
But the way we evaluate whether a penny is a penny is exactly the same way as the objective robot does it.
Not per your example it's not. I certainly don't weigh pennies to figure out if they are photographs of pennies.
We are perfectly well able to objectively identify pennies. We might not know precisely the way that our senses interact with the penny in order to tell that it is a penny, and not a picture of a penny, but we are aware that they act in an objective way.
No, they just act in a particular way. There's no such thing as an "objective way" in this sense. Objective merely describes a type of approach we use to figure out what that way is--to build a conceptual model of it.
However, in parallel to the evaluation of the penny, we have a subjective experience associated with the penny. The objective robot does not.
But you're not seeing the problem.

Approach 1: Define "subjective experience" in such a way that the objective robot knows what you're talking about.

I think you've already claimed this impossible. So now your "objective robot" has a word, and hasn't a clue what it means.

Approach 2: Refine the concept of "subjective experience" by using it, describing it, and so on, until the "objective robot" knows what you're talking about.

You can take this approach. There are facts about subjective experiences. In fact, there are even specific measurable aspects to particular kinds of subjective feels that we have (e.g., intentional binding; also, color-grapheme synesthete performance on certain tests, which can be designed to interfere with or enhance the particular properties). Now, this might not be enough to get the objective robot to know what red "feels like", but oddly enough, these kinds of things should be enough to "objectively demonstrate" that the phenomenon of subjective experience is real.
When the objective robot can see no difference between the objective evaluation of the penny, and the objective evaluation combined with subjective experience, then why should it consider that subjective experience fulfills any function?
But if subjective experiences are real, then there should actually be differences, and they should be measurable. If they are not measurable, then we wouldn't be able to measure them ourselves, and thus would have no way to even know of the concept.
 
Welcome to the forum.

You are correct: the brain can be SIMULATED or maybe even EMULATED using a computational algorithmic model, but it is not itself following an algorithm. Much as we can describe the movement of the planets and galaxies using mathematical formulas, the planets themselves do not perform any computation to decide where they ought to be at the next time increment.

Have a look at these posts
.....
Thanks, will have a longer look.

Perhaps your definition of a computer is narrower than ours.

Certainly true, as it appears :D

With reference to the brain, I see it as the activity of neurons participating in neuronal circuits. The claim is that the functions of these circuits are Turing-equivalent - i.e. for given inputs, the outputs can be generated (computed) by a Turing machine. The available evidence strongly suggests this is a valid model. The suggestion is that, as the brain is composed of a very large number of these circuits connected together, it is therefore, as a whole, Turing equivalent. This doesn't mean it functions like a current electronic digital computer, but it does suggest computational equivalence (IBM are taking some interesting steps to potentially bridge this divide with their neural processor).

Thanks, yes, I slowly got that understanding, but I certainly have to do more reading on the computational/theoretical side here.
 
Thanks, yes, I slowly got that understanding, but I certainly have to do more reading on the computational/theoretical side here.
It does help to have a basic understanding of how neurons operate and how this relates to the brain as a whole; some of the current theories on consciousness; a bit of computational theory, digital vs analogue systems, and so on. Because of the cross-over between fields and the different levels of abstraction involved, there's quite a lot to cover, but it's fascinating stuff - and offers the possibility of insights into the workings of your own brain/mind.

Be aware that some care is needed in the selection and use of words here, e.g. the difference between 'simulation' and 'emulation' can cause weeks of confusion because they mean different things to different people.
 
Such as? What significant non-computational biological activity in the brain are you aware of that might be relevant?
I am not a biologist, so I cannot go into much detail on the biochemistry of the brain. There is a great amount of activity in the brain which is not performing the computation necessary for the inner thinker to realise his/her own self-consciousness.

For example, consciousness may be an emergent property of some kind of electrostatic activity that evolved as a means of maintaining the integrity of the body, and which, hand in hand with neural activity, results in the holographic 3D experience of consciousness.


True enough. So all you need now is some evidence for such activity. What biological activity goes on in the brain that isn't computational in nature and might plausibly be relevant to consciousness?
Chemistry may be mappable in a computer, but it doesn't follow that chemical activity is a form of computation.
 
Even if you could identify such an activity, even that activity could be simulated in a computer.
Yes, I accept that, and in principle every chemical relationship in the body could be simulated.

Let's say, just to throw out a random example, that the physical act of swallowing food is necessary for consciousness to emerge. Of course, this particular example is not really likely, but, for argument's sake, I hope we can indulge in it...

...Who says that such an act can't also be modeled and replicated in a virtual manner, inside a computer?
This is one of my points: it would remain virtual. You may well end up with a virtual consciousness which is unaware of the physical world. It would only receive signals representing something from outside the virtual space in which it dwells.
 