
Explain consciousness to the layman.

Status
Not open for further replies.
But we're winding up floating further and further away, as if our brains have a special property nothing else has: the ability to have "imaginary" things that are not "physical", the ability to "interpret", which "objective" things cannot possibly have since they could be interpreted in an infinite number of ways (and physics is only made of "objective" things), and so on. By all rights, then, if I take this to its logical extreme, we shouldn't be able to imagine, we shouldn't have subjective views, and we shouldn't be able to mean anything when we make claims--much less run a simulation that's supposed to be about a system--because we're physical! Ergo, we're objective. Ergo, we just don't have the stuff to generate these things.

But we do.

So someone's wrong.



But that is precisely the point..... the fact that we can imagine things is exactly the point.

When we imagine things that have never existed in reality it means that we created something that has no basis in reality.

This does not then make it possible for what we imagined to become reality just because our real brain imagined it. There will never exist a Flying Spaghetti Monster just because we could imagine one.

Therein lies the problem with this debate.

We can IMAGINE that a simulated sentient world can exist in the ones and zeros of silicon chips.....but that does NOT mean that it is POSSIBLE for this imaginary construct to actually exist.

There are REAL PHYSICAL constraints why it cannot exist. These constraints cannot be IMAGINED AWAY.

We can imagine that a machine that simulates the action of the brain as we understand it may give rise to something like a real brain. But the imaginary picture did not take into account the real physical constraints that may make this impossible.

The brain is the result of billions of years of evolution that eventually gave rise to the bundle of biological matter that interacts within and without itself. It can maintain electrical impulses from within and without while also modifying, reverberating, attenuating, augmenting and initiating these signals, cross talking and cross sparking and so on and so forth, along with a combination of internal and external positive and negative feedback systems that give rise to even more feedback.

I think it stands to reason that an inert collection of doped silicon might not quite be up to the same task, since the kinds of processes that occur in the brain are not taking place regardless of the simulation being run. The physical process is NOT the same process.

The design of a high-frequency circuit has to take into consideration the effects of the lengths, widths and proximity of conducting lines and ground planes, which at low frequencies do not affect the system. A perfectly working digital logic circuit can fail if the switching frequency is raised beyond a certain level, due to capacitances and inductances that had no effect at the lower frequencies but make all the difference at the higher ones.
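To make the point above concrete, here is a hedged back-of-the-envelope sketch of how a parasitic RC time constant caps the usable switching frequency. The component values and the "5 tau per transition" rule of thumb are illustrative assumptions, not measurements of any real circuit:

```python
# Back-of-the-envelope: parasitic capacitance limits switching speed.
# All component values below are invented for illustration only.

R_trace = 50.0        # ohms: driver output resistance plus trace resistance
C_parasitic = 10e-12  # farads: stray capacitance of the trace to ground

tau = R_trace * C_parasitic          # RC time constant, in seconds
# A common rule of thumb: allow ~5 tau for a clean logic transition,
# and two transitions (rise + fall) per clock period.
t_transition = 5 * tau
f_max = 1.0 / (2 * t_transition)     # rough upper bound on clock frequency

print(f"tau = {tau:.2e} s, rough f_max = {f_max / 1e6:.0f} MHz")
```

With these invented values the bound comes out around 200 MHz; halve the stray capacitance and the bound doubles, which is exactly why layout that was irrelevant at low frequency suddenly dominates at high frequency.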

When we build scale models to carry out some experiments, say of earthquake effects on a dam, we do not just scale things down. There has to be further consideration for the fact that some things behave differently at a small scale than at a larger scale. For example, the surface tension of water and van der Waals forces can come into play at the smaller scale, while at the larger scale they are immaterial.

Take for example the Jesus Lizard. If it is scaled up it won’t be able to run on water….yet it is the same lizard for all intents and purposes. Something got lost in the transformation…. What is it?
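The lizard example is the square-cube law at work: scale a body by a factor k and the supporting forces tied to area (like the slap area of a foot on the water surface) grow like k², while weight grows like k³. A small numeric sketch, with the normalization invented purely for illustration:

```python
# Square-cube law sketch: weight outgrows area-dependent support forces.
# The numbers are illustrative, not measured lizard data.

def scaled_ratio(k: float) -> float:
    """Ratio of (area-based support force) to (volume-based weight) at
    linear scale factor k, normalized to 1.0 at k = 1."""
    support = k ** 2   # forces from foot contact area scale like length squared
    weight = k ** 3    # weight scales like length cubed
    return support / weight

for k in (1, 2, 10):
    print(f"scale x{k}: support/weight = {scaled_ratio(k):.2f}")
```

At 10x scale the support-to-weight ratio drops to a tenth of the original, which is the "something" that gets lost when the lizard is scaled up.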

What I am trying to say with all this is that certain SYNERGETIC and EMERGENT properties of COMPLEX systems can be drastically affected due to differences in physical interactions within the subsystems and changing the nature or scale of these physical interactions will change the overall system and most likely not give rise to the same emergent and synergetic effects.


See this post for more on this.
I think the problem with all this "could", "may be", and "possibly" is that most of the people who are hypothesizing that "simulation = reality" have never built a simulation, a computer, or either one.

If one actually builds a computer from scratch....I do not mean assemble one.... I mean actually make a processor from scratch using FPGAs or actual transistors, plus all the memory and other peripherals needed.... then one might get an appreciation for how unlikely it is that it would ever become conscious, regardless of the sophistication of the simulation software it is running.

The fact that a computer needs software is PRECISELY why it is not ever going to be a brain. Brains DO NOT RUN SOFTWARE.

In my opinion the only thing we might build that has any chance of approaching a brain is an actual brain-like mechanism such as a neural network. And I do not mean a SIMULATED NN.... I mean an actual one, with op-amps and actual neural connections.....and even then it would have to have a certain CRITICAL MASS of connections and nodes.

I personally think that consciousness is an EMERGENT PROPERTY of a CRITICAL MASS of COMPLEXITY..... much like the individual cells in a body ALONE would not be able to crawl out of a primordial pool, but as they COALESCED they created a SYNERGY where the whole is greater than the sum of its parts. The reason brains do more than just input and output is an EMERGENT PROPERTY of the CRITICAL MASS of brain matter and activity. The brain can generate its own SIDE-EFFECT INPUTS that are not actually inputs from anything real, except that they are a result of INTRA-CEREBRAL activity.
In other words, because of the brain’s bundling it has become its own “universe” where echoes of PAST EXTERNAL inputs may reverberate and rebound and regenerate and be maintained and these become side-effect inputs to other systems within the brain. The same for brain outputs…. they too can be side-tracked and become UNINTENDED inputs to other parts and again be maintained and reverberated etc.
Look at epileptics…. They often report that just before a seizure they see images, hear sounds and often smell aromas that to them are as real as the real thing. We know epilepsy is a result of UNREGULATED CROSS FIRING of electrical activity from one part of the brain to another. What if, on a smaller and SUBTLER scale, some SHORTING can actually produce EVOLUTIONARILY SELECTED-FOR effects? Maybe THOUGHT is nothing but “epileptic fits”, so to speak, that have elevated the ENVIRONMENTAL FITNESS of the organisms that had them, instead of producing convulsions and loss of control over the body... :confused::confused::confused:

If that is the case, then maybe even neural nets won’t reach that threshold even with a critical mass, unless we allow for RANDOM SHORTINGS that eventually evolve into CONTROLLED SHORTINGS :confused: :confused: :confused:
 
I would like for someone to define exactly what is meant by "objective."

If all an agent can ever have access to is its subjective perception of its environment, I question how "objectivity" can ever be arrived at other than as a net agreement among the subjectivity of multiple agents.

If that is the case, then why is our perception of our world "objective" while the simulated agents' collectively agreed-upon perception of their world isn't?

???
Good question; this goes back to what I was saying about ontologies. With an ontology you assume that a certain idea about existence is the actual one and work from there. The trouble is that it is beyond us to determine what the actual ontology is, because we can only know what it appears to be to us, which might be a fabrication or simulation.

So the objective is what it is that exists, what we are assuming by the ontology we choose.

This is why I agree with you entirely if I assume the materialist ontology. This is one kind of monism; there are others, including the one I mentioned earlier of an ontology of self (which is a monism).
 
But that is precisely the point..... the fact that we can imagine things is exactly the point.
But our brains are physical objects. The fact that we can imagine things does not mean that our brains violate the laws of physics.
When we imagine things that have never existed in reality it means that we created something that has no basis in reality.
When we imagine things that have never existed, we are not breaking the laws of physics. Somehow being able to do this must be something that things following the laws of physics can do.

Incidentally, the phrase "created something that has no basis in reality" is clumsy. I don't know what you mean to convey here, but it just sounds flat out impossible to me; if you create it, then it would exist in reality. If it doesn't exist in reality, no creation has occurred. Perhaps all that occurred is that you created a map to something that doesn't exist.

Remember, however it is that we imagine things that do not exist, if we do that, it must be physically possible to do that. Because we are physical.
This does not then make it possible to become reality just because our real brain imagined it. There will never exist a flying spaghetti monster just because we could imagine one.
And yet, you can cause me to imagine the FSM just by mentioning his noodly name. And I'm a physical entity. So it must be possible for physical entities to imagine the FSM.
Therein lies the problem with this debate.
The problem with this debate comes up whenever someone invents a kind of partition that implicitly rules out the possibility for a physical lump of matter to do something that we know that physical lumps of matter do, in fact, do. So for example when we separate "physical computation" from "symbolic computation", drawing this ever so critical distinction that we must draw in order to understand it; and require that physical computation have an "interpreter" to count as information processing, we shoot ourselves in the foot, because now we've explained that nothing in our brains can create an interpreter, because everything happening in our brains is meaningless without one.

After all, as westprog would say, there are an infinite number of mappings between the things our brain does and what it means--and anything our brain does can mean anything we want it to mean.

You don't think that is a problem?
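The "infinite mappings" point can be made concrete with a toy example (my own illustration, not anyone's argument above): the same fixed physical state, here four bytes, yields entirely different "meanings" under different interpreters, and nothing in the bytes themselves selects one interpretation.

```python
# One fixed "physical" state, three different interpretations of it.
import struct

state = b"\x42\x48\x49\x21"  # four bytes, chosen arbitrarily

as_int = struct.unpack(">I", state)[0]    # read as a big-endian unsigned int
as_float = struct.unpack(">f", state)[0]  # read as a big-endian 32-bit float
as_text = state.decode("ascii")           # read as ASCII characters

print(as_int, as_float, as_text)  # three "meanings", one state
```

The bytes never change; only the mapping does. That is the foot-shooting problem: if meaning requires an external interpreter, the interpreter inside a brain needs one too.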
 
This is why I agree with you entirely if I assume the materialist ontology. This is one kind of monism; there are others, including the one I mentioned earlier of an ontology of self (which is a monism).
Right. The interesting thing here, though, is that materialism is consistent with all evidence, and other ontologies are consistent with the evidence precisely insofar as they are consistent with materialism.
 
The problem with this debate comes up whenever someone invents a kind of partition that implicitly rules out the possibility for a physical lump of matter to do something that we know that physical lumps of matter do, in fact, do. So for example when we separate "physical computation" from "symbolic computation", drawing this ever so critical distinction that we must draw in order to understand it; and require that physical computation have an "interpreter" to count as information processing, we shoot ourselves in the foot, because now we've explained that nothing in our brains can create an interpreter, because everything happening in our brains is meaningless without one.

After all, as westprog would say, there are an infinite number of mappings between the things our brain does and what it means--and anything our brain does can mean anything we want it to mean.
Right. That argument is dualism, pure and simple. Either the argument is logically inconsistent, or the Universe is. And there's no evidence for the latter.
 
Ok, I'm confused. Isn't that the opposite of what you've been arguing?

I've been arguing that conscious entities on a computer don't necessarily exist. However, if they do, they exist in the same world that we do. They don't exist in a different, virtual world. They will experience the world differently. Their experience of the world will be defined by their sensory apparatus and physical makeup.
 
Yes, that happens to be the point of the hypothetical. :rolleyes:



I'm not. You haven't been following very well.

To be fair, that is the point of this hypothetical. There are other segments where the hypothetical existence of conscious entities is used as evidence for their actual existence, but here we're discussing the real existence of other worlds. (I think, there seem to be cross purposes.)
 
Good question; this goes back to what I was saying about ontologies. With an ontology you assume that a certain idea about existence is the actual one and work from there. The trouble is that it is beyond us to determine what the actual ontology is, because we can only know what it appears to be to us, which might be a fabrication or simulation.

So the objective is what it is that exists, what we are assuming by the ontology we choose.

This is why I agree with you entirely if I assume the materialist ontology. This is one kind of monism; there are others, including the one I mentioned earlier of an ontology of self (which is a monism).

I think that some kind of underlying reality is the basic assumption of materialism. If there isn't some kind of underlying reality, then there is only our own subjective experience, and no reason to assume that any other kind is possible.
 
The problem with this debate comes up whenever someone invents a kind of partition that implicitly rules out the possibility for a physical lump of matter to do something that we know that physical lumps of matter do, in fact, do. So for example when we separate "physical computation" from "symbolic computation", drawing this ever so critical distinction that we must draw in order to understand it; and require that physical computation have an "interpreter" to count as information processing, we shoot ourselves in the foot, because now we've explained that nothing in our brains can create an interpreter, because everything happening in our brains is meaningless without one.



I wonder how many "physical lumps of matter" you can list that do NOT do what ONLY ONE KIND of physical lump of matter does.

I know of mountains, I know of mounds, I know of rocks and volcanoes and sand dunes and Earth itself and the moon and the sun and the galaxy and piles of ***** (wait... this last one might be a possibility judging by some people's cranial lumps).

NONE are currently capable of doing the action of ONE KIND of lump of matter.

Some of these other lumps have existed for a lot longer than the brains of any animals....yet they did not manage to achieve the action of the brain.

THE ONLY lump of matter that managed to achieve consciousness so far in billions of years of the existence of Earth is the brain...... can you name any other "physical lumps of matter" that are conscious?

So I think your statement “we know that physical lumps of matter do” in fact implies DOING NOTHING….right? That is what almost all “physical lumps of matter do”…. nothing.

So if you can tell me what other lumps of matter are prone to developing consciousness other than brains, then maybe we can entertain the idea that silicon chips might do so.

So assuming that, because ONE TYPE of physical lump of matter and NO OTHER can achieve consciousness, others are just as easily going to do it is a bit unfounded in epistemological fact.

But what “we know that physical lumps of matter do” in 99.999999% of all lumps of matter that exist is NOTHING. Just because 0.000001% of all matter that exists on Earth in the last 5 billion years managed to evolve the ability to be conscious does not imply that a silicon chip will do it by any stretch of epistemological imagination.

And this is not dualism, as some misguided people might like to claim.

Saying that a mountain is not a desert is not dualism. It is stating that two things that are different are not the same. Saying that a silicon chip running a simulation is not a brain is like saying a lump of cheese is not a car..... that is not dualism.

Have a look here for a proper definition of dualism as related to this topic.
 
I've been arguing that conscious entities on a computer don't necessarily exist. However, if they do, they exist in the same world that we do. They don't exist in a different, virtual world. They will experience the world differently. Their experience of the world will be defined by their sensory apparatus and physical makeup.

Which is exactly what I was saying, using different words. Did you really not understand what I meant by "world"?

Really?
 
But our brains are physical objects. The fact that we can imagine things does not mean that our brains violate the laws of physics.
When we imagine things that have never existed, we are not breaking the laws of physics. Somehow being able to do this must be something that things following the laws of physics can do.

Incidentally, the phrase "created something that has no basis in reality" is clumsy. I don't know what you mean to convey here, but it just sounds flat out impossible to me; if you create it, then it would exist in reality. If it doesn't exist in reality, no creation has occurred. Perhaps all that occurred is that you created a map to something that doesn't exist.

Remember, however it is that we imagine things that do not exist, if we do that, it must be physically possible to do that. Because we are physical.

And yet, you can cause me to imagine the FSM just by mentioning his noodly name. And I'm a physical entity. So it must be possible for physical entities to imagine the FSM.



And what does all this gobbledygook mean? Does it mean that the FSM exists?

Does it imply that Superman can see through women's skirts and burn you by gazing at you?


You imagining a conscious silicon chip running a simulation makes it just as likely to occur as a Superman who leaps higher than skyscrapers.

Now, imagine a scantily attired Wonder Woman and you might convince me :D
 
Which is exactly what I was saying, using different words. Did you really not understand what I meant by "world"?

Really?

You may well have the same understanding of "world" that I do - in which case, we are not disagreeing. I am disagreeing with the claim that these virtual worlds are real (whatever that means).
 
And what does all this gobbledygook mean? Does it mean that the FSM exists?

It means there is a group of particles in the brain of yy2bggggs, the behavior of which maps to what yy2bggggs considers the behavior of the FSM.

At the very least the behavior of the FSM exists as those particles.

Whether or not there is another set of particles that constitute the FSM floating around in the upper atmosphere or wherever he is said to live is irrelevant if all we want to establish is some sort of baseline existence of the behavior of the FSM.
 
... The problem with emulating a physical neuron in software is that your software emulation is useless... that is, unless the physical apparatus running it can also take the physical input of a neuron and convert it into the physical functional equivalent of the output of a neuron, which is to say, the kind of physical output the next neuron will accept.

I'm sorry, I assumed saying that would be stating the obvious. Take it as read that the black box can interface appropriately with the neurons providing its inputs and the neurons receiving its outputs.
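For concreteness, here is a minimal sketch of what the software inside such a black box might compute, using a leaky integrate-and-fire model, a standard textbook simplification of a neuron, not a claim about how an actual emulation would work. All parameters are invented round numbers:

```python
# Minimal leaky integrate-and-fire neuron: one candidate for the
# computation inside the hypothetical black box. Parameters are
# invented illustrative values, not measured biology.

def simulate_lif(input_current, dt=1e-4, tau=0.02, r=1e7,
                 v_rest=-0.070, v_thresh=-0.054, v_reset=-0.070):
    """Return spike times (s) for a list of input currents (A),
    one sample per timestep dt."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        # Membrane potential leaks toward rest, driven by input current.
        dv = (-(v - v_rest) + r * current) / tau
        v += dv * dt
        if v >= v_thresh:        # threshold crossed: emit a spike, reset
            spikes.append(i * dt)
            v = v_reset
    return spikes

# 100 ms of constant 2 nA input produces a regular spike train.
spikes = simulate_lif([2e-9] * 1000)
print(len(spikes), "spikes at", [round(t, 4) for t in spikes])
```

The interfacing caveat from the posts above is exactly what this sketch leaves out: the box still has to transduce real presynaptic input into `input_current` and turn `spikes` back into the kind of physical output the next neuron will accept.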
 
These things which you call "practical difficulties" are real features of the system which are known by direct observation and experiment to affect the behavior of the system.

You don't just get to ignore them.
OK, if you can enumerate those features you feel would invalidate this thought experiment, perhaps I can find a way around them so we can progress.
 
And what about replacing my whole truck with something else that does exactly what my truck does?
I know that is possible, both in theory and in practice.

Do you think, given the context discussed, such a black box is theoretically possible?
 
The brain is the result of billions of years of evolution that eventually gave rise to the bundle of biological matter that interacts within and without itself. It can maintain electrical impulses from within and without while also modifying, reverberating, attenuating, augmenting and initiating these signals, cross talking and cross sparking and so on and so forth, along with a combination of internal and external positive and negative feedback systems that give rise to even more feedback.

Not 'cross sparking'. Not any sparking.

What I am trying to say with all this is that certain SYNERGETIC and EMERGENT properties of COMPLEX systems can be drastically affected due to differences in physical interactions within the subsystems and changing the nature or scale of these physical interactions will change the overall system and most likely not give rise to the same emergent and synergetic effects.

It's a very good point. I suspect we'll only know for sure whether it will be a problem by trying to emulate brains. I also suspect that by the time we can emulate the simplest mammalian brain, we'll understand enough to know whether it's likely to be an insurmountable problem or not.
 