
Explain consciousness to the layman.

Oh, and since you are trying to argue that the brain functions differently than a computer, perhaps you shouldn't include expert testimony that a brain has more switches than the internet. In case you forgot, switches are kind of what computers are built from. So.... yeah.

This is kind of funny.

"What computers are built from" is not the issue.

What the brain is built from is the issue, since brains are conscious and computers are not.

Which means that if you're going to argue that computers can be conscious, you need to explain how you're going to change computers to make that happen.
 
This is kind of funny.

"What computers are built from" is not the issue.

What the brain is built from is the issue, since brains are conscious and computers are not.

Which means that if you're going to argue that computers can be conscious, you need to explain how you're going to change computers to make that happen.

I thought the answer to that was to build computers with more switches and to have those switches configured in ways that reproduce the kind of information processing that brains do.

Whether those switches are made from silicon, carbon, protein, or photons seems to be the sticking point...
 
Piggy said:
Your primary error is to look at the functioning of a physical organ of the body, which can only be reproduced by building a machine that performs similar physical functions
No -- I fully understand that.

Your error is to look at the brain and conclude that its primary function can only be reproduced by making something that appears like a brain.

No, it would not have to "appear like a brain" as long as it works like a brain.

But you seem to understand the word "works" to pertain to something symbolic rather than physical, yet you've offered no justification for that metaphysics.

I assume that the brain works like every other organ in the body, every other known object in the universe, obeying only the laws of physics.

This is the point of disagreement, piggy. I think the required physical functioning is limited to causal relationships between neural activation. You think the required physical functioning is <everything else>. You have zero evidence of this, other than your magic bean theories.

I don't need any beans, or any magic.

Your "causal relationships" are entifications. Plain and simple.

Physical states are real. "Relationships" between those states are abstractions.

If you use a non-similar physical medium to record information about a system, the fact that you're using it for that purpose is irrelevant to everything that's real about that medium.

That's why I can't move into a blueprint of my house. Even if I recorded every piece of information about the house in the blueprint, and even if the blueprint changed in real time with my house, down to the molecule.

And that's why you're wrong when you claim that you can reproduce the real behavior of a real brain by preserving merely the relationships between state changes of the brain.

The relationships are abstractions, and they are preserved in a medium which does not behave physically like a brain.

That's why they can no more be a brain than a drawing can be a waterfall.

So yes, we disagree, but I have nothing to prove.

If you want to claim that preserving information about a system's causal relationships is tantamount to reproducing the system itself, then you're the one who has some evidence to provide.


All one needs to do is observe that when people are unconscious, the causal relationships between neural activation just about cease. The neurons are still alive, still doing that <everything else> magic bean stuff of yours, yet the person isn't conscious.

Common sense should lead any rational person to conclude that it is the causal relationships between neural activation that are responsible for consciousness, then.

Common sense.

Yes, there are differences in brain behavior that distinguish between conscious and unconscious states.

And yes, the neurons are still alive during all of this.

But no, this does not in any way imply that the "causal relationships" themselves are causing anything at all.

Like the laws of physics themselves, these relationships are abstractions from physical reality.

Preserving the relationships in a different medium isn't going to make the medium -- which is all that's real -- behave like the thing that you're encoding information about.

And that's all that matters at the end of the day.

How about this, piggy, just answer one simple question -- if we took a brain and moved the neurons around so they were all lying flat on a big sheet, in such a way that their synapses and axons and dendrites retained the same connectivity, etc., would the brain still work properly? Would it still support consciousness? And don't give me some stupid dodge like "well, we don't know how the body would react to that... how would the head enclose such a large shape? Wouldn't the person be top heavy?" Just use your imagination.
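For what it's worth, here is a toy sketch of what "retained the same connectivity" means in that thought experiment: the adjacency structure of a network does not change when its nodes are rearranged in space. The node names and coordinates are made up purely for illustration; whether that structure is all that matters is exactly what is in dispute.

```python
# A toy illustration of the thought experiment above: the network's
# connectivity (who connects to whom) is identical whether the nodes sit
# folded up in a small volume or spread out flat on a sheet.

connections = {              # adjacency list: which "neurons" synapse onto which
    "n1": ["n2", "n3"],
    "n2": ["n3"],
    "n3": ["n1"],
}

positions_folded = {"n1": (0.0, 0.0, 0.0), "n2": (0.1, 0.2, 0.3), "n3": (0.2, 0.1, 0.0)}
positions_flat   = {"n1": (0.0, 0.0, 0.0), "n2": (5.0, 0.0, 0.0), "n3": (10.0, 0.0, 0.0)}

# Moving the nodes changes the geometry but leaves the adjacency list untouched:
print(connections)           # the same, whichever position map we pair it with
```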

Of course it wouldn't behave the same way.

Why in the world would you think that it would?
 
Bottom line: nobody cares except you, because nobody who supports the computational model thinks reproducing the system is important.

I still don't understand why it is so hard for you to just listen to the position of the opposition. Is it really that hard? To just stop trying to put words in people's mouths, and actually pay attention to what they say? Or is it so much easier to just continue on with the strawmen?

This entire thread is basically you telling us that it is impossible to reproduce the full functioning of the brain with a computer, over and over, despite the fact that we tell you -- over and over -- that nobody is claiming otherwise. That is fundamentally the only argument you have made, in all this time, and what is sad is that nobody -- not a single person -- is actually disputing it. Yet you continue ... over ... and ... over ....

I have to say, that's news to me.

Last I heard, y'all were arguing that it's possible to program an otherwise non-conscious computer to be conscious, and that it's possible for a person inside a simulated world to become conscious of the simulated world.

And I believe you were arguing that a brain made of rope could be conscious.

That's the position I've been arguing against.

Am I wrong? Do you not believe these things?
 
My difficulty in seeing how your argument applies here is that I don't see how emulating the processing functions and interactions of a neuron using an artificial processor (microprocessor) is somehow less physical than using a biological processor (neuron) to do that processing. Both the artificial and biological processors are real, physical devices.

What sort of physical device do you think you'd need to manufacture in order to reproduce the function of the neuron?

What would that device have to do?
 
I thought the answer to that was to build computers with more switches and to have those switches configured in ways that reproduce the kind of information processing that brains do.

Whether those switches are made from silicon, carbon, protein, or photons seems to be the sticking point...

For hundreds of years, human beings have designed networks of switches of various levels of complexity. They all had the same thing in common - they were designed to switch something. A lock gate was designed to let water in or out. Its function was entirely different to the gate controlling access to a field with sheep in it, or the tap on a beer barrel.

It's only in the area of computing that we consider the switches alone, without concerning ourselves with what is being switched. We don't care, in a computer, what is being switched, provided we can switch it.

Whether or not we can do the same with the functioning of the brain is still entirely unknown. It might reduce to a network of switches. It might not.
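To make the "we don't care what is being switched" point concrete, here is a small illustrative sketch: the logic below depends only on whether each switch is open or closed, so the same truth table falls out whether the switches route electrons, water through a lock gate, or beer from a barrel. Whether brain function reduces to this kind of switching is, as the post says, the open question.

```python
# A hypothetical illustration of substrate-independent switching: the logic
# below only cares whether each "switch" is open or closed, not what physical
# medium is being switched.

def nand(a: bool, b: bool) -> bool:
    """A NAND 'gate' defined purely on switch states (True = closed/on)."""
    return not (a and b)

def xor(a: bool, b: bool) -> bool:
    """XOR built only from NANDs, so it inherits the same substrate-independence."""
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

# The truth table is identical no matter what physical medium realises the switches.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", xor(a, b))
```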
 
For hundreds of years, human beings have designed networks of switches of various levels of complexity. They all had the same thing in common - they were designed to switch something. A lock gate was designed to let water in or out. Its function was entirely different to the gate controlling access to a field with sheep in it, or the tap on a beer barrel.

It's only in the area of computing that we consider the switches alone, without concerning ourselves with what is being switched. We don't care, in a computer, what is being switched, provided we can switch it.

Whether or not we can do the same with the functioning of the brain is still entirely unknown. It might reduce to a network of switches. It might not.

Well, I guess there are hormones and neurotransmitters and other things carrying information back and forth through the brain in complex ways, but I don't know if that means we can never figure it out, nor reproduce that complexity using a computer.

Our (meaning humans in general) understanding of a lot of things has improved enormously in just the last 100 years, I see no reason to think we will be forever stuck at our current level of understanding.

100 years ago, radio was what the internet is today, and silent films were still scaring people with pictures of trains coming towards the camera. Another hundred years of that kind of progress might take us lots of interesting places that we can't see from here. Maybe not to Star Trek, but definitely somewhere else than here and now.
 
So in round numbers, call it 100 billion * 10k * 1k = 1 quintillion:

1,000,000,000,000,000,000


There's your problem. There are well over a billion computers in the world with an average of between 10 and 100 billion transistors each, so very conservatively:

10,000,000,000,000,000,000

And probably closer to:

100,000,000,000,000,000,000
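A quick sanity check of the arithmetic quoted above, as a minimal Python sketch; the neuron, synapse, and protein counts are the round figures from the quote, and the computer and transistor counts are the conservative estimates just given, not measured data.

```python
# Back-of-envelope check of the round numbers quoted above.
# All figures are the rough estimates used in this thread, not measurements.

NEURONS = 100e9               # ~100 billion neurons in a human brain
SYNAPSES_PER_NEURON = 10e3    # ~10,000 synapses per neuron
PROTEINS_PER_SYNAPSE = 1e3    # ~1,000 proteins per synapse

brain_switches = NEURONS * SYNAPSES_PER_NEURON * PROTEINS_PER_SYNAPSE
print(f"brain 'switches':      {brain_switches:.0e}")                # 1e+18, one quintillion

COMPUTERS = 1e9               # "well over a billion" computers
TRANSISTORS_LOW = 10e9        # conservative transistors per computer
TRANSISTORS_HIGH = 100e9      # upper estimate per computer

print(f"computers, low bound:  {COMPUTERS * TRANSISTORS_LOW:.0e}")   # 1e+19
print(f"computers, high bound: {COMPUTERS * TRANSISTORS_HIGH:.0e}")  # 1e+20
```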
Interesting point. You're using new-computer 2012 numbers, but as I go back and look at the average-computer 2010 figures again, it still seems funny. I'll ask Stephen next time I see him. I'm sure he'll be crushed to hear that his press release might have been technically incorrect. Wonder when it passed the mark, then?

So even if we count not just neurons, not just synapses, but individual proteins, it doesn't work. The human brain is not the most complex known system.
Adding together every computer on the planet isn't anything I'd consider a system. Nor does complexity depend solely on raw numbers. China has an easier time reverse-engineering the latest Intel chips than anyone has ever managed with a numbers-equivalent hunk of brain, despite using disturbingly similar techniques.

I think you're taking this too seriously. Don't listen so much to Leumas, it was never intended to be anything more than an idle pissing contest.

For hundreds of years, human beings have designed networks of switches of various levels of complexity. They all had the same thing in common - they were designed to switch something. A lock gate was designed to let water in or out. Its function was entirely different to the gate controlling access to a field with sheep in it, or the tap on a beer barrel.

It's only in the area of computing that we consider the switches alone, without concerning ourselves with what is being switched. We don't care, in a computer, what is being switched, provided we can switch it.
If I show you someone using locks of water to perform the same computations usually done with electricity, will you apologize for dragging out your misunderstanding for so long?
 
Quite right. Machine intelligence is bloody complicated. Mere consciousness is easy.


…sure is…all you have to do is wave your hands about and insist it is so…and voila…it is so!

Buuuuuuuuuuuuuuuuuuuuuuuuuuuuuut….since it has been all but conclusively acknowledged by just about everyone with any kind of credible dog in the race that, well, we don’t actually know what consciousness is…it is slightly disingenuous to define something as such…is it not?

Pixy response: "no"

No. Life is chemistry.


(except when it isn’t)

No, those characteristics are understood.

Life is a set of particles in a configuration that behaves in such a way as to increase the chances of remaining in a similar configuration that can behave in a similar way.


…this is understanding????

Sounds more like ballroom dancing - or poetry - than chemistry. You could be talking about anything… a rock. A configuration of ‘rock particles’ that behave in such a way as to increase the chances of remaining ‘rock particles’ that can then behave in such a way as to ‘increase the chances of remaining rock particles’… ad infinitum (or until the awe-inspiring Bagger 288 comes along and informs them that their life as ‘a configuration of rock particles’ is about to come to a conclusion)!

Could the various components of the Bagger 288 constitute the aforementioned 'configuration of particles' that are inclined to remain...as a Bagger 288? For more on the matter...(and some utterly irrelevant but entertaining nonsense) see: http://www.youtube.com/watch?v=azEvfD4C6ow.
 
The distinction's not arbitrary.

Yes, everything is physical, all real events are matter and energy.

But if you take the example of the marble machine, you can easily see that an animal can use physical computations to help him with symbolic (or logical) computations.

Only the physical computations exist independently, as matter and energy.

The logical (or symbolic) computations exist in a system which includes a designer/reader, as well as a physical medium that is used for en-/de-coding the information.

That's not an arbitrary distinction.

In fact, it's a very important one.
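As an illustration of the en-/de-coding point (a hypothetical example, not anyone's actual argument formalised): the same physical state yields different "symbolic" readings depending entirely on the convention the reader brings to it.

```python
# One physical state (a row of marbles in "left"/"right" positions), three
# different symbolic readings, each depending entirely on the reader's convention.

marble_row = [0, 1, 0, 0, 0, 0, 0, 1]   # the physical fact: positions of 8 marbles

def read_msb_first(bits):
    """One reader's convention: most-significant bit first, unsigned integer."""
    value = 0
    for b in bits:
        value = value * 2 + b
    return value

def read_lsb_first(bits):
    """Another convention: least-significant bit first."""
    return read_msb_first(list(reversed(bits)))

def read_as_character(bits):
    """A third convention: the MSB-first number taken as a character code."""
    return chr(read_msb_first(bits))

print(read_msb_first(marble_row))     # 65
print(read_lsb_first(marble_row))     # 130
print(read_as_character(marble_row))  # 'A'
```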

I hate to sound like westprog, but can you please provide an objective definition for "logical" or "symbolic" computation that doesn't include "physical" computation?
 
Your "causal relationships" are entifications. Plain and simple.

Physical states are real. "Relationships" between those states are abstractions.

Wait a second -- how does a system go from one state to another, piggy?

Magic beans? Random chance?

Are you saying that there is no statistical difference when it comes to which states a system can assume next, given any current state?

That seems to fly in the face of everything we know about the world, piggy. People tend to think, for instance, that if they are standing on the ground, there is very little chance of gravity suddenly reversing and them going flying up into space.

That you disagree with this most basic principle of reality is indicative of just how confused your arguments have become ...
 
I have to say, that's news to me.

Last I heard, y'all were arguing that it's possible to program an otherwise non-conscious computer to be conscious, and that it's possible for a person inside a simulated world to become conscious of the simulated world.

And I believe you were arguing that a brain made of rope could be conscious.

That's the position I've been arguing against.

Am I wrong? Do you not believe these things?

It has not been established that any of those things require reproducing the full functionality of the brain.

The argument is whether or not reproducing a subset of the brain's functionality is sufficient. When you chime in with the usual "but that doesn't reproduce the full functionality" it is just a red herring, and like I said nobody cares but you.

Here, let me show you how stupid it sounds:

person A -- "I think only a subset of the brain's functionality is required for consciousness, and this is how to reproduce that subset in a computer."

person B -- "But that isn't reproducing the full functionality of the brain."

person A -- "I know, I don't think you need the full functionality."

person B -- "But that isn't reproducing the full functionality of the brain."

person A -- "Ummm...."
 
A configuration of ‘rock particles’ that behave in such a way as to increase the chances of remaining ‘rock particles’ that can then behave in such a way as to ‘increase the chances of remaining rock particles’…ad infinitum

Nope.

Rocks remain rocks because the "rock" configuration is a local energy minimum. They don't do anything to increase the chances of remaining in that minimum because they are already at the minimum.

Cells, and all of life, are not in a local energy minimum. In fact they are far from it -- the hallmark of life is being able to maintain a state of extremely high potential energy. Potential energy which is then used ... you guessed it ... to behave in such a way as to increase the chances of remaining alive that can then behave in such a way as to increase the chances of remaining alive ... ad infinitum.
 
Interesting point. You're using new-computer 2012 numbers, but as I go back and look at the average-computer 2010 figures again, it still seems funny.
Actually, that first number is more like 2004. My Pentium 4 from that year had 2GB of RAM and a 256MB video card, so ~20 billion transistors. The average new computer from that year would have been about half that, and the average computer in use would have had less again.

My new computer for 2012 has 300 million transistors (ignoring the SSD), but the average new machine would probably have less than half that.

So that first number is very, very conservative.
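For anyone who wants to check the ~20 billion figure for that 2004 machine, here is a rough sketch assuming about one transistor per DRAM bit; the CPU count is an order-of-magnitude assumption, not a datasheet value.

```python
# Rough transistor tally for a 2004-era PC, assuming ~1 transistor per DRAM bit.
# The CPU figure is an order-of-magnitude assumption for a Pentium 4 of that era.

GiB = 2**30
MiB = 2**20
BITS_PER_BYTE = 8

ram_transistors   = 2 * GiB * BITS_PER_BYTE     # 2 GB main memory  -> ~17.2 billion
video_transistors = 256 * MiB * BITS_PER_BYTE   # 256 MB video RAM  -> ~2.1 billion
cpu_transistors   = 125e6                       # assumed CPU transistor count

total = ram_transistors + video_transistors + cpu_transistors
print(f"total: {total / 1e9:.1f} billion transistors")   # ~19.5 billion
```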

Adding together every computer on the planet isn't anything I'd consider a system.
Just the ones actually connected to the internet. That is a system - loosely coupled, but a system.

Nor does complexity depend solely on raw numbers. China has an easier time reverse-engineering the latest Intel chips than anyone has ever managed with a numbers-equivalent hunk of brain, despite using disturbingly similar techniques.
Well, China hasn't reverse-engineered the latest Intel chips, but researchers have (for example) simulated the activity of a rat neocortical column.

But yeah, just counting active components doesn't tell the whole story. You have to also account for things like switching frequencies - and there the transistor has a six or seven digit advantage.

I think you're taking this too seriously. Don't listen so much to Leumas, it was never intended to be anything more than an idle pissing contest.
Well, sure. This particular example doesn't matter. But the broader point about arguments from authority does matter.

If I show you someone using locks of water to perform the same computations usually done with electricity, will you apologize for dragging out your misunderstanding for so long?
We already showed him a computer based on marbles and wooden toggles, and it didn't change his position one iota.
 
This is kind of funny.

"What computers are built from" is not the issue.

What the brain is built from is the issue, since brains are conscious and computers are not.

Which means that if you're going to argue that computers can be conscious, you need to explain how you're going to change computers to make that happen.
I can see... Seven. Seven fatal problems with your argument. Which is pretty good going for four sentences.
 
I have to say, that's news to me.

Last I heard, y'all were arguing that it's possible to program an otherwise non-conscious computer to be conscious, and that it's possible for a person inside a simulated world to become conscious of the simulated world.

And I believe you were arguing that a brain made of rope could be conscious.

That's the position I've been arguing against.

Am I wrong? Do you not believe these things?

I was certainly under the impression that it was supposed to be possible in principle to reproduce the digital structure of the brain, and to simulate all the sensory input, and that supposedly the subjective experience would be identical. If nobody believes that, I'm pleased to hear it.
 
What sort of physical device do you think you'd need to manufacture in order to reproduce the function of the neuron?

What would that device have to do?

One way to find out what the essential functionality of the neuron is would be to produce an artificial neuron that was "plug-compatible". If such a neuron could replace human nerve tissue, for example (in fact, more likely dog or chimp nerve tissue, due to ethical considerations), then that would indicate that any elements not present in the artificial device were not essential.

That's not to say, of course, that a complete network of artificial neurons would necessarily need to be of the type that could plug into a human in order to produce consciousness.
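As one concrete, heavily simplified candidate for "what the device would have to do", here is a sketch of a leaky integrate-and-fire model -- the integrate-inputs-and-spike behaviour a plug-compatible replacement would at minimum have to reproduce. The constants are illustrative assumptions, not physiological measurements, and a real artificial neuron would need far more than this.

```python
# A minimal leaky integrate-and-fire neuron sketch. All constants below are
# illustrative assumptions, not physiological measurements.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.070,
                 v_threshold=-0.054, v_reset=-0.070, resistance=1e7):
    """Return spike times (s) for a list of input currents (A), one per step."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the injected current.
        dv = (-(v - v_rest) + resistance * i_in) * (dt / tau)
        v += dv
        if v >= v_threshold:          # threshold crossed: emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Constant 2 nA input for one simulated second of activity.
spikes = simulate_lif([2e-9] * 1000)
print(len(spikes), "spikes in 1 s of constant 2 nA input")
```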
 
If I show you someone using locks of water to perform the same computations usually done with electricity, will you apologize for dragging out your misunderstanding for so long?

Since I said in that post that computing works the same way on different types of switches, it's hardly likely that having my statement confirmed will cause me to change my mind. The assumption that the functioning of the human brain is purely computational is precisely the point at issue.

In fact, I said it IN THE SAME BIT YOU QUOTED.
 
I was certainly under the impression that it was supposed to be possible in principle to reproduce the digital structure of the brain, and to simulate all the sensory input, and that supposedly the subjective experience would be identical. If nobody believes that, I'm pleased to hear it.

... Which sort of turns the discussion back to the beginning again. People are mostly talking past each other because they use completely different notions/definitions when referring to consciousness. Some are looking at it from a more abstract, in-principle point of view (as a mechanism); others include the whole spectrum of human subjective experience in one particular instance of consciousness (mechanism + qualia). Different perspectives will yield vastly different kinds of explanations (or different explanatory levels, to be more precise).
 
No, those characteristics are understood.

Life is a set of particles in a configuration that behaves in such a way as to increase the chances of remaining in a similar configuration that can behave in a similar way.

Very simple, very elegant, yet it leads to amazing complexity of behavior.

That is the difference between a human and a rock, fundamentally. When lava is coming towards us, we move out of the way, thus keeping the configuration (i.e. "not dead") that allows us to repeat similar behavior in the future.

Any system can do this, it just so happens that life is by far the best at it. That is why life has existed for billions of years in a very similar form, and why all of us can trace our line of cells back to the very first cell billions of years ago.

If you think about it, given how fragile a cell is compared to a rock, it is amazing that a little cell has managed to keep itself going for billions of years. Billions of years.

Yes, I agree; my point, though, was that the (fundamental) form of matter* and the form of the laws of nature may be such that life is possible, whereas a kind of matter which merely obeys the mathematically consistent modeling of physicists may not produce life at all.

We would have no way of distinguishing the two scenarios without making the following assumption.

Again, there is an assumption that the physical modeling of physics accurately or comprehensively abstracts the nature or form of matter.

With all these assumptions around us, I would like to see this virtual being before considering it theoretically possible.


* I regard matter as spacetime and spacetime as matter.
 
