
Explain consciousness to the layman.

You are absolutely wrong about this, and if you'd read up on some neuroscience it would be clear why.
I'm reasonably up to date on my neuroscience. The basic principles haven't changed since I first studied it.

The brain operates as a real object in spacetime. Time matters, shape matters.
Shape matters how? Are you suggesting a hypothetical artificial brain must have a particular shape?

You could not get brain-like behavior from a single physical processor.
So you say. OK, what about a large number of physical processors?

ETA: better still - ISTR you agreed that a machine could be conscious - can you explain what you think the structure of such a machine might be? Or what you feel the crucial difference(s) would be between the machine you feel can be conscious and a machine running virtual processes, as I described previously?
 
I think that people who've been programming at a high level think they understand how computers work, because they've learned to think in terms of high-level programming languages, using concepts such as object-oriented code or functional programming.

Well I can't speak for everyone, but where I got my degree, computer architecture coursework was a mandatory part of the computer science curriculum.

So that pretty much invalidates the entire rest of your post -- I know exactly how computers work, from the way the transistors are laid out in RAM to the way the logic gates are organized in the ALU.

First, you are wrong about branching instructions: despite your red herring about processors carrying on with instructions and not branching (which is just an optimization, and in fact processors often have to unwind what they did because they mistakenly assumed a branch would not be taken), branching instructions are the essence of computing.

Second, even if you were correct about a program lacking branching, it doesn't matter, because an instruction putting a result in a register for the next instruction to use is a causal relationship between those two instructions -- the data one instruction operates on is determined by the previous instruction. Yes, one instruction doesn't cause the next instruction in such a case, but the result of an instruction certainly causes the result of the next instruction, and if you think that is somehow different then you are just playing word games.
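
To make that concrete, here's a toy sketch of a made-up three-instruction machine (invented purely for illustration; nothing here is a real ISA). In the straight-line stretch, each instruction's result is caused by what the previous instructions left in the registers, and it's the branch at the end that turns the sequence into a loop at all:

# Toy sketch of a hypothetical mini-ISA, invented for illustration.
# Registers live in a plain dict; "pc" is the program counter.

def run(program):
    regs = {"r0": 0, "r1": 0, "pc": 0}
    while regs["pc"] < len(program):
        op, *args = program[regs["pc"]]
        if op == "set":        # set rX, imm
            regs[args[0]] = args[1]
            regs["pc"] += 1
        elif op == "add":      # add rX, rY  ->  rX += rY
            # Straight-line data dependence: this result is caused by
            # whatever the earlier instructions left in rX and rY.
            regs[args[0]] += regs[args[1]]
            regs["pc"] += 1
        elif op == "blt":      # blt rX, imm, target: branch if rX < imm
            # The branch: which instruction runs next depends on data.
            regs["pc"] = args[2] if regs[args[0]] < args[1] else regs["pc"] + 1
    return regs

prog = [
    ("set", "r0", 0),
    ("set", "r1", 1),
    ("add", "r0", "r1"),   # this result feeds the test below
    ("blt", "r0", 5, 2),   # loop back to the add while r0 < 5
]
print(run(prog)["r0"])     # -> 5

Without the blt, the program runs each instruction once and stops; with it, the same four instructions compute something no straight-line program of that length could.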
 
If you want to build a conscious machine, you have to reproduce more than merely the relationships between the changes.

Yes yes you keep saying this, but you have produced zero actual evidence, and less than zero coherent arguments.
 
You can say this about anything real, as long as you're talking about physical computation.

No, you can't.

Any behavior of a physical system can be described as a series of transitions between states that follow SOME set of rules.

However, a given set of rules does NOT apply to ALL systems.

Otherwise, we could use a rock just like we use a calculator.
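
To put that in toy form (both systems invented for illustration): each of these can be described as transitioning from state to state under SOME rule, but they do not follow the SAME rule, which is exactly why one cannot stand in for the other:

# Two toy systems, each describable as a state machine, but governed
# by different transition rules.

def counter(state):    # rule A: count upward
    return state + 1

def toggle(state):     # rule B: flip between 0 and 1
    return 1 - state

s_a, s_b = 0, 0
for _ in range(3):
    s_a, s_b = counter(s_a), toggle(s_b)
print(s_a, s_b)        # -> 3 1: same kind of description, different rules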
 
It is more important to understand how brains work. In fact, at this point, that's the only important thing.

This pretty much sums up your entire argument, piggy.

you-- "Knowing how the brain works is the only important thing."

us-- "I know, we think it works by neurons acting as a kind of biological digital logic gate"

you --"No no no-- how the brain works. That is the only important thing."

us --"Yeah we know -- the brain is a network of neurons, so ...."

you--"Let me stop you right there -- focus on how the brain works, then you will get somewhere."
 
An instruction may change the location of the next instruction to be executed - but it typically will not, and the processor will select the next consecutive instruction.

Wait -- so you are saying that because branching instructions are far less numerous than other instructions, somehow branching isn't important in programs?

I hate to bring up common sense, but that is like saying that because the ion channels at a synapse are far less numerous than the rest of the ion channels in a neuron, activating other neurons isn't an important feature of neurons.

The fact is, branching is the essence of computing, and you know it. To say otherwise is either deliberate misinformation or just lack of knowledge.
 
Shape matters how? Are you suggesting a hypothetical artificial brain must have a particular shape?

When I say "shape matters" I mean that shape makes a difference in the function of the brain in the same kind of way that it makes a difference in my truck (of course).

If I want to build something that does what my truck does, I have to take into account not just a chain of small-scale reactions, but also larger-scale phenomena which also depend on shape (such as systems using pressure, for example).

The same is true for other organs of the body -- there's a limit to how much spatial distortion the object can handle before it stops doing some of the things it does.

That said, there's going to be a range of possible shapes that could support the function... as well as many others that can't.

But if you really want to build a replacement part for some organ in your body -- whether it's a brain or a liver or whatever -- then yeah, you have to take shape into account in your understanding of what it does and how it works in the real world.

Did you really imagine that this could be ignored?
 
ETA: better still - ISTR you agreed that a machine could be conscious - can you explain what you think the structure of such a machine might be? Or what you feel the crucial difference(s) would be between the machine you feel can be conscious and a machine running virtual processes, as I described previously?

Of course not.

Until we know how the brain performs the task, that question is unanswerable.

That's precisely what biologists are trying so hard to discover.
 
Piggy said:
If you want to build a conscious machine, you have to reproduce more than merely the relationships between the changes.
Yes yes you keep saying this, but you have produced zero actual evidence, and less than zero coherent arguments.

Relationships between the changes in the states of a system -- like relationships between anything else -- can be reproduced in any number of formats.

In some circumstances, you can reproduce those changes accurately enough that you can look at the format you're representing the changes on and tell what's going on in the real system. (If you know how to read the representation in the new format.)

Take the flight simulator, for example.

But if the real-world behavior of the actual physical system of this new format is different from the actual physical behavior of the system whose changes you're reproducing -- ETA, in ways that affect the functionality -- then the thing you're using to preserve this dynamic information will not actually work like the thing it's representing.

The simulator machine won't fly by virtue of running the sim, but only if you also build it so that it can fly.

Period.

No argument is reasonably necessary beyond those simple facts.

If you want a thing to do what another thing does, then given the fact that we live in a universe made of matter and energy, you have to make it do similar physical things.

Of course, if you want a machine to do symbolic things, that's another story. In that case you cannot speak of what the machine is doing alone... you must (it is not optional) involve a designer/user as well as the machine, or else you cannot have any symbolic tasks.

And it does not matter how your symbolic system operates. This must be true regardless, by definition.

But the brain is a physical object like all others.

Whatever it does, as far as we know, it does with matter and energy alone.

That's basic physics.

If you want to defend some alternative metaphysics, where "information" in relation to itself can be the cause of real events in spacetime, or any other scenario, be my guest. But don't ask me to accept it without a damn strong argument.

Bottom line: Preserving information, however accurate, about the relationships or changes in the states of a system does not, in and of itself, reproduce the system.

There are no Pinocchio points.
 
Well I happen to hold the view that what a brain does to make consciousness happen is transition from state to state based on the current state, a specific set of rules, and input.

So in my view any system that is capable of transitioning from state to state based on the current state, the same set of rules, and input, is capable of being conscious just like our brains.

This is a bit of a yawner.

You can say this about anything real, as long as you're talking about physical computation.

No, you can't.

Any behavior of a physical system can be described as a series of transitions between states that follow SOME set of rules.

However, a given set of rules does NOT apply to ALL systems.

Otherwise, we could use a rock just like we use a calculator.

Oh, sorry, my mistake.

In that part of the post, when I said "you can say this about anything real, as long as you're talking about physical computation", I was talking about the fact that any system which passes from state to state like another system will do what the original system does... as long as we're talking about physical computation in both cases.

You see, you can't change horses here and expect to end up on the one you started with.

If you want to build something that's going to do what a particular object does, then you have to build it to physically do what that object does, in one fashion or another.

By the same token, if you want to build something that's going to perform the same symbolic task as another object, it can do whatever it wants physically as long as you end up with the right symbols.
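
Here's a toy illustration of that (entirely made up, assuming nothing beyond plain Python): two adders that share no mechanism at all, yet perform the same symbolic task, because a user reads the output symbols the same way:

# Two implementations that share no mechanism, yet perform the same
# symbolic task for whoever interprets the output. Toy example only.

def add_arithmetic(a, b):
    # Uses the machine's arithmetic directly.
    return a + b

def add_by_counting(a, b):
    # A completely different procedure: tally one unit at a time.
    total = 0
    for _ in range(a):
        total += 1
    for _ in range(b):
        total += 1
    return total

assert add_arithmetic(123234, 323523) == add_by_counting(123234, 323523)  # both say 446757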

Your primary error is to look at the functioning of a physical organ of the body, which can only be reproduced by building a machine that performs similar physical functions, and then abstracting a symbolic representation of its behavior -- and not only that, but a symbolic representation of a logical abstraction of its behavior -- and (inexplicably) believing that the physical system that preserves this symbolic representation for you will somehow get you the results that you'd get if you built a replica.

You have to stay consistent.

You cannot speak of the physical behavior of one system and the logical behavior of another system as if they were equivalent.
 
This pretty much sums up your entire argument, piggy.

you-- "Knowing how the brain works is the only important thing."

us-- "I know, we think it works by neurons acting as a kind of biological digital logic gate"

you --"No no no-- how the brain works. That is the only important thing."

us --"Yeah we know -- the brain is a network of neurons, so ...."

you--"Let me stop you right there -- focus on how the brain works, then you will get somewhere."

Exactly.

That's a perfectly acceptable rendition, as far as I'm concerned.

The fact that you apparently have no clue why those "us" statements are problematic is evidence enough for you to hang yourself.

ETA: Oops, that last line is not suggesting suicide! It's as in "Give a man enough rope..." Please, no mod wrath!
 
I'm reasonably up to date on my neuroscience. The basic principles haven't changed since I first studied it.

However, I don't think my knowledge of the brain is that inadequate, thank you very much. I don't know what you think I need to "understand" above and beyond knowing exactly how neurons function.

us-- "I know, we think it works by neurons acting as a kind of biological digital logic gate"

If you want to see just how far the internet overshadows the human brain, look up the numbers for a typical modern computer and the number of computers on the internet.

Even then, the numbers make the internet - very conservatively - 1000 times as complex as the human brain.



In response to the above I submit the following quote from this article which is linked to in Piggy’s post that I quote below.
According to Stephen Smith, a professor of molecular and cellular physiology and one of the lead researchers, the new images revealed the brain to be vastly more intricate than we had ever imagined:
One synapse, by itself, is more like a microprocessor—with both memory-storage and information-processing elements—than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.



The above got me reminiscing about this old post
So, basically, the brain has 100,000,000,000 neurons.
The internet has (conservatively) 10,000,000,000,000,000,000 transistors.
Line the two numbers up:
100,000,000,000 neurons
10,000,000,000,000,000,000 transistors


You are comparing apples to oranges......
The components in a transistor are four layers of doped material.
The components that constitute a neuron are NUMEROUS.
Compare COMPONENTS of the neuron to components of the transistor, so as to be comparing apples to apples, not apples to oranges.


You can see just how vast and complex the internet is when considered as a system. Even though transistors are much simpler than neurons, there are just so many more of them that the numbers swamp everything else.


Not if you do the CORRECT comparison......
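
For what it's worth, here's a back-of-envelope version of that comparison, using the figures already quoted in this thread. The synapses-per-neuron range is my assumption (a common textbook range), not a number from the article:

# Back-of-envelope "apples to apples" comparison using the thread's figures.
# ASSUMPTION: the synapses-per-neuron range is a textbook estimate,
# not a figure from the linked article.

neurons              = 100_000_000_000   # 1e11, from the old post
switches_per_synapse = 1_000             # from the Smith quote
internet_transistors = 10**19            # from the old post, "conservative"

for synapses_per_neuron in (1_000, 10_000, 100_000):
    brain_switches = neurons * synapses_per_neuron * switches_per_synapse
    print(f"{synapses_per_neuron:>6} syn/neuron -> {brain_switches:.0e} brain switches"
          f" vs {internet_transistors:.0e} internet transistors")

Depending on the assumed synapse count, the brain's switch total lands between 1e17 and 1e19 -- the same ballpark as the internet figure, rather than the eight orders of magnitude the raw neuron-versus-transistor comparison suggests. That's the whole point of comparing components to components.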



Thanks Piggy for the great links.

Paul Allen agrees:

He's giving $300 million to the study of consciousness.

If only Pixy had let him know that it's already been figured out by people who understand computers. Oh, wait....

And as for the simplicity of the neuron, or the intelligence of the Internet, it turns out that your brain has more connections than every computer on earth combined.
And as for the value of the "electric choir" analogy over the computer analogy, you might find this interesting.
 
Your primary error is to look at the functioning of a physical organ of the body, which can only be reproduced by building a machine that performs similar physical functions, and then abstracting a symbolic representation of its behavior -- and not only that, but a symbolic representation of a logical abstraction of its behavior -- and (inexplicably) believing that the physical system that preserves this symbolic representation for you will somehow get you the results that you'd get if you built a replica.

You have to stay consistent.

You cannot speak of the physical behavior of one system and the logical behavior of another system as if they were equivalent.

A point that is constantly missed is that the symbolic behaviour of a system is a function of an external interpretation. Given the vast range of possible interpretations of any actual physical transitions, one could choose almost any mapping one prefers.

Of course one can't use a rock as a calculator. It makes no sense to consider a rock as a calculator - unless one is claiming that calculation happens in the calculator, rather than being something done by a person using a calculator. Then we can consider what's happening in the rock and compare it to what happens in the calculator, and there will undoubtedly be a correspondence to be found.
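
A toy version of that (the "rock", its states, and the mapping are all invented here): if the interpretation may be rigged after the fact, any state sequence can be read as any calculation you like, which shows exactly where the work is being done:

# Toy version of the external-interpretation point. The "rock" just
# yields an arbitrary fixed sequence of states; the mapping is rigged
# *after the fact* so those states "mean" the right answer.

rock_states = ["s0", "s1", "s2"]   # any sequence of states would do

def rig_mapping(states, question, answer):
    # All the computational work happens here, in the interpretation,
    # not in the rock.
    return {(states[-1], question): answer}

question = "123234 + 323523"
mapping = rig_mapping(rock_states, question, 123234 + 323523)

print(mapping[(rock_states[-1], question)])   # -> 446757

The catch, of course, is that the mapping only covers questions whose answers we already worked out ourselves -- which is the correspondence being described above.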
 
Of course one can't use a rock as a calculator. It makes no sense to consider a rock as a calculator - unless one is claiming that calculation happens in the calculator, rather than being something done by a person using a calculator. Then we can consider what's happening in the rock and compare it to what happens in the calculator, and there will undoubtedly be a correspondence to be found.

Lol, so you claim that when I enter 123234 + 323523 and the result pops up instantly, I am the one doing the calculation rather than the calculator?

Riiiiiiiiiiight ........

And you claim that if we could find a "mapping" that somehow allowed us to input "123234 + 323523" into a rock, it would spit out the result just like a calculator, after the mapping is applied?

Riiiiiiiiiight.....
 
You cannot speak of the physical behavior of one system and the logical behavior of another system as if they were equivalent.

I have no idea what you are even talking about when you try to make an arbitrary distinction between "logical" behavior and "physical" behavior, given that everything is physical.

That makes no sense.
 
In response to the above I submit the following quote from this article which is linked to in Piggy’s post that I quote below.

Um, yeah, that's kind of the whole "synaptic plasticity" thing that I mentioned, that I already know about.

And don't kid yourself into thinking that every single ion channel in a synapse somehow serves a unique function; that is absurd. They act en masse.

Try again?

Oh, and since you are trying to argue that the brain functions differently than a computer, perhaps you shouldn't include expert testimony that a brain has more switches than the internet. In case you forgot, switches are kind of what computers are built from. So.... yeah.
 
I could build a giant printing press out of Lego and string. I can't prove that it wouldn't be conscious.

No.

What you have been saying is equivalent to saying that a giant printing press, built of Lego and string, could not print.

I'm saying that a conscious human brain, built as a digital computer, would also be conscious.

Explain, specifically and succinctly, exactly why it wouldn't. Assume the brain and the computer simulation have the same IO.
 
Your primary error is to look at the functioning of a physical organ of the body, which can only be reproduced by building a machine that performs similar physical functions, ...snip ...

No -- I fully understand that.

Your error is to look at the brain and conclude that its primary function can only be reproduced by making something that physically resembles a brain.

This is the point of disagreement, piggy. I think the required physical functioning is limited to causal relationships between neural activation. You think the required physical functioning is <everything else>. You have zero evidence of this, other than your magic bean theories.

All one needs to do is observe that when people are unconscious, the causal relationships between neural activations just about cease. The neurons are still alive, still doing that <everything else> magic bean stuff of yours, yet the person isn't conscious.

Common sense should lead any rational person to conclude, then, that it is the causal relationships between neural activations that are responsible for consciousness.

Common sense.

How about this, piggy, just answer one simple question -- if we took a brain, and moved the neurons around so they were all lying flat on a big sheet, in such a way that their synapses and axons and dendrites retained the same connectivity, etc, would the brain still work properly? Would it still support consciousness? And don't give me some stupid dodge like "well, we don't know how the body would react to that...how would the head enclose such a large shape ? Wouldn't the person be top heavy?" Just use your imagination.
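
For what it's worth, here is why the computational side expects the answer to be yes -- a toy network (wiring, weights, and threshold all invented for illustration) whose update rule consults only connectivity and weights, so spatial position never enters the computation at all:

# Toy network whose update rule consults only connectivity and weights.
# Node *positions* appear nowhere below, so flattening the layout onto
# a sheet (while preserving the wiring) cannot change what it computes.

weights = {                       # synapse strengths: (pre, post) -> weight
    ("a", "c"): 1.0,
    ("b", "c"): 1.0,
    ("c", "d"): 1.0,
}

def step(active, weights, threshold=1.5):
    # A node fires on the next tick iff its summed weighted input
    # reaches the threshold.
    incoming = {}
    for (pre, post), w in weights.items():
        if pre in active:
            incoming[post] = incoming.get(post, 0.0) + w
    return {node for node, total in incoming.items() if total >= threshold}

print(step({"a"}, weights))        # -> set(): one input isn't enough
print(step({"a", "b"}, weights))   # -> {'c'}: coincident inputs fire c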
 
Bottom line: Preserving information, however accurate, about the relationships or changes in the states of a system does not, in and of itself, reproduce the system.

Bottom line: Nobody cares except you, because nobody who supports the computational model thinks reproducing the system is important.

I still don't understand why it is so hard for you to just listen to the position of the opposition. Is it really that hard? To just stop trying to put words in people's mouths, and actually pay attention to what they say? Or is it so much easier to just continue on with the strawmen?

This entire thread is basically you telling us that it is impossible to reproduce the full functioning of the brain with a computer, over and over, despite the fact that we tell you -- over and over -- that nobody is claiming otherwise. That is fundamentally the only argument you have made, in all this time, and what is sad is that nobody -- not a single person -- is actually disputing it. Yet you continue ... over ... and ... over ....
 