
Explain consciousness to the layman.

The way it looks to me at present is:

If, as the evidence suggests, a neuron is a sophisticated information processor, taking multiple input signal streams and outputting a result signal stream, we can, in theory (and probably in practice), emulate its substantive functionality with a neural processor (e.g. something like IBM's neural chip, but more sophisticated).

If, as the evidence suggests, brain function is a result of the signal processing of many neurons with multiple connections between them, we can, in theory, emulate brain function using multiple neural processors connected in a similar way (with appropriate cross-talk if necessary). [We would probably need to emulate the brain-body neural interface too, i.e. give it sensors and effectors.]

If, as the evidence suggests, consciousness is a result of certain aspects of the brain function described above, then, in theory, the emulation could support consciousness.

You're good up to here. So far, you're describing the process of building a real replica brain. You could sit this thing on your kitchen counter and literally watch it think.

Now keep in mind, the thing you just built is not conveying information. It's a real physical thing, moving some kind of electrophysical impulses through spacetime.

If you want it to convey information for you, you're going to have to come up with some kind of information that naturally mimics what it's already doing anyway.

Each neural processor can itself be emulated in software, and multiple neural processors and their interactions can be emulated in software; i.e. an entire subsystem of the brain can be replaced by a 'black box' subsystem emulation.

In theory, all the neural processors in a brain emulation, and their interactions, can be emulated in software using a single (very fast) processor, e.g. with multi-tasking, memory partitioning, and appropriate I/O from/to the sensor/effector net.
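To make the single-processor point concrete, here is a rough Python sketch - everything in it (the ToyNeuron update rule, the wiring, the numbers) is invented purely for illustration, not a claim about real neurons: one loop steps every emulated neuron in turn, doing in sequence what dedicated per-neuron hardware would do in parallel.

```python
# A minimal, invented sketch: each unit is a simple input->output transform,
# and one sequential loop steps all of them -- a single processor time-slicing
# the work of many dedicated neural processors.

class ToyNeuron:
    """Illustrative leaky-integrator unit; the update rule is an assumption."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, inputs):
        # Accumulate input, leak a little, emit a spike if over threshold.
        self.potential = self.potential * self.leak + sum(inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1.0
        return 0.0

# Hypothetical wiring: unit index -> indices of its upstream units.
wiring = {0: [], 1: [0], 2: [0, 1]}
neurons = {i: ToyNeuron() for i in wiring}
outputs = {i: 0.0 for i in wiring}
external_input = {0: 0.6}          # stand-in for the sensor net

for tick in range(5):
    # One pass of the single processor over every emulated neuron per tick.
    new_outputs = {}
    for i, upstream in wiring.items():
        signals = [outputs[j] for j in upstream] + [external_input.get(i, 0.0)]
        new_outputs[i] = neurons[i].step(signals)
    outputs = new_outputs
    print(tick, outputs)
```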

Given the above, it seems to follow that, in theory, consciousness could be supported on such a single processor software emulation of a brain.

I'm curious to know which of the above step(s) are considered problematic by those who don't agree, and why.

This is where you hit your problem.

When you emulate any real thing in software, the real things it was doing are no longer being done, and instead some very different real things are being done which are only informationally related to what the original system did.

And that informational relationship exists in the brains of those creating or reading the emulation.

So you've turned a real thing into an imaginary one.
 
The mental representation of a tornado was created from real tornadoes. But for the existence of tornadoes, there would be no such concept; furthermore, we look to real tornadoes to provide us with the understanding.

The mental representation of a tornado is merely an intermediary.

Sure, if you define the function that way, but you cannot reverse the function in the real world beyond the idea, which makes the previous tornado useless in terms of the system we're talking about, and therefore makes the expanded function useless.

A human being can only map his imaginary representation of a tornado onto the simulator. He can't map the tornado.

You can derive an imaginary representation of a tornado from observing the simulation, but no tornado will result from it.

The informational output is a brain state, and only that.

The physical output is heat and the state of the machine.

And that's it.
 
ETA: Or think of it another way. Suppose I had two rocks, and I used my sufficiently accurate thermometer to measure temperature fluctuations in both. Would I be able to compare one rock's predictions of the ESM to the other's to see if they agree?

I agree that whether it's a simulation depends on whether a human being can make predictions from it, and not whether there is a one-to-one mapping between state changes. That was in fact the point I was making - whether or not it's a simulation entirely depends on the subjective element.

However, if the claim is that it's objective, and nothing to do with human beings, then you can't legitimately exclude mappings simply because they aren't useful or meaningful.
 
I don't know about that. I typically do think of all sorts of physical systems as information processors.

But you have to choose. You can't play fast and loose.

If a star is an information processor, then yeah, the brain is an information processor, too.

If a star is not an information processor, but your laptop is, then the brain is only an information processor when you use it as one -- that is, when you bounce symbols off it to get other symbols.

Consciousness itself cannot be the result of informational computations, however, but only physical ones, because all informational computations overlay physical computations, and are only informational if interpreted -- which is to say, if they change the state of an observing brain in some way.

And if the computations that generate consciousness have to be interpreted, we're back to the homunculus problem.

No, it must be the actual physical computations, not any hypothetical informational overlay, that cause the body to experience.

It's the physical computations that must cause conscious awareness.

So consciousness is caused by information processing, but the kind that stars do, not the kind we think of our laptops doing.
 
I agree that whether it's a simulation depends on whether a human being can make predictions from it, and not whether there is a one-to-one mapping between state changes. That was in fact the point I was making - whether or not it's a simulation entirely depends on the subjective element.
There was no subjective element 4,000 years ago: no human mind back then could interpret that the rings of trees indicate their age. And yet, there are bristlecone pines alive today that are even older than that, which have been adding rings annually all the same.
However, if the claim is that it's objective, and nothing to do with human beings, then you can't legitimately exclude mappings simply because they aren't useful or meaningful.
I sure can. I can exclude the mapping of bristlecone pine's rings to the age of my cat, because the one does not inform on the other. Even if it happens that a particular bristlecone pine had 3,967 rings, and my cat were 3,967 days old, it would still be erroneous to conclude that the bristlecone pine's rings inform me about the age of my cat. Such situations we call "coincidence".

Now, it is a possible mapping (of the kind you're talking about at least, I surmise). And in this case, it would even map meaningfully, and would even map in such a way as to be true. But it still would be an error to think the two were related.

ETA: Just to be crystal clear, this is what I'm claiming:
  1. The fact that the bristlecone pine has 3,967 rings means that the bristlecone pine is 3,967 years old.
  2. It carries this meaning whether or not it is interpreted this way by a human. Even if no humans discovered this relationship, if the thing had 3,967 rings, the thing is 3,967 years old.
  3. The fact that the bristlecone pine has 3,967 rings does not mean that my cat is 3,967 days old.
  4. It does not mean this, even if my cat were 3,967 days old; such a thing would be a coincidence, if it were true.
  5. The above have nothing to do with "possible mappings" or what's "useful"; it is independent of whether or not this is meaningful to an existing human. Instead, it has to do with the causal relationships of the involved entities.
  6. What we as humans do is discover these causal relationships; we learn where the meanings are by interacting with the universe; we figure out that the rings indicate the years of the bristlecone pines.
 

The reason that the tree rings are so informative to us is because they've absorbed so little information since they were formed. Compare it to the same weight of water molecules that melted into the ocean at the same time. They've been absorbing information every instant since. That information is totally useless to us - there is no possible way to interpret it - but any given molecule is where it is due to every interaction over the previous 3,967 years.

That's what we do to make systems capable of carrying information - we isolate them. We carve in stone. We keep our books dry and at room temperature. If we have a CD which has the imprints of four years of college, it probably won't play as well. Our computers are designed to produce unchanging, isolated environments. We want as little information as possible passing back and forth. Only then can we make use of them. A system which involves huge amounts of information being freely interchanged is useless.

There might be a theory which describes how consciousness arises in environments with minimal information exchange, but I've yet to see it phrased like that.
 
The reason that the tree rings are so informative to us is because they've absorbed so little information since they were formed.
I'm perfectly willing to believe that the equivalent weight of water molecules melting into the ocean absorbs more information than the bristlecone pine tree. However, I don't agree that your principle holds. If it did, then it should be impossible to find something that absorbed more information than some other thing, yet informs us more than that other thing. And the 3 and a half inch disk of magnetic material in my terabyte hard drive, when compared with a snowflake, is just such a counterexample.
 
A machine with a computer in it can be conscious. But it must be designed and built to perform that function. Programming alone cannot make it happen.

That's all I'm saying and all I've ever said.
That's impossible. Or to put it another way, that's dualism. Once a computer is Turing complete, there's nothing more to add. Either a general-purpose computer can be programmed to be conscious, or you aren't conscious either.
 
But I gave you a good definition of computer -- a collection of particles that can use sequences of computations within itself to keep itself in a configuration where those sequences can be repeated in the future. Essentially, keeping itself the way it is.

You've also said that hearts compute. Again: if everything computes, we need another term which distinguishes computers and brains from other things, if we want to draw a parallel between them. We need to be able to define what that parallel is.

Contrast that with things like rocks and oceans. I think anyone would be hard pressed to come up with sequences of computations in rocks or oceans that could potentially increase the survivability of rocks or oceans. Can a rock be hooked up to turn off the heat so it doesn't melt itself? I don't think so.

I don't think a survival instinct is necessary for computation.
 
Now keep in mind, the thing you just built is not conveying information. It's a real physical thing, moving some kind of electrophysical impulses through spacetime.

If you want it to convey information for you, you're going to have to come up with some kind of information that naturally mimics what it's already doing anyway.
I don't follow what you mean by 'conveying information'. Naturally it's a physical thing - you need hardware to perform the switching & logic operations.

This is where you hit your problem.

When you emulate any real thing in software, the real things it was doing are no longer being done, and instead some very different real things are being done which are only informationally related to what the original system did
...So you've turned a real thing into an imaginary one.
Sounds like some sort of deus ex machina...

I really can't see what has fundamentally changed; we know that the physical implementation of a transform function isn't relevant to the processing of inputs to produce outputs in any other form of computing - a mechanical adding machine gives the same results as an electronic calculator; the 'real things' being done are physically different, but the function achieved is the same. If the electronic calculator or a computer emulates the adding machine in software, what is imaginary? Isn't there still a functional adding machine? The implementation is different, is all.
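A toy illustration of that point (the function names and examples are mine, invented for the sketch): one "adder" counts up a step at a time like a mechanical machine, the other uses the hardware's native addition, and they agree on every input - the implementation differs, the function achieved is the same.

```python
# Two very different implementations of the same "add" function give
# identical results on the same inputs.

def add_by_counting(a, b):
    """Mimics a mechanical adding machine: advance the total one step at a time."""
    total = a
    for _ in range(b):
        total += 1
    return total

def add_electronically(a, b):
    """The 'electronic calculator': the hardware's native addition."""
    return a + b

for a, b in [(2, 3), (17, 25), (0, 9)]:
    assert add_by_counting(a, b) == add_electronically(a, b)
print("Same inputs, same outputs: the implementation differs, the function doesn't.")
```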

The original neural processors were already emulating 'real things': they run microcode that translates the instructions for the neuron behaviour into their native instruction set and then executes those native instructions, so there is a level of abstraction between the instructions for behaving like a neuron and the hardware doing it. If a different neural processor chip were used, the same neuron-behaviour instructions would be translated into different native instructions and executed in a different way by the hardware, but with the same end result - similar inputs would result in similar outputs. The particular physical circuits and pathways used to achieve the transformation from inputs to outputs are not relevant. A Windows application works just the same on my native Intel Pentium box running Windows as on the Linux server running a Windows emulation.

When you use a single microprocessor to emulate multiple microprocessors, there is still physical hardware that is performing the same switching and logic operations, but now one piece of hardware is performing the switching and logic operations previously performed by many pieces of hardware. The same functions are applied to convert inputs to outputs, but the overall implementation is different.

Are you suggesting that we must have as many physical processors as there are neurons? Suppose we used a multi-core processor with a core for each neuron: in purely computing terms there is no difference between that and a single core multitasking. Is there something more to a neural subsystem in the brain than producing particular output signals from particular input signals?
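Here is a rough sketch of that equivalence (the transform and inputs are placeholders I've made up): the same per-neuron function, run once by a pool of parallel workers and once by a single sequential loop, produces identical outputs.

```python
# The same per-"neuron" function computed by one worker per unit, and by a
# single loop doing them all in turn, produces identical outputs.

from concurrent.futures import ThreadPoolExecutor

def neuron_transform(inputs):
    # Placeholder input->output rule: fire (1) if the summed input is positive.
    return 1 if sum(inputs) > 0 else 0

input_sets = [(0.2, -0.1), (-0.5, 0.1), (0.3, 0.4), (0.0, 0.0)]

# "A core per neuron": each unit handled by its own worker.
with ThreadPoolExecutor(max_workers=len(input_sets)) as pool:
    parallel_outputs = list(pool.map(neuron_transform, input_sets))

# "A single core multitasking": one loop handles every unit in turn.
sequential_outputs = [neuron_transform(x) for x in input_sets]

assert parallel_outputs == sequential_outputs  # in purely computing terms, no difference
```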

If, as you seem to suggest, the physical implementation of the signal input-to-output transform function is fundamentally relevant to brain operation, then clearly the brain must be in some way fundamentally different to the other computing devices we know of. In that case we can't expect to be able to replace a neuron with a processor that implements that input-to-output transform function differently, and we couldn't, even in theory, replace, say, the visual cortex with a single-processor black box that takes the same inputs and produces the same outputs, and still expect to see.

This doesn't sound reasonable to me - have I misunderstood your objection?
 
I'm perfectly willing to believe that the equivalent weight of water molecules melting into the ocean absorbs more information than the bristlecone pine tree. However, I don't agree that your principle holds. If it did, then it should be impossible to find something that absorbed more information than some other thing, yet informs us more than that other thing. And the 3 and a half inch disk of magnetic material in my terabyte hard drive, when compared with a snowflake, is just such a counterexample.

I'm quite happy with a subjective view of information - that which can be accessed by a human being. I'm quite happy with an objective view of information - a record of all the interactions with other objects. It's trying to have both and neither at the same time that presents the difficulties.
 
What are you talking about, and what does it have to do with my post?
 
Aw, come on piggy. I can program a simulation with any rules I want. It can have no gravity, for instance, or it can have additional rules.



What? I think you're very confused now. Rules. Like laws. Legal laws. I follow some.

You can program your laptop to have no gravity all you want, but try weighing it. All you're doing is imagining something.
 
You've also said that hearts compute. Again: if everything computes, we need another term which distinguishes computers and brains from other things, if we want to draw a parallel between them. We need to be able to define what that parallel is.

I think you are going down the wrong path if you really want to claim that hearts do not compute but brains do.

First, hearts rely on neurons just like brains.

Second, even if they didn't, hearts are full of biological cells, each of which satisfies any definition of "computing" that anyone could come up with. Most of the chemical cascades that occur in cells are just as discrete in nature as the operation of a transistor.

I don't think a survival instinct is necessary for computation.

I specifically said it was not. I specifically said that it just has to be an *option*.

The only reason that is thrown in there at all is that, without it, there is no way to define "mapping a large set of inputs to a small set of outputs."

Look at a transistor -- even though we think of the voltage spike at the emitter as a discrete jump when the transistor switches, in reality it is still a continuous curve. Without some heuristic, there is no switch action. And the only heuristic available that isn't human-dependent is ... whether the transistor can be used by some system in order to increase survivability of the owning system.

Just because no system uses a transistor in such a way is irrelevant -- if a system *could* use a transistor like that, then we can be confident that yes the transistor is a switch and yes it does map a large set of inputs to a smaller set of outputs.
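A rough sketch of what I mean (the transfer curve is a generic sigmoid, not any real device model, and the threshold is arbitrary): the response is continuous all the way through, and it only becomes a "switch" once some threshold is chosen to collapse the continuum of inputs onto two outputs.

```python
# Illustrative numbers only: a continuous transfer curve plus a chosen
# threshold that collapses the continuum of inputs onto two discrete outputs.

import math

def transistor_response(v_in):
    """A smooth, continuous transfer curve (generic sigmoid, not a device model)."""
    return 1.0 / (1.0 + math.exp(-10.0 * (v_in - 0.5)))

def as_logic_level(v_in, threshold=0.5):
    """The heuristic that makes it a 'switch': collapse the curve to 0 or 1."""
    return 1 if transistor_response(v_in) >= threshold else 0

for v in [0.0, 0.3, 0.49, 0.51, 0.7, 1.0]:
    print(f"v_in={v:.2f}  analog={transistor_response(v):.3f}  digital={as_logic_level(v)}")
```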

NOTE -- I thought of a better metric Belz, see my next post.
 
I just thought of something ( thank you Belz ).

I dunno if anyone else has been following the whole "isomorphism" discussion here, but it just dawned on me that this is the difference between a computer/brain and a rock.

A computer or brain can exhibit behavior isomorphisms to far more systems than a rock can.

Period.

That's why computers and brains make such good simulators compared to everything else.

I think we could arrive at an objective metric along the lines of "how many different things a system can simulate" and base the definition of whether something is a "computer" on that. I imagine there would be a fairly low threshold, that would exclude everything we want it to exclude and include everything we want it to include. Certainly rocks and oceans and hearts would be non-computing, and brains and computers would be computing.
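To give a feel for how such a metric might be operationalised, here is a deliberately tiny, made-up sketch: treat a "system" as a finite set of states with a step function, and score it by how many small target systems its dynamics can be mapped onto. Even at this toy scale, a four-state counter outscores a one-state "rock".

```python
# Toy operationalisation of the metric: a "system" is a finite state set plus a
# step function; its score is how many small target systems its dynamics can be
# mapped onto (brute-force homomorphism search, feasible only at toy sizes).

from itertools import product

def can_simulate(cand_states, cand_step, targ_states, targ_step):
    """True if some onto mapping h satisfies h(cand_step(s)) == targ_step(h(s)) for all s."""
    for assignment in product(targ_states, repeat=len(cand_states)):
        h = dict(zip(cand_states, assignment))
        if set(h.values()) != set(targ_states):
            continue  # every target state must be reachable under the mapping
        if all(h[cand_step(s)] == targ_step(h[s]) for s in cand_states):
            return True
    return False

def simulation_score(states, step, targets):
    return sum(can_simulate(states, step, ts, tf) for ts, tf in targets)

# A 4-state counter (flexible dynamics) versus a 1-state "rock" (it just sits there).
counter_states, counter_step = [0, 1, 2, 3], lambda s: (s + 1) % 4
rock_states, rock_step = ["rock"], lambda s: s

targets = [
    ([0, 1], lambda s: 1 - s),               # a 2-state oscillator
    (["a"], lambda s: s),                     # a 1-state fixed point
    ([0, 1, 2, 3], lambda s: (s + 1) % 4),    # a 4-cycle
]

print(simulation_score(counter_states, counter_step, targets))  # 3: simulates all three
print(simulation_score(rock_states, rock_step, targets))        # 1: only the fixed point
```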
 