
Explain consciousness to the layman.

Status
Not open for further replies.
You are on both sides of the river.

I am on one side.

There are no rivers in my head when I think of rivers, there are only representations of rivers, which are not rivers.

There are no tornadoes in the machine, only representations of tornadoes, which is why those representations cause no damage.

Everyone knows that the representations are real.

Some folks, however, forget that they are entirely representations, and never anything else.
 
That's actually what we're talking about.

So now, computers can't be conscious because they're not real, but they'd feel real if they were made to be conscious except they're not because they're computers.

This is circular reasoning.

That bears no resemblance to what I've said.

Computers are real. You can go look at one anytime you like.

Biological machines can be conscious. We know that because we know people are conscious.

At the moment, we have no reason to doubt that some sort of machine could someday be built that would also be conscious. I mean, why not?

To do that, we'll have to build the machine to do whatever it is that brains do when they're conscious.

There's no reason why a computer can't be part of that machine.

In other words, the real-world result can involve programming, but it can't be the result of programming alone because no real-world phenomenon is the result of programming alone.

The real-world results of the behavior of the simulation machine, if they're going to be a cause for any real-world event, have to be real, not informational.

For example, the car doesn't get painted because the logic is employed, it gets painted because part of the output of the physical computation that works according to the logic is a set of electrical impulses that power machine arms.

A machine with a computer in it can be conscious. But it must be designed and built to perform that function. Programming alone cannot make it happen.

That's all I'm saying and all I've ever said.
 
That's the whole point of a hypothetical.

Ideally, a what if leads one somewhere. I don't mind exploring what ifs either to discover what the hypothesis leads to, or to discover what needs to be true to bring them about. Just considering that something might be possible in isolation doesn't inform us. The very question under consideration is whether conscious entities can exist in a computer. Supposing that it is true, in isolation, doesn't help us decide that it really is true.
 
Here you're making a few assumptions I don't hold. First, you're assuming that the person is the same as the thing the person is consciously aware of. Second, you're assuming that meaning is produced by conscious awareness.

Yes and no. Let's look at it again:

You might say, "Well, yes, but there’s a difference--the person understands the meaning of the symbols, whereas the machine does not."

And that is correct, but it’s not as significant as we might think, because although the human consciously understands the meaning of the symbols being used, he didn’t consciously come to the conclusion that the right answer was "Five". Instead, it "occurred to him" or "popped into his head".

In fact, he probably had begun to say the word "Five" before he was consciously aware of thinking "Five".

Note that I say nothing about "the person" except in the hypothetical objection I'm addressing. Instead, I talk about what this lump of stuff we call a human being is doing.

Some of what this human's brain is doing is involved in producing a sense of self and experience (which is one way of defining "the person", but not the only way) and a lot of it is not.

And it makes perfect sense to say that the human in this example "understands the meaning" of the transaction consciously and non-consciously (the non-conscious bits of the brain got the answer right, didn't they?) but note that I was addressing a rhetorical question: "Well, yes, but there’s a difference--the person understands the meaning of the symbols, whereas the machine does not."

Generally, that sort of question refers to what we consciously understand, and it was that objection that I was addressing, and ultimately rejecting as an objection because the issue of whether or not we consciously understand what our brains are doing is different from the issue of what our brains are actually doing.
 
The main problem here, I think, isn't that we have no good word for consciousness. In fact, I think this notion is contradictory. The problem, instead, is that the word that we do have is too vague--it does not precisely define what we have.

The reason that the "word is too vague" is simply that we haven't yet figured out how the process works. The word will become less vague as our understanding improves.
 
I know all that.

But the phrase "the brain is a computer" is useless since rocks are now computers too. "The brain behaves like an electronic computer" would be closer to what we want to say, only it doesn't really behave like that.

So, again, if the word "compute" means "changes state", why don't we use that instead? And second, how do you describe the behaviour of the brain, then?

I was under the impression that to call something "computation" it had to meet some more criteria than "changes state". Now, don't misunderstand me: I'm arguing the use of the word itself, not whether a computer or simulation can be conscious.

When we use words, it's important that the context is known. We use "compute" quite correctly, meaning something specific quite different to "change state". If, however, we are describing the objective behaviour of a physical system, then we have to be considering the changes in physical state of that system. What else can we consider?

If we want to discuss how to generate the first 100 prime numbers, we can use the language of computation without considering at all how a computation comes to physically exist. If we want to discuss how the physical phenomenon of consciousness comes to exist, we have to do it in physical terms.
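To make the contrast concrete: here is the first-100-primes computation expressed purely in the language of computation, with no reference whatsoever to the physical substrate that runs it (a minimal sketch by way of illustration):

```python
def first_n_primes(n):
    """Return the first n prime numbers, by simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime iff no earlier prime divides it
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

first_n_primes(100)   # → [2, 3, 5, ..., 541]
```

The same abstract description yields the same result whether it is physically realised by transistors, relays, or pencil and paper; nothing in the code says, or needs to say, how the computation comes to physically exist.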
 
Can Consciousness Be Calculated?
...
Which means that if consciousness were the informational output of a physical calculation, it would require an interpreter in order to understand it as such, which we have not got.

We're a planning machine. Don't stop at the inputs and outputs--we move and interact with the environment as well, and can perceive its inputs and outputs. It's as if not only are we information processors processing information from the environment, but we're also treating the environment as an information processor processing information from us.

And part of what we look at in the environment, and part of what we study the effects of, is the result of our interactions with the environment. This is where meaning comes from. So, yes, we do have it (in fact, I find it absurd to suppose we don't have it). And no, it doesn't require the conscious mind. Not only would I not be surprised if meaning came from non-conscious processes, I would expect it.

Surely we're aware of meanings. But that doesn't mean that we "aware" them up.

Sure, but that doesn't change what I'm saying.

The question is, if consciousness is a result of computation, what sort of computation, and what sort of result?

We know that consciousness is the result of the electro-physical activity of the brain, which is a kind of physical computation, the sort of computation also done by oceans, stars, and hurricanes.

This physical computation includes all sorts of input in the form of stuff bouncing off our heads, like light and soundwaves and chemicals and such, triggering physical computations that are part of the mix inside the brain.

This physical computation involves making representations of all sorts of things, from concrete objects such as walls and doors to intangibles such as kin relationships.

Most of these are not involved in the specific processes (physical computations) that result in conscious awareness, or at least most of the time they aren't.

But what can we say about what kind of output consciousness is, what sort of output of the physical computations of the brain? Is it a real output, like the heat your computer emits? Or is it a symbolic output, like the idea represented by the string "Sustained wind speed: 120 mph"?

Since all symbolic outputs of a system of physical computation require an interpreter (and an encoder as well, though we've ignored that part), and since the symbolic outputs are nothing more than a subset of the real outputs -- namely, changes of state in a perceiver's brain -- a real-world phenomenon such as conscious awareness can only be a real output of the physical computations of the brain. It cannot be a symbolic output without positing a homunculus to interpret it.

The interaction of the animal with its environment has no impact on this set of facts about its brain. It would be true of a conscious animal or machine that had no input from the environment beyond the brain at all.

Consciousness is as real an outcome of the physical calculations of the brain as the heat coming out of your computer is a real outcome of its physical calculations.

And merely simulating real events does not create replicas of those events, it only changes the shape of the simulating machine.
 
Nor is it "the way it works" to berate someone for failing to get a point without even trying to say what that point might be.
Tu quoque logical fallacy.

I'm correcting you, not berating you. You are doing something wrong, and I'm calling you out for it. I get to do that.
What is "the point"?
It is exactly what I said it was explicitly in that post. You're insisting that what your opposition means by real is what you mean by it.
What is it that "my opposition" is saying that I'm getting wrong, and what is the correct expression of it so that I may get it right?
They are saying that the simulated entities are real in the same sense that you just agreed with me that they are real. They are not saying that if you create a simulation of a particle, you violate a law of physics. If you don't believe me, I dare you to point out a law of physics that their explanation breaks.
 
That is ONLY because it is easier to use processors to emulate real hardware. Using a processor instead of Opamps enables us to TWEAK the parameters and BEHAVIOR a lot easier than REWIRING different Resistors and Capacitors to change gain and the TRANSFER FUNCTIONS of the neural nodes while in the R&D stages.

But if we already know the Transfer Functions we can just build the system using ENTIRELY electrical components with NO SOFTWARE WHATSOEVER.

But to do that with the tremendous number of Neurons and interconnections needed to reach the required Critical Mass might be quite a daunting task.
It would be an engineering problem.

That is why it is often easier to just SIMULATE a NN on a computer.
Simulate or emulate?

BUT...BUT.... have a look at the LAST paragraph in my post to see why a normally functioning NN might not even be enough. :confused: :confused:
A 'normally functioning' NN? If neuronal cross-firing is relevant and/or necessary to the emulation, it too could be implemented.

Precisely.....:thumbsup:
We would program such a system indirectly, feeding it relevant input so that it could learn and organise itself over time, much as a biological brain does. The direct programming would be a layer of abstraction below, and would involve programming the way the system learns and organises itself. If the individual neurons were being emulated in software rather than hardware, their programming would be a level of abstraction below that.

ETA: :confused: it seems you have deleted your post after I quoted it :confused: Sorry.... but I like what you said ...why did you change your mind :(
Ennui. I couldn't see it making any difference, given the entrenched opinions here. But seeing as you responded, and I've got some spare time...

I can see that simulating or emulating a power station in a computer will not generate real power, but I think it's a red herring. Information processing is qualitatively different - it is functionally indifferent to abstraction. A physical computer can run an OS running a program that runs Conway's Game of Life, itself running a Universal Turing Machine (UTM) implementation, which can run arbitrary TM programs. A cellular automaton may be a clumsy way to implement a UTM, but the UTM it runs is a 'real' programmable computer - it may be virtual and several levels of abstraction from the processor hardware, but it can really process data.
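To make one of those abstraction layers concrete: the Game of Life itself is nothing but a state-transition rule, writable in a few lines (a minimal sketch; the actual UTM construction built on top of it is vastly more elaborate):

```python
from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Game of Life.
    live_cells: a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
# life_step(life_step(blinker)) == blinker
```

However many layers you stack on top of this rule, each layer is still 'really' processing data, which is the point being made above.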

Where I worked, we had a big Linux server box that emulated multiple virtual machines at once. You could configure each virtual machine to run a different OS emulation. Users could run their OS-specific applications on it as if on a native machine. They could even run their favourite OS on a suitable microprocessor emulation running on this server. Were these real Windows, DOS, Unix, etc., machines? Real microprocessors? No, they were virtual emulations - but they behaved exactly the same as the 'real' ones. They gave exactly the same outputs for any given inputs as the 'real' thing.

The way it looks to me at present is:

If, as the evidence suggests, a neuron is a sophisticated information processor, taking multiple input signal streams and outputting a result signal stream, we can, in theory (and probably in practice), emulate its substantive functionality with a neural processor (e.g. a chip like IBM's neural processor, but more sophisticated).

If, as the evidence suggests, brain function is a result of the signal processing of many neurons with multiple connections between them, we can, in theory, emulate brain function using multiple neural processors connected in a similar way (with appropriate cross-talk if necessary). [We would probably need to emulate the brain-body neural interface too, i.e. give it sensors and effectors].

If, as the evidence suggests, consciousness is a result of certain aspects of the brain function described above, then, in theory, the emulation could support consciousness.

Each neural processor can itself be emulated in software, and multiple neural processors and their interactions can be emulated in software; i.e. an entire subsystem of the brain can be replaced by a 'black box' subsystem emulation.

In theory, all the neural processors in a brain emulation, and their interactions, can be emulated in software using a single (very fast) processor, e.g. with multi-tasking, memory partitioning, and appropriate I/O from/to the sensor/effector net.

Given the above, it seems to follow that, in theory, consciousness could be supported on such a single processor software emulation of a brain.
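The single-processor step can be sketched very crudely. Here a handful of toy threshold 'neurons' (a deliberate oversimplification - real neuron models are far richer) are all updated serially in one loop, the way one fast processor would time-slice them:

```python
# Toy sketch of many "neurons" emulated serially on a single processor.
# The neuron model (weighted sum + threshold) is a deliberate
# oversimplification; the point is only the time-slicing structure.

class ToyNeuron:
    def __init__(self, weights, threshold):
        self.weights = weights        # one weight per input line
        self.threshold = threshold
        self.output = 0.0

    def step(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        self.output = 1.0 if total >= self.threshold else 0.0

def run_network(neurons, connections, external_input, ticks):
    """connections[i] lists which neurons feed neuron i (-1 = external)."""
    for _ in range(ticks):
        snapshot = [n.output for n in neurons]   # previous tick's outputs
        for i, neuron in enumerate(neurons):     # one neuron per time slice
            inputs = [external_input if j == -1 else snapshot[j]
                      for j in connections[i]]
            neuron.step(inputs)
    return [n.output for n in neurons]
```

For instance, with two neurons chained in series (the second fed by the first), the external signal propagates one neuron further per tick - the serial loop reproduces what parallel hardware would do, just spread out in time.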

I'm curious to know which of the above step(s) are considered problematic by those who don't agree, and why.
 
And it makes perfect sense to say that the human in this example "understands the meaning" of the transaction consciously and non-consciously (the non-conscious bits of the brain got the answer right, didn't they?)
Okay, then I'm confused about what you're talking about. So an information processor requires interpretation of results. But the non-conscious mind acts like a computer. And the non-conscious mind is capable of generating meaning, and interpreting results.

So what then is your objection to the computational theory of consciousness?
 
You're insisting that what your opposition means by real is what you mean by it.

Good God almighty.... Hence my reply to the post asking if we can dispense with "objective reality" as redundant.

If "my opposition" means something by "real" other than "real", then why don't they use a more apt term?

They are saying that the simulated entities are real in the same sense that you just agreed with me that they are real. They are not saying that if you create a simulation of a particle, you violate a law of physics. If you don't believe me, I dare you to point out a law of physics that their explanation breaks.

The problem seems to be that you don't take PixyMisa at his word.

To create an honest-to-God for real particle, all he needs is a computer and logic. At the end of the process, he'll have the computer and a new "real" particle that didn't exist before. And he'll do it with only enough energy to change the state of the computer.

That violates the laws of conservation.

And any scenario that contemplates the points of view of characters in a simulation, while denying that any such character -- not the pattern of machine activity, but the character it represents -- exists outside of the imagination of the reader of the simulation, but insisting instead that the character will actually have a conscious experience like we have... well, that scenario demands some sort of objectively real existence for the symbolic outputs of the simulation, which is an absurdity.

By definition, the symbolic outputs are physical states in the brain of an interpreter.

The brain states are real.

So are the other outputs of the system, such as the physical state of the simulator, and the heat it emits, and any light or sound the system emits, and so forth.

But the entities which are intended to be symbolized by simulation... they have no claim to any sort of non-imaginary existence.
 
Okay, then I'm confused about what you're talking about. So an information processor requires interpretation of results. But the non-conscious mind acts like a computer. And the non-conscious mind is capable of generating meaning, and interpreting results.

So what then is your objection to the computational theory of consciousness?

Right now, there is no computational theory of consciousness. There is a computational model of mind, but since we don't yet know how the brain consciouses, we don't have any clear theory about it, just a handful of facts and the nose end of some hypotheses.

But it's not the computational model I object to. It's the misuse of the model to conclude that consciousness can be programmed into a machine that's not in any other way built to be conscious, and that you could build a conscious machine by simulating the brain's activity using ropes, or that a machine could be conscious at any operating speed, or that you can run simulations of brains and this will create a conscious machine, and so on.

That's what I object to.
 
You're begging the question. Which process?

Whatever process makes the world take form when you wake up in the morning, makes you aware of nothing after you fall asleep, then makes dream worlds take form during the night.

That process.
 
What do you mean "name" one? Why would such a thing have a name?

Are you telling me that the physical activity of the simulator machine cannot correspond to any other hypothetical system at all? All the components must correspond and can only correspond to a watershed?

I know you believe some weird things, but surely you don't believe that.

I am telling you that the physical activity of the simulator machine, when it is simulating a watershed, is isomorphic with the physical activity of a watershed (or anything else that is isomorphic with the physical activity of a watershed).

If the physical activity of an automobile is not isomorphic with that of a watershed to begin with, then the activity of the simulator machine cannot be, by definition, isomorphic with the automobile either.

Just because an intelligent entity can find a mapping between the initial state of the simulation and the initial state of an automobile doesn't imply that the activity of the simulation and that of the automobile are isomorphic. That is where you are getting confused, I think. Yes, we can find a million things that might map to the initial state of the simulation -- so what? Once the simulation is running, those mappings instantly become invalid.

Except for one -- the mapping between the watershed and the simulation. That mapping remains valid the entire time, which is why the activity of the two systems is isomorphic.
 
Whatever process makes the world take form when you wake up in the morning,
Perception, but that occurs while I'm asleep as well. You could also be talking about the integration of perception into a world, but most of that happens outside of my awareness. Are you counting that process as consciousness?
makes you aware of nothing after you fall asleep,
I take it you mean to contrast consciousness with this state.
then makes dream worlds take form during the night.
I thought non-conscious processes make the dream world--I'm just aware of it while dreaming.
 
The problem seems to be that you don't take PixyMisa at his word.

To create an honest-to-God for real particle, all he needs is a computer and logic. At the end of the process, he'll have the computer and a new "real" particle that didn't exist before. And he'll do it with only enough energy to change the state of the computer.

Nope, you are completely misunderstanding the argument we are presenting.

We are creating an honest-to-god for real isomorphism to a real particle.

After which, any property of the real particle that survives the isomorphism is also present in the mirror system, honest-to-god for real.

For example, things like cause and effect. If the isomorphism is expressed as I(), and behavior A causes behavior B of the real particle, then I(A) will cause I(B) in the "computer and logic," and that is a real causal sequence. As real as it gets.
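That commuting property can be stated mechanically: a map I is an isomorphism of dynamics when mapping-then-stepping equals stepping-then-mapping for every state. A toy check (both 'systems' here are invented purely for illustration):

```python
# Toy illustration: two state-transition systems and a map I between them.
# System A: integers under s -> s + 1.
# System B: strings of 'x' under t -> t + 'x'.

def step_a(s):
    return s + 1

def step_b(t):
    return t + "x"

def I(s):
    """The isomorphism: map an A-state to the corresponding B-state."""
    return "x" * s

# Isomorphism check: stepping then mapping equals mapping then stepping.
assert all(I(step_a(s)) == step_b(I(s)) for s in range(10))
```

So if state 3 causes state 4 in system A, then I(3) = "xxx" causes I(4) = "xxxx" in system B - a real causal sequence in B's own terms, which is the sense of "as real as it gets" above.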
 
If we look at the possible state transitions of a physical system, the states are not fixed in the way that a switch can only be on or off. They are subdivisions of the physical condition of the system, and can be allocated in any arbitrary way. If that is so, we could take any physical system, consider its possible state transitions, and apply the same state partitioning to any other physical system, choosing the possible states so that one mirrors the other. Given the hugely rich changes of state in most reasonably large physical systems, almost every system can be considered to be simulating every other system, at some level of granularity.

This is not an especially interesting or useful observation, unless one has the idea that each of these simulations is a world in itself, regardless of observers.

To take a rock and "map" its state transitions such that they simulate a tornado requires mapping the entire transition space of the states in the tornado to specific states in the rock.

To take a computer simulation of a tornado and "map" its state transitions such that they simulate a tornado requires mapping only a single transition.

That is why we use computers to simulate tornadoes instead of using rocks to simulate tornadoes.
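The asymmetry can be sketched in code: a genuine simulation carries one fixed transition rule and one fixed decoding function that stay valid for every state it will ever reach, whereas a 'rock mapping' is just a lookup table stipulated after the fact, one entry per transition (all the names and numbers here are illustrative):

```python
# A genuine simulation: one fixed rule and one fixed decode() cover
# every state the simulation will ever reach.
def sim_step(state):
    # toy "tornado" dynamics: wind speed decays 10% per step
    return round(state * 0.9, 6)

def decode(state):
    return f"wind speed: {state} mph"

s = 120.0
for _ in range(3):
    s = sim_step(s)        # decode(s) stays meaningful at every step

# A "rock mapping": no rule connects rock states to tornado states, so
# the mapping is a table written down per transition, after the fact.
rock_states = ["r0", "r1", "r2", "r3"]                 # arbitrary states
tornado_trace = [120.0, 108.0, 97.2, 87.48]
rock_mapping = dict(zip(rock_states, tornado_trace))   # one entry each
```

The simulation's mapping is specified once and predicts future states; the rock's mapping has to be re-stipulated for every transition, which is the point of the contrast above.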
 
