
Explain consciousness to the layman.

Status
Not open for further replies.
Eh? The inputs going into such a black box are not physical calculations, they are just modulated signals, e.g. electrical pulses. That apart, would you accept such a black box replacement part for, say, the visual cortex (assuming we could handle all the necessary inputs and outputs)?

Electrical pulses are physical calculations. Anything that changes state according to the laws of physics is a physical calculation, because those laws are the rules, the universe provides something which can have a state, and time requires (or is) the changing of those states. Which makes the physical universe a computer.

A black box replacement for visual cortex is a trickier proposition than it might seem at first.

If that black box is to accept the same inputs, which is to say the same physical computations... and note that we cannot snap our fingers here and substitute logical computations... and emit the same physical computations as outputs... and if this is to be done in a densely tangled mesh of finely interconnected neural tissue that's warped into all kinds of shapes that we know have tremendous impacts on the brain's behavior if disrupted (not to mention the almost entirely unknown role of brainwide electrical waves)... well, it becomes difficult to see what you might put into that box if not the cortex itself.
 
Could you please tell me what "real addition" is?

Already have.

It's like when I bring my 3 dogs over to your yard and let them loose with your 2 dogs.

It's precisely because things get added to and subtracted from groups in the real world that we have the kind of brain which thinks in terms of abstract addition and subtraction... and goes on to invent machines to help it do that incredibly fast.

Our brains evolved to think in those terms because our environment shaped them that way. Literally.

Our brains evolved equipment to help us respond to differences in number, because brains which could do this had a better chance of making more brains like them.

Certain instances of changes in the number of a group can be described as addition. That's real addition.

Like everything else we know of in this world, it can be represented symbolically, in any number of ways.
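The point that one real event admits many interchangeable symbolic representations can be sketched in code. Everything here (the names, the three representations) is an illustrative assumption, not anything from the discussion itself:

```python
# One real event -- 3 dogs joining 2 dogs in a yard -- represented
# symbolically in several interchangeable ways. All of these are
# representations; none of them is the event itself.

yard = ["rex", "fido"]                # my 2 dogs (names are made up)
visitors = ["spot", "duke", "bella"]  # your 3 dogs

yard_after = yard + visitors          # representation 1: list concatenation
count = len(yard) + len(visitors)     # representation 2: integer arithmetic
tally = "||" + "|||"                  # representation 3: tally marks

# All three agree on the outcome of the real event: 5 dogs.
print(len(yard_after), count, len(tally))
```

Each representation tracks the same change in the number of the group, which is the sense of "real addition" being argued for above.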
 
It might be like a computer, but that doesn't mean that it would be like a computer program.
Not sure I understand what you mean - the computer is generally the hardware; processor, RAM, I/O ports, wiring, etc. The computer program is the software that runs on that hardware (or that the hardware executes if you prefer). The software (computer program) in this case would be the code for the virtual neurons and the code that manages the connections between their inputs and outputs and the overall timing & synchronisation, etc.

In particular, the type of computer program typically specified for artificial intelligence.
I didn't know there was a type of program typically specified for AI programs - what type of program would that be, in particular?

A man with an artificial limb, for a start.
Oh. Well, a computer consciousness wouldn't be a cyborg unless you gave it some biological body parts, and why would one do that?
 
So, in theory, we can have a brain emulation running on a single physical processor with memory, and apart from the I/O subsystem, everything else would be software or data.

What exactly do you think the I and O of such a system would be?
 
I took you to mean that at some point a computer simulation of a physical entity was taking the place of that entity within a system.
I meant that a physical neuron or neural processor was being emulated in software - I've explained all this in detail in earlier posts.
 
Not entirely. As you yourself said, you can replace a subsystem with a black box whose real inputs and outputs are identical to those of the physical system you want to replace; and there are certainly neural circuits at the lower levels (e.g. on the order of neural columns) whose well-defined contribution could be replaced by a component that isn't based on neuron emulation.

Well, you're right, and I'm not gonna argue with the fellows trying to build it either.
 
Already have.

It's like when I bring my 3 dogs over to your yard and let them loose with your 2 dogs.
This is incomplete. Which of the following is real addition?
  • When you bring your 3 dogs over to my yard and let them loose with my 2 dogs, and there winds up being 5 dogs?
  • When your dogs enter my yard?
  • The relationship between the quantity of your dogs and the quantity of my dogs with the quantity of the dogs combined?
  • All of these? Some of them? Something else?

Certain instances of changes in the number of a group can be described as addition. That's real addition.
I think this is a bunch of hoopla. The fact that your bringing three dogs into my yard when I have two there results in five dogs is addition. Anything which correctly describes that your three dogs plus my two dogs wind up being five dogs describes the exact same situation. It's the same extension.
Like everything else we know of in this world, it can be represented symbolically, in any number of ways.
So what makes one the "real representation" and the others not?
 
piggy said:
what he's trying to explain to belz is simply that if there were any conscious entity generated by the activity of the simulating machine, it would live in the same world we do, not in some other world... It might just have a different sort of experience of that world.

But one thing's for sure... If the behavior of a machine caused any incidence of conscious awareness, and that machine were also running some sort of simulation of another world, the world that its behavior is intended to simulate would not be the world that the conscious entity would be aware of.

It cannot be, since there's no information in the system that could possibly identify such a world out of the infinite variety of possible worlds that could be described by associating symbolic values to its state changes.

And even if there were, there is no mechanism for making that imaginary world into the world where the conscious entity exists.
OK; let's suppose this conscious entity has been developed and brought to conscious awareness of our world through a link to a humanoid robot with basic movement and senses. The entity experiences the real world through the senses of this robot.

I have to stop you here.

First of all, we're going to dispense with the robot, simply because the word "robot" has a lot of unnecessary baggage which (I can promise you) would hinder discussion.

So let's just call that thing a "machine".

Let's suppose this conscious entity has been developed and brought to conscious awareness of our world through a link to a machine with basic movement and senses. The entity experiences the real world through the senses of this machine.

OK, we've solved one problem, but there are others.

The largest of these is that you propose the existence of a "conscious entity" that has been "brought to conscious awareness of our world" by some sort of "link" with a machine.

So you've just whipped this "conscious entity" up out of even less than thin air (no energy or mass appear to be involved) and then proceed to make it aware of "our world"... at which point you also invent "its world" necessarily by contrast... by means of an unspecified "link" with a machine of nondescript construction.

This is just the first of the problems.
 
Electrical pulses are physical calculations. Anything that changes state according to the laws of physics is a physical calculation, because those laws are the rules, the universe provides something which can have a state, and time requires (or is) the changing of those states. Which makes the physical universe a computer.
OK, I see what you're saying, but calling every event in the universe a calculation is a hopeless obfuscation in this context. I'm not going to play that game.

A black box replacement for visual cortex is a trickier proposition than it might seem at first.
I chose it for its complexity :)

If that black box is to accept the same inputs, which is to say the same physical computations... and note that we cannot snap our fingers here and substitute logical computations... and emit the same physical computations as outputs... and if this is to be done in a densely tangled mesh of finely interconnected neural tissue that's warped into all kinds of shapes that we know have tremendous impacts on the brain's behavior if disrupted (not to mention the almost entirely unknown role of brainwide electrical waves)... well, it becomes difficult to see what you might put into that box if not the cortex itself.
Well that's why it's a thought experiment - we don't have to worry about all the practical difficulties. But just for fun, the box doesn't have to sit inside the skull - it can be arbitrarily large - and it can be connected via fine probes. The signals between neurons are very slow compared to electrical currents in wires.

So, assuming the practicalities could be overcome, do you think it would integrate with functional transparency? Would the patient see with the black box installed?

If so, how much brain functionality do you think could be replaced with black boxes in a similar way? What about extending the scope of the original black box to replace more of the brain?

What about replacing the whole brain with a black box that takes sensory input and outputs motor activities just like you or me? :D

I've rushed ahead here - I'm curious to know where the line should be drawn. If we can replace whole subsystems with black boxes that, for the same inputs, give the same outputs as the biological subsystems, how much can we replace without 'breaking' consciousness?

My bet is that only a limited number of subsystems could be replaced that way.
 
That is, again, and unsurprisingly, irrelevant.

The "real" laws of physics may not change, but as far as the simulated entity can ever perceive, its laws of physics are different and, from its point of view, they constitute its reality. Of course the "real" world doesn't change, but why would you mention that in the first place, unless you completely fail to understand what I'm talking about?

What I am denying is that the situation is any different for any (conscious) entity. There's the world as perceived, and reality. I'm saying that the situation is no different for the entity in the computer. The distinction between the artificial consciousness and the human being is of degree, not kind.

I still don't know if you agree or disagree with this. It might be a useful fiction to refer to the "virtual world" in which the artificial consciousness lives, but it is just a fiction. There are people proposing that the virtual world has an existence of its own. I don't accept this.
 
First of all, the number of possibly simulated systems must be infinite, because every combination of particles could be simulating at least one complete system, and an infinite number of larger but incompletely represented systems.

It may be a finite number, due to quantisation, but if so, it's such an enormously large number that it might as well be infinite.

And your use of "computational system" here is what's getting you into trouble.

You aren't distinguishing between physical and symbolic computations, which at this stage of the conversation is necessary. (Well, it always is, really, but especially now.)
 
Piggy, you never responded to this post of mine.

Sorry, I don't know why I didn't see it....

I would like you to explain how, in I(O), I(A) causes I(B) without, in S(O), S(A) causing S(B). That is, how can an information overlay contribute to causality?

My point is that in all cases it IS actual machine parts that are involved in the causal sequence, even if a given observer needs an informational overlay to see it.

But yes, I am speaking about the machine parts. The transistors of the computer.

I'm not claiming that an informational overlay contributes to causality.

I'm observing that it doesn't.

If it did, it would muck things up. It's possible to do that, you know. If the parts (for instance, research subjects) do incorporate the informational overlay (for example, come to find out exactly what's being measured) then you've introduced such a causality, and it can complicate things to the point of having to scrap it all.

I agree with you that O(A)->O(B) in both systems, or else it's not a simulation.

But it's important to keep in mind that we're talking about discrete systems, one of which might be purely imaginary to begin with (if you're simulating a fantasy world, for instance) or in other words a state of someone's brain.

We can think of these two systems as a pair of identical twins, Pete and Repeat, and Repeat has been trained to behave exactly like Pete, even when they're apart.

As long as Pete doesn't go through anything that changes the way he acts, we'll be able to look at Repeat and know what Pete is doing.

But let's take a look at that claim.

On the surface, it seems like we're claiming a real connection between Pete and Repeat. But this doesn't exist. Pete and Repeat are each behaving according to their own physics, they've just been set off into similar patterns.

The real connection is in my brain, which knows that Repeat and Pete are behaving in sync in one of many possible ways, and that therefore I can look at Repeat and know something about Pete.

And I do mean real. It exists as a physical shape in my brain.

In fact, this is what enables me to look at Pete and Repeat and conclude that something's gone wrong with Repeat's behavior.

But without that bit of knowledge that can only exist in the brain of the programmer and user -- which is to say, the knowledge that Repeat is supposed to act like Pete, and not the other way around, or that it's all just a freakish coincidence, or that they're both acting like someone else -- the similarities between certain aspects of these two systems aren't anything but that.

This is why Repeat (or anything else) can only be an information processor if used as one, not by virtue of physical design. "Info processor" is an imaginary rather than real class of object, which means it's one if people intend it to be one or use it as one.

So we're right back to the brain of the programmer and user. That's the only location of the connection between the two systems which makes one a simulation of the other.

So now answer this:

If there is a simulation of a neural network running on a computer, and in the real neural network a neuron fires due to the integration of signals from other neurons, is there not an isomorphic causal sequence that takes place in the transistors of the computer? And isn't that causal sequence in the transistors of the computer just as "real" as the corresponding sequence in the actual neural network? Meaning, isn't something like "voltage from transistor X caused transistor Y to switch" just as "real" as whatever happens in the neural network?

Yeah, the voltage changes are as real as the neural firings.

But the similarity between these changes is only significant if you know that one is supposed to symbolize the other.

That's so important, it bears repeating:

The similarity between these changes is only significant if you know that one is supposed to symbolize the other.

And because the symbolic changes, which is to say the logical computations or "information processing", depend on this understanding of the similarity, it makes no sense to talk of info processing (in this sense) or logical computations without an observer.

It only makes sense to talk of physical computations without an observer.

And it only makes sense to talk of logical computations as imaginary (which is to say, changes in brain states).
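The observer-dependence being argued here can be sketched concretely: the same sequence of state changes reads as two different symbolic computations under two different mappings, and nothing in the states themselves picks one reading over the other. The mappings below are illustrative assumptions:

```python
# The same "physical" state changes read as different symbolic
# computations depending on the mapping an observer brings to them.

states = [0b00, 0b01, 0b10, 0b11]  # one fixed sequence of state changes

# Observer 1 reads each state as a binary count.
as_counting = [s for s in states]

# Observer 2 maps the very same states onto weather symbols.
weather = {0b00: "sun", 0b01: "rain", 0b10: "snow", 0b11: "fog"}
as_weather = [weather[s] for s in states]

print(as_counting)  # one interpretation of the sequence
print(as_weather)   # a different interpretation of the same sequence
# Nothing in `states` itself determines which reading is "the" computation.
```

The choice of mapping lives in the observer, which is the sense in which the logical computation is said above to require one.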
 
http://en.wikipedia.org/wiki/Arithmetic_logic_unit

The digital logic that occurs when performing an arithmetic operation on two bitfields is very specific -- it is a dedicated portion of the hardware. Not only is the ALU different from the rest of the hardware, but the portions of the ALU responsible for different arithmetic operations are themselves very distinct.

Any intelligent entity can look at the way the gates are set up and see that they correspond to specific arithmetic operations -- addition, multiplication, and division.

Arithmetic isn't some "generalized" computation that happens in a "generalized" computer part.
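The specific gate arrangement described above can be sketched as a ripple-carry adder built from AND, OR, and XOR, the standard textbook construction for the addition portion of an ALU. This is an illustrative sketch, not any particular chip's circuit:

```python
# A ripple-carry adder built from logic gates, the kind of dedicated
# arrangement the post describes for the addition part of an ALU.

def full_adder(a, b, cin):
    """One bit position: two XORs, two ANDs, one OR."""
    s = a ^ b ^ cin                   # sum bit
    cout = (a & b) | (cin & (a ^ b))  # carry out
    return s, cout

def add4(x, y):
    """Add two 4-bit numbers by rippling the carry through four full adders."""
    carry, bits = 0, []
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        bits.append(s)
    return sum(b << i for i, b in enumerate(bits)) + (carry << 4)

print(add4(6, 7))  # 13
```

Looking at the gate structure, one can read off that it implements addition, which is the "correspondence" at issue in the reply below.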

So you agree with me that the correspondence must exist in the mind of an observer.

Thank you.
 
The "real" laws of physics may not change, but as far as the simulated entity can ever perceive, its laws of physics are different and, from its point of view, they constitute its reality.

Stop!

At this point you have already assumed that there is a simulated entity which has perceptions and a point of view on reality.

But the entire point of this conversation is to determine if any such thing could exist.

You can't argue that something could exist by assuming it exists and asking what the world looks like from its eyes.
 
I meant that a physical neuron or neural processor was being emulated in software - I've explained all this in detail in earlier posts.

The details don't matter.

If you're replacing a physical neuron, you need to replace it with something that has a real (physical) output that performs the same work as the physical output of the original physical system.

Note that I don't refer to logic here, but work. Logic alone gets you squat. Logic applied to the wrong materials, or in the wrong way, gets you something, but not what you want.

The problem with emulating a physical neuron in software is that your software emulation is useless... that is, unless the physical apparatus running it can also take the physical input of a neuron and convert it into the physical functional equivalent of the output of a neuron, which is to say, the kind of physical output the next neuron will accept.

And if it can do that, it doesn't matter whether or not it also bothers to run a simulation of the neuron or anything else. It can skip that step.
 
For anyone who thinks logic is going to replace physical work, try to logic your way through your front door instead of using your key.

No kidding, not being metaphorical, try it. Seriously, try to replace real work with logic alone.

I guarantee you the only solution you come up with to replace that key is going to involve some real work.

Not a representation of that work expressed via another medium, but the actual work itself.

Perhaps a representation of the work will open a representation of the door for a representation of you in the world of the representation.

But if that happy world exists, it still will not have opened the door for you.
 
OK, I see what you're saying, but calling every event in the universe a calculation is a hopeless obfuscation in this context. I'm not going to play that game.

This is not a game.

Some of the core problems here arise precisely from conflating the physical computations with the symbolic ones.

I'm not bringing this up to try to create a smoke screen.

I'm bringing it up because staying clear on the difference between these two meanings is absolutely crucial to not making errors in logic about all this.
 
Well that's why it's a thought experiment - we don't have to worry about all the practical difficulties.

No, I'm sorry, but that's not correct.

If you conduct a thought experiment about a ladder, the ladder shouldn't be covered with scales and eating bugs in a lake somewhere.

These things which you call "practical difficulties" are real features of the system which are known by direct observation and experiment to affect the behavior of the system.

You don't just get to ignore them.
 
What about replacing the whole brain with a black box that takes sensory input and outputs motor activities just like you or me?

Uh huh.

And what about replacing my whole truck with something else that does exactly what my truck does?
 