
Explain consciousness to the layman.

Status
Not open for further replies.
And you don't have to actually have 2 of anything or 3 of anything to do it. In fact, there's no way to tell from the behavior of the computer that anything was "added" at all. The display of the symbol really is all that occurred, and all the machine is set up to do.
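The point above can be sketched in code: a machine can be set up so that pressing two keys lights up the "sum" symbol without anything we would call arithmetic occurring inside it. This is only a toy illustration under that assumption; the lookup table below is invented, not any real calculator's design.

```python
# Toy sketch: "addition" as nothing more than a state transition.
# The machine maps a pair of input symbols to an output symbol; no
# quantities are combined anywhere -- it is pure symbol display.
DISPLAY_TABLE = {
    ("2", "3"): "5",
    ("1", "1"): "2",
    ("4", "0"): "4",
}

def press_keys(a, b):
    """Return the symbol the machine is wired to display for this key pair."""
    return DISPLAY_TABLE[(a, b)]

print(press_keys("2", "3"))  # the display shows "5"
```

From the outside, this behaves exactly like addition over its tiny domain; whether "adding" happened is a matter of interpretation, which is the point being argued.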

You literally don't know what you are talking about, Piggy.

Read a book on computer architecture and get back to us when you know something.
 
The sound waves hit his ear and cause cascades of electrochemical reactions, which are in a pattern that sets off a rather narrow but strong chain of responses; based on a kind of neurological erosion, his mouth says "Five", and probably at some point as he hears himself say it, he becomes aware -- there is currently no theory of how -- not only that he's saying "Five" but that he means "five".

Which is where it gets interesting.

What is also interesting is that when two rocks tumble into your back yard, where there were previously three, nobody thinks that the yard, or the rocks, or the universe are doing addition in the same sense that we are. However, due to the way data is presented, we can't help thinking that the computer is. We can't help thinking that there's something there beyond what the falling rocks provide. Then an entire branch of study is created to post-justify the intuition.
 
No, it is true... what he's saying is that computers are very useful as information processors because they are so plastic and predictable and fast. They can be made to mimic all kinds of rule-dependent systems, in ways that most rocks can't.

Ditto for brains. After all, what are our imaginations if not simulations we play for ourselves?

It's certainly true that computers are tools which in association with human minds can perform calculation and analysis that the human minds can't do by themselves - hence the belief that they are doing it when the humans aren't involved.
 
Why should it imply that? We don't know the resources needed to maintain a simulation complex enough to support consciousness; we don't know whether it's even possible, let alone practical, whereas we know that what we believe to be the 'real' world has no trouble supporting consciousness.

It depends on which theory you follow. If you accept that we don't know what it is, precisely, in the brain that causes consciousness to emerge, then we can't quantify how much processing power is needed, even if we accept that some form of computer simulation will do the trick.

However, it's been asserted many times, on this thread and elsewhere, that the operation of neurons is fundamentally digital. It's hence possible to quantify the resources needed for a simulation which would have the same level of complexity. If we allow computer speeds and storage to continue to follow Moore's Law, we will very quickly arrive at a situation where a consciousness could be stored using orders of magnitude fewer resources than a human mind requires.
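The Moore's-Law step in this argument can be made explicit with a back-of-envelope calculation. Every figure below is an assumption chosen for illustration (a round ~1e14 synapses, ~100 bytes of state per synapse, ~10 TB of affordable storage today, capacity doubling every 2 years), not an established estimate.

```python
import math

# Back-of-envelope sketch of the Moore's-Law argument.
# All figures are illustrative assumptions, not measured values.
synapses = 1e14            # assumed synapse count for a human brain
bytes_per_synapse = 100    # assumed state needed per synapse
required_bytes = synapses * bytes_per_synapse  # ~1e16 bytes (10 PB)

current_bytes = 1e13       # assume ~10 TB of affordable storage today
doublings = math.log2(required_bytes / current_bytes)
years = doublings * 2      # assume capacity doubles every 2 years
print(f"~{years:.0f} years of doubling to reach {required_bytes:.0e} bytes")
```

Under these assumptions the gap closes in roughly two decades, which is the kind of timescale the "very quickly" claim rests on; different assumptions move the date, but not by orders of magnitude.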

I do not accept that this is proven or likely. However, the computational theory has implications - and I've given some of them.
 
I'm going with Kurzweil's view. Assuming no global catastrophes, intelligent, self-aware machines will become a reality and we will enter the Age of Spiritual Machines.
 
What is also interesting is that when two rocks tumble into your back yard, where there were previously three, nobody thinks that the yard, or the rocks, or the universe are doing addition in the same sense that we are.
What do you mean? This is most obviously a direct mapping to addition. When you say "nobody", are you simply projecting? Do you really just mean, "not westprog"?

What's really interesting is that you'd assert that nobody thinks this thing, rather than bothering to ask someone if they do.
However, due to the way data is presented, we can't help thinking that the computer is.
Given that you got your claim about nobody considering the rocks rolling into the back yard as addition wrong, it follows that this statement is also wrong.

The rest of your post I simply cannot understand.
 
Impulses coming from the eyes, way upstream, if they're in a pattern that's like patterns that tend to be reflected off of faces, will come to an electro-physical neural juncture where by virtue of their shape they will fit into a corridor which other kinds of shapes don't fit into.

So to speak.

I like to imagine it like water lapping on hardpack sand, with the sand and the water mutually shaping each other in intricate interlocking channels, but hey, to each their own.

But anyway, due to the biological shaping of evolution, molding critters like water molds stone, the brain has quite literally become neurally shaped so that patterns like those reflecting off faces (whether they are or not) end up triggering cascades down neural pathways which other patterns don't set off.

And the laws of physics are the only rules in operation.

What they do in this case is to set off a cascade of neural activity directly to areas that set off cascades that result in emotional responses.


Thanks Piggy for your posts over the last few days. I was heading in the same direction but hadn't thought about it enough before to explain it as thoroughly and methodically as you.

I have quoted your passage above because I have become aware of the process you describe going on in my own head over a number of years now and your analogy of water lapping on sand is going straight in my library.

I have an experience where I am aware, like when you imagine something in your mind's eye, of a piece of information coming into my brain and cascading along an established pathway. The cascade loses bits to memory-recognition receptors en route as its path becomes more refined. It ends up in a space rather like a library, where it goes to a precise position on a shelf and lands there.

I know this library backwards and through my filing system I can work out exactly what the piece of information is from where and on which shelf it is.

Actually this place is not a library; it is in fact a landscape or garden which I have shaped over my lifetime from the experience and knowledge I have come across and am inclined to work with. My preferred architecture of mind.
 
I'm going with Kurzweil's view. Assuming no global catastrophes, intelligent, self-aware machines will become a reality and we will enter the Age of Spiritual Machines.

Yes, this is the future (although I would omit 'spiritual', which is a loaded word).

In my previous post I mentioned the "architecture of mind", this is the future as was realised in the past by people who trained and disciplined their minds through meditation and contemplation.

Once the architecture of intelligent machines is established we (or perhaps they) will move on in leaps and bounds.
 
I'm going with Kurzweil's view. Assuming no global catastrophes, intelligent, self-aware machines will become a reality and we will enter the Age of Spiritual Machines.

At least you're an honest Kurzweil disciple, unlike the other computational metaphysicians on this forum.

The fact that you also believe in UFOs reveals the basis of computational metaphysics.
 
You keep repeating the same assertion as if that's all you need to do. If this world is indeed created as a quantum bubble in a lab, that is our reality. We don't get to choose what reality actually is. Any scientist will readily admit that his version of the universe is not what it actually is. There aren't multiple worlds depending on our level of access.

You're actually agreeing with me, here.

Sometimes I don't get what people are arguing.

Clearly.
 
A simulation is nothing more than a set of physical computations, which is to say some sort of matter and/or energy changing states from moment to moment.

Which is an obtuse way of saying that simulators are objects.
Sure.

When we run a simulation, what literally happens is that a machine changes state. In other words, an object does what it does.
Sure.

The only reason it's a "simulator" is because we've set up the way it changes its state (which is to say, the relative position of all its parts) so that a portion of those changes mimic the changes in another system with enough accuracy that we can fast-forward the changes of the simulator system and see what's going to happen to the other system -- which could be real like an engine or imaginary like a fantasy world.
Sure.

What's important here is that there's no information in either the simulator or in the thing we want to simulate to indicate that the other exists. You can examine each one all you want... you'll never, as a result, be able to figure that out, just like you can't look at a person and decide if they have a twin.
Sure.

Of course, we could use natural simulators. Like if I only cleaned my house when the moon was full, you could use the moon phase to predict how clean my house would be if you wanted to visit.

Notice that the moon is unchanged by our using it as an information processor, and there's no way anyone can know we've made it into one except by us telling them... which means that its status as such is purely imaginary.
Informational. Not imaginary.

There are a lot of things like that. For instance, a coaster. If it was made to be a coaster, or if someone uses it like one, then it's a coaster. If not, then it's not. There's no coaster molecule. It's an imaginary class of object.
It's a category. Categories are conceptual.

Anyway, we use computers as information processors because we can dictate how they change states, so we can mimic all sorts of stuff.

But here's the thing... the physical universe has no idea which objects we decide to call simulators or information processors.

Consider that.
Sure.

If you believe that the behavior of the physical computations in the simulator machine creates a new world because it mimics a world that its behavior corresponds to... and if we accept that the physical computations in the simulator machine are the same as all others elsewhere (to deny it is to cast aside all physics) -- which is to say, the machine obeys the same laws of physics as every other object in the universe -- then you must believe that the physical calculations (real behavior) of every group of particles in the universe generates an infinite number of new worlds.
No. That doesn't follow. First, it completely ignores the statistical mechanics of each system. The key factor of a computational system is that it can exhibit behaviour starkly different from the bulk properties of the material involved. Otherwise your brain would be nothing more than a sponge.

Second, you just pulled "infinite" out of your ass.

But think about that. Accept the one, you must accept the other, or live by your own metaphysics.
No, Piggy, you're just wrong.
 
Right, but keep in mind they're only "logic operations" in your mind. Objectively, they're physical computations, not symbolic ones.
Which is why I've been talking in terms of the transformation or translation of input streams to output streams.

Yes, but this is irrelevant.
I think it is relevant. You said "When you emulate any real thing in software... you've turned a real thing into an imaginary one". I'm saying this isn't necessarily so. When one microprocessor emulates another in software, it is just a language translation; whether it is done hard-coded on-chip or in RAM makes no difference. The same algorithm using the same instruction set can run on both processors identically. The same principle applies to one microprocessor emulating multiple others by time-slicing or other means.
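The "emulation is just translation" point can be sketched with a toy interpreter: a host program that steps through a guest instruction stream. The opcodes (LOAD, ADD, HALT) are invented here for illustration; they are not any real processor's instruction set.

```python
# Hedged sketch: a "host" interpreting a toy "guest" instruction set.
# The same guest program produces the same result whether it runs on
# native hardware or under this interpreter -- the translation layer
# changes nothing about the computation being expressed.
def run_guest(program):
    """Interpret a list of (opcode, arg) pairs; return the accumulator."""
    acc = 0
    for opcode, arg in program:
        if opcode == "LOAD":
            acc = arg
        elif opcode == "ADD":
            acc += arg
        elif opcode == "HALT":
            break
    return acc

program = [("LOAD", 2), ("ADD", 3), ("HALT", None)]
print(run_guest(program))  # 5
```

Whether this loop runs hard-coded on a chip or from RAM makes no difference to the guest program, which is the sense in which emulation is "just a language translation".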

But that is irrelevant to the point at issue, which is that you can't substitute an imaginary thing for a real one.
I'm trying but failing to see what imaginary thing you think I've substituted for what 'real' thing.

Physical computations are real.

The symbolic computations we associate with them are imaginary.

When you swapped the brain's neurons with other objects that performed a similar real function, you simply produced a replica brain.
Great - that's what I wanted to do.

When you attempted to introduce a simulation into the system, you swapped a real object for an object which is merely associated with the target object in your imagination (because the relationship between the simulator and the system intended to be simulated is imaginary) which won't work, for obvious reasons.
Exactly what simulation are you referring to? I talked about replacing neurons with functionally equivalent chips running neuron algorithms, which you accepted, and replacing groups of those chips with a single multi-tasking chip that emulates them - still running the original neuron algorithms on each virtual processor. What has become imaginary?

Of course, if the simulation exists in some sort of black box along with some unknown hardware, and the real inputs (physical calculations) going into it and coming out of it are identical to the physical system you want to replace, well, in that case you've simply created a very complicated replacement part.
Eh? The inputs going into such a black box are not physical calculations; they are just modulated signals, e.g. electrical pulses. That apart, would you accept such a black box replacement part for, say, the visual cortex (assuming we could handle all the necessary inputs and outputs)?

But this simply means that you're changing parts. You're not attempting to replace a physical computation with an entirely different physical computation that is merely imagined to correspond to one that isn't different.
Well of course. We want a brain that works. My point is that if you can replace all the biological neurons with neural processors running neuron algorithms, you can also virtualise all those processors, and run the same neuron algorithms on a single multi-tasking processor. [In practice, sufficiently powerful hardware would be a problem, but the killer would probably be the timing considerations].

So, in theory, we can have a brain emulation running on a single physical processor with memory, and apart from the I/O subsystem, everything else would be software or data.
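The virtualisation step described here - many neuron algorithms time-sliced on one processor - can be sketched as a round-robin loop. The leaky-integrate update rule and all its parameters below are toy assumptions for illustration, not a real neuron model.

```python
# Hedged sketch: instead of one physical processor per neuron, a single
# loop time-slices the same neuron update rule across many virtual neurons.
def step_neuron(state, inputs, leak=0.9, threshold=1.0):
    """One update: leak the membrane value, integrate inputs, fire if over threshold."""
    v = state * leak + sum(inputs)
    if v >= threshold:
        return 0.0, 1  # reset after firing, emit a spike
    return v, 0

def step_all(states, input_map):
    """Round-robin: run every virtual neuron's update on one processor."""
    outputs = {}
    for nid in states:
        states[nid], outputs[nid] = step_neuron(states[nid], input_map.get(nid, []))
    return outputs

states = {0: 0.0, 1: 0.0}
spikes = step_all(states, {0: [1.5], 1: [0.2]})
print(spikes)  # neuron 0 fires, neuron 1 does not
```

As noted above, the practical killer is timing: a real brain's neurons update concurrently, so the single processor must complete a full round before any neuron's next update is due.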

Remember, the brain isn't processing information. To think so is to mistake the information processing metaphor (the post office metaphor) for reality.

Yes, we say that an "image" is "recognized" as a "face" and "routed" through the amygdala, and so forth.

Yet none of that happens. Until we get to the processes which support conscious awareness (which are downstream) none of that language makes sense.
...
And if the information isn't in the impulses, then it makes no sense to declare that they somehow are an image of a face as far as the organ of the brain can tell at this point. They can't be. (Symbolic value requires an interpreter.)

Nor is there any structure capable of "recognizing" the impulse as "an image of a face" and therefore "routing" it anywhere.
Yes, I'm aware of all that, and I'm not claiming those things.

Well, you're either going to have to build your mechanical brain neuron by neuron, or you're going to have to sacrifice functionality, because each input-output point has an impact on how the brain functions.
Not entirely. As you yourself said, you can replace a subsystem with a black box where the real inputs going into it and coming out of it are identical to the physical system you want to replace; and there are certainly neural circuits at the lower levels (e.g. on the order of neural columns), whose well-defined contribution could be replaced by a component that isn't based on neuron emulation.

Keep in mind that there are folks right now who are building a brain simulation in precisely this way... neural column by neural column.
Theirs is a practical attempt using today's technology. I'm interested in what is theoretically possible.
 
Um, yeah, you are.

Well, let me rephrase that -- there hasn't been a single monist in any of the threads I have participated in that shares your view.

So that makes you like 1/50. I guess that isn't a big sample size, but I wager it is still statistically significant.

Piggy is not alone; I would agree that a simulation of the whole universe would not contain a consciousness.

This is because it is a coded interpretation from a limited perspective of the universe, housed in a chip on the table.

I am not saying consciousness cannot be generated synthetically, or that it cannot in principle be generated in some form digitally. But the digital form would be very unlike the way a human experiences it and would be confined to a virtual realm.

I remind you of what I said a while back, if you are simulating physical consciousness you will need to simulate a timespace in order to capture any kind of dimensional experience.
 
What he's trying to explain to Belz is simply that if there were any conscious entity generated by the activity of the simulating machine, it would live in the same world we do, not in some other world... it might just have a different sort of experience of that world.

But one thing's for sure... if the behavior of a machine caused any incidence of conscious awareness, and that machine were also running some sort of simulation of another world, the world that its behavior is intended to simulate would NOT be the world that the conscious entity would be aware of.

It cannot be, since there's no information in the system that could possibly identify such a world out of the infinite variety of possible worlds that could be described by associating symbolic values to its state changes.

And even if there were, there is no mechanism for making that imaginary world into the world where the conscious entity exists.

OK; let's suppose this conscious entity has been developed and brought to conscious awareness of our world through a link to a humanoid robot with basic movement and senses. The entity experiences the real world through the senses of this robot.

Suppose we now shut this robot in a plain empty room and then switch the entity's inputs and outputs to a simulated version of the robot used during its development. This simulated robot has been situated in a simulation of the room the real robot is in (complete with sensory information recorded from the real room).

After the switch-over, the conscious entity would see the same room, and its robotic body would move in it just as the real robotic body moved in the real room. But the entity is now in a simulated world.

If this simulated room is now made part of a larger simulation, like that of a FPS game, with the robotic body as the FP, and an advanced physics engine to match, the entity could find itself in a new world altogether. A virtual world, certainly, but all that the entity has access to.

OTOH, if, as seems more likely, the simulation was noticeably imperfect (e.g. the entity might be surprised to find that there was nothing outside the room, or perhaps that its robotic hand passed through the door handle like a hologram), the entity would realise that something strange was going on, and might even guess what had happened, but it would still be 'trapped' in the simulated world.

Wouldn't it?
 
What do you mean? This is most obviously a direct mapping to addition. When you say "nobody", are you simply projecting? Do you really just mean, "not westprog"?

What's really interesting is that you'd assert that nobody thinks this thing, rather than bothering to ask someone if they do.
Given that you got your claim about nobody considering the rocks rolling into the back yard as addition wrong, it follows that this statement is also wrong.

The rest of your post I simply cannot understand.

In that case - which I accept, of course - mathematical calculations are being performed universally.
 
OTOH, if, as seems more likely, the simulation was noticeably imperfect (e.g. the entity might be surprised to find that there was nothing outside the room, or perhaps that its robotic hand passed through the door handle like a hologram), the entity would realise that something strange was going on, and might even guess what had happened, but it would still be 'trapped' in the simulated world.

Wouldn't it?

We could do the same thing to a human being, as in The Matrix. It will just be creating an imaginary world, though - no different in principle from watching a film or reading a book. If someone (or something) believes it to be a true reflection of reality, then he has been deceived. The person or thing will of course have a set of rules relating to his local environment, but this is no different from the set of rules for crossing the road or being elected to Congress. They sit on top of the laws of nature; they don't override or replace them.
 
At least you're an honest Kurzweil disciple, unlike the other computational metaphysicians on this forum.

The fact that you also believe in UFOs reveals the basis of computational metaphysics.


Computational Metaphysics? Interesting phrase. I've never heard it before. I wouldn't call myself a "disciple" of Kurzweil however. That is just the name of his book ... I think to get his point across about not being able to tell the difference from a so-called spiritual perspective. On the issue of UFOs: Seeing one myself was pretty convincing. But I believed they existed long before that from the accounts of many other people.
 
At least you're an honest Kurzweil disciple, unlike the other computational metaphysicians on this forum.

The fact that you also believe in UFOs reveals the basis of computational metaphysics.

A belief in possible intelligent machines isn't the same thing as a belief in computational consciousness. Piggy has asserted many times that intelligent machines would be possible.

I am inclined to think that myself, but I won't assert that it's definitely possible until we know a bit more. I find that my reasons for thinking that conscious machines are conceptually able to be built are philosophical rather than entirely scientific.
 