Has consciousness been fully explained?

That has nothing to do with anything.

My question was whether you can come up with non-linear internal changes.

You respond with examples of the environment changing because a rock is sitting there.

That is not a non-linear change within the rock. That is a non-linear change within the environment.

Do you have any examples of non-linear changes within the rock?

Brilliant. I give an example, you cut it, and ask for an example.

Couldn't make it up. You delete the part where I answer the question, then ask the question.
 
Now remember, I am not asking whether the simulation is conscious; I am asking whether the simulation will produce the external behaviour that we observe in a conscious human. A lot of people here seem to avoid that question.

Will it, for example, claim it has a Sofia?

It seems to me that if the matter in our brains acts as physics says it should, then it will make that claim.

But that particular question does not skirt the issue I'm raising.

If you produce a perfect computer simulation of a human being, in the same sense that we might produce a perfect computer simulation of a space probe, then the simulation will "do" whatever a person does, in simulation-space (but not in objective physical reality).

If you produce a perfect working model of a human being, in the same sense that we might produce a perfect scale model of a Toyota Tacoma, then it will actually do all of the things that a human does in real space.

Either way, if reporting Sofia events is something a human would do, the perfect simulation or the perfect model will "do" this as well. But only in the latter case will there be an actual Sofia event going on in objective physical reality.

So you see, it comes right back to the issue I was posing.
 
Brilliant. I give an example, you cut it, and ask for an example.

Couldn't make it up. You delete the part where I answer the question, then ask the question.

Actually, I responded to your post before you EDITED it to include what you call "an example."

But no need for me to point fingers -- you still didn't provide an example.

Or do you consider
westprog said:
All the behaviours of the rock which are observable are a result of interaction with the environment. That's why they are observable. The same goes for the chip, and the cell. If they are entirely internal, we don't know they are happening.

Of course there are hugely complex behaviours inside the rock. The mere fact of heating up and cooling down involves exchanges of energy involving every molecule in the rock, in a fashion so complex that we couldn't begin to simulate it.
to be an example of a non-linear internal behavior?

I don't think you get it. I really don't. Let's make it as simple as it could possibly be.

You have a rock sitting on a hill. Something heats up the rock 1 degree. The rock is still sitting on the hill. Nothing else on the hill changes. You do this until you are 1 degree shy of the melting point of the rock. Each small increase in temperature results in ... almost no change at all.

Then something heats up the rock 1 more degree. The rock melts into lava -- your favorite liquid. That is a non-linear internal change: it is the same increase in temperature as before, but all of a sudden the rock melts. Why do I need to explain "non-linear" to you?

Furthermore, the lava flows down the hill until it cools. Everything in its path is drastically changed. All of this as the result of a 1 degree increase in the environment around the original rock. And that is the essence of computation -- a non-linear change in some system drastically changes not just how it behaves but also how the rest of the meta-system around it behaves. The rock melted, and a whole bunch of stuff changed drastically.

In fact, you could build a computer out of melting rocks, because a melting rock is kind of like a transistor -- small change in external state, big change in internal state.
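To make that concrete, here is a minimal sketch (mine, in Python; the melting point, bias heat, and input heat are all made-up numbers) of a melting-rock NAND gate. Since NAND is universal, threshold elements like this are in principle enough to build any logic circuit:

Code:
# Toy sketch of the melting-rock-as-transistor idea. All numbers
# (melting point, bias heat, input heat) are made-up illustrations.

MELTING_POINT = 1000  # degrees, arbitrary

def rock_is_melted(heat):
    # The non-linear element: below the threshold nothing happens;
    # one degree past it and the internal state flips completely.
    return heat >= MELTING_POINT

def nand(a, b):
    # Bias the rock to 900 degrees; each hot input adds 60 degrees.
    # Only when BOTH inputs are hot does the rock melt, so a rock
    # that stays solid (output 1) signals NAND.
    heat = 900 + 60 * a + 60 * b
    return 0 if rock_is_melted(heat) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    # NAND is universal: other gates are built from it.
    return not_(nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "AND:", and_(a, b))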

If you can't understand that, I am sorry for you. Well, not really ...
 
That is not the Turing test.

It is if there were a being in the pencil and paper simulation named Alan Turing, who came up with a test to see whether other entities in the simulation had the same kind of subjective experience as it did.

Try again.
 
Not if the tester was also in the paper and pencil simulation.
C:\GAMES\DUALIST> DUALIST
WELCOME TO DUALIST ADVENTURE! DO YOU WISH TO CONTINUE WHERE YOU LEFT OFF?
> Y
ONE MOMENT, LOADING GAME...
YOU ARE IN A WHITE ROOM. THERE IS A CHALLENGE TO YOUR WORLDVIEW HERE. THERE ARE EXITS LEADING TO IDEALISM AND NEUTRAL MONISM. WHAT DO YOU WANT TO DO?
> INVENTORY
YOU HAVE:
ELEVEN FALLACIES
A PEANUT
A DEAD MOUSE
> USE FALLACY
WHICH FALLACY DO YOU WISH TO USE?
> REIFICATION
THE REIFICATION FALLACY ONLY IRRITATES THE CHALLENGE. YOU HAVE TEN FALLACIES LEFT. WHAT DO YOU WISH TO DO NOW?
> USE FALLACY
WHICH FALLACY DO YOU WISH TO USE?
> ARGUMENT FROM INCREDULITY
WHAT, AGAIN?
> Y
YOU'RE NOT VERY GOOD AT THIS, ARE YOU?
> QUIT
DO YOU WISH TO SAVE YOUR GAME?
> ^C
EXITING GAME. YOUR SCORE IS: 0 OUT OF A POSSIBLE INFINITY. THANK YOU FOR PLAYING DUALIST ADVENTURE!
C:\GAMES\DUALIST> FORMAT C:
THIS WILL REMOVE ALL DATA ON THE DISK. ARE YOU SURE? [y/n]
> Y
 
But that particular question does not skirt the issue I'm raising.

If you produce a perfect computer simulation of a human being, in the same sense that we might produce a perfect computer simulation of a space probe, then the simulation will "do" whatever a person does, in simulation-space (but not in objective physical reality).

If you produce a perfect working model of a human being, in the same sense that we might produce a perfect scale model of a Toyota Tacoma, then it will actually do all of the things that a human does in real space.

Either way, if reporting Sofia events is something a human would do, the perfect simulation or the perfect model will "do" this as well. But only in the latter case will there be an actual Sofia event going on in objective physical reality.

So you see, it comes right back to the issue I was posing.
Actually, no; if you think I am insisting that the simulation has Sofia, you seem to have forgotten what you asked me.

Let us say the simulation doesn't have Sofia.

But it does very adamantly insist that it does have Sofia, and it even mocks anybody who suggests it doesn't.

So while the science behind the model does not explain the Sofia, it would clearly explain why we claim to have a Sofia.

Which suggests, as I said, that the reason you claim to have a Sofia has nothing to do with the fact that you have a Sofia.
 
I did mess that one up, but wasn't going to call myself out. I justified this on the grounds that the point was about neural activity rather than specific methods of obtaining information about that neural activity. It's absolutely true that all such noninvasive methods have a limited and variable resolution in providing information about the actual neural activity.

:cool:
 
I usually try to avoid using programming terminology wrt the brain - not because the analogies aren't apposite, but because it leads to the acceptance that they aren't analogies but actual descriptions. I sometimes fail in this, though, because programming analogies come so readily to hand.

I am not even sure that habitual patterns are analogous to 'programs'.
 
Only 25 pages to discover what most understand: consciousness has not been explained at all.

Some wish it away by definition.
 
I did mess that one up, but wasn't going to call myself out. I justified this on the grounds that the point was about neural activity rather than specific methods of obtaining information about that neural activity. It's absolutely true that all such noninvasive methods have a limited and variable resolution in providing information about the actual neural activity.
:cool:
Yeah I got sloppy, it happens :boxedin:
The point about the information on neural activity remains. It can be debated, but the empirical grounds remain even if the wrong piece of equipment was named at one point.
 
Actually, no; if you think I am insisting that the simulation has Sofia, you seem to have forgotten what you asked me.

Let us say the simulation doesn't have Sofia.

But it does very adamantly insist that it does have Sofia, and it even mocks anybody who suggests it doesn't.

So while the science behind the model does not explain the Sofia, it would clearly explain why we claim to have a Sofia.

Which suggests, as I said, that the reason you claim to have a Sofia has nothing to do with the fact that you have a Sofia.

I consider that there are too many hypotheticals for this (albeit interesting) idea to prove anything. It would certainly be interesting to produce an exact simulation of a person - but there's no guarantee that it would report Sofia, and even if it did, that would not conclusively prove that it was reporting Sofia for the same reason that a person reports Sofia. However, it would be possible to find out exactly why it was reporting Sofia.

The reason that people think they have Sofia is that they actually do have the experience of Sofia. There's a possibility that they report Sofia for a different reason, but that is only a possibility.
 
It's not possible to calculate the behaviour of three mutually gravitating bodies exactly; there is no general closed-form solution. It's only possible to approximate it using numerical methods.

http://en.wikipedia.org/wiki/Three-body_problem
Yes, this is probably the simplest feedback system in physics, and it can only be approximated.

Note that this doesn't apply to ideal fluids, as there are no forces acting on the particles between collisions. Of course the ideal case is only an approximation for real world particles.

It has a mild analogy with how we think: we can change our problem-solving method, in our head, in the middle of solving the problem, without starting over. In the three-body problem, the attractive forces keep changing from two directions as we solve for the motion.

Two-body problems are easier, as technically we can even treat a two-body problem as if only one particle was moving, via Galilean Relativity. Conservation ensures that simply adding back the subtracted motion, equally and opposite, gives the same answer. It's a little dumb to do it that way. However, these symmetries make two-body problems trivial to calculate. It also works on limited three-body cases, where the third mass is so small it has no attractive effect on the other two masses.
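To make the "numerical methods" point concrete, here is a minimal sketch of such an approximation: three mutually gravitating bodies stepped forward with a leapfrog integrator. The masses, starting positions, velocities, and step size are arbitrary choices for illustration:

Code:
import numpy as np

G = 1.0  # gravitational constant in arbitrary units

def accelerations(pos, masses):
    """Pairwise Newtonian gravitational acceleration on each body."""
    acc = np.zeros_like(pos)
    n = len(masses)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def step(pos, vel, masses, dt):
    """One leapfrog (kick-drift-kick) step; errors shrink as dt does."""
    vel = vel + 0.5 * dt * accelerations(pos, masses)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, masses)
    return pos, vel

# Three equal masses, arbitrary starting state.
masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
vel = np.array([[0.0, 0.3], [-0.3, -0.1], [0.3, -0.2]])

for _ in range(10000):
    pos, vel = step(pos, vel, masses, dt=1e-3)
print(pos)  # approximate positions at t = 10; no closed form exists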
 
Yes, this is probably the simplest feedback system in physics, and it can only be approximated.


But, it can be "approximated" to an arbitrary degree of precision. And since the 3-body system is also limited in precision (by, at the very least, quantum uncertainties in the initial state), the actual movement of actual bodies in space, and comparable analog systems, do not constitute hypercomputation. (The notion that a three-body system is even Turing-equivalent is highly speculative.)
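To illustrate the "arbitrary degree of precision" point, here is a small sketch (the orbit setup and step sizes are arbitrary choices): integrating a circular two-body orbit, where the exact solution is known, shows the position error shrinking roughly as the square of the step size:

Code:
import math

def orbit_error(dt, t_end=2 * math.pi):
    # Unit circular orbit around a unit mass at the origin (G = M = 1):
    # the exact position at time t is (cos t, sin t).
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    steps = int(round(t_end / dt))
    for _ in range(steps):
        # kick-drift-kick leapfrog step
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-x / r3)
        vy += 0.5 * dt * (-y / r3)
        x += dt * vx
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-x / r3)
        vy += 0.5 * dt * (-y / r3)
    t = steps * dt
    return math.hypot(x - math.cos(t), y - math.sin(t))

for dt in (1e-2, 5e-3, 2.5e-3):
    print(dt, orbit_error(dt))  # error falls roughly as dt**2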

It is very unlikely that approximation errors would be any significant barrier to a functionally equivalent simulation of a human brain. A real brain continues to generate conscious awareness even under conditions that measurably change the response characteristics of individual neurons; e.g. alcohol intoxication or hypoglycemia (within limits). Consequently there's no reason to expect that a simulation of a brain would fail to function because of the far more subtle effects of rounding error in the 100th decimal place.

Respectfully,
Myriad
 
But, it can be "approximated" to an arbitrary degree of precision. And since the 3-body system is also limited in precision (by, at the very least, quantum uncertainties in the initial state), the actual movement of actual bodies in space, and comparable analog systems, do not constitute hypercomputation. (The notion that a three-body system is even Turing-equivalent is highly speculative.)
Yes, considering the ideal gas law for instance, the approximations don't even have to include location information for any particular particle, and we still get arbitrarily precise state information about the ensemble. It's true that, even for classical systems, we can't prove it's fundamentally Turing in nature, only that the observables are Turing equivalent. I used equivalent in a slightly different sense than you, in that equivalent doesn't indicate what the system fundamentally is, only that the result is equivalent to the expectations of an ensemble of Turing machines.

It is very unlikely that approximation errors would be any significant barrier to a functionally equivalent simulation of a human brain. A real brain continues to generate conscious awareness even under conditions that measurably change the response characteristics of individual neurons; e.g. alcohol intoxication or hypoglycemia (within limits). Consequently there's no reason to expect that a simulation of a brain would fail to function because of the far more subtle effects of rounding error in the 100th decimal place.

Respectfully,
Myriad
True, approximation errors, as noted, are not even a barrier when no location information for any singular Turing device is included (gas law). Even in a quantum system, if you've ever heard of Exact Uncertainty, it was derived from the assumption that the uncertainty resulted from a quantum-level equivalent of Brownian motion, which allowed the Schroedinger equation to be derived directly from the Uncertainty Principle.
J. Phys. A 35 (2002) 3289-3303 www.iop.org/EJ
http://arxiv.org/abs/quant-ph/0102069
It remains that in QM quantization pertains to properties, not particles as such. So what meaning is taken from this remains suspect.

Yes, the real brain continues to generate conscious awareness when individual neurons are knocked out completely, much the same way perturbing individual molecules has no effect on the accuracy of the gas law. So you are correct that the accuracy of pen and paper is sufficient in principle, yet pen and paper doesn't have an explicit system to self-refer back to a unique subset of itself, or a generalization of itself. You can add this after the fact, but it wasn't an explicit part of the initial system, nor is the self-referring an explicit result of the initial calculation in adding it. That connection came from you. You've effectively proved you have the properties you're trying to recreate on paper.
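A toy sketch of that gas-law point (the particle count and perturbation size are arbitrary choices): an ensemble observable like pressure, proportional to the mean squared molecular speed, barely moves when one molecule is violently perturbed:

Code:
import random

random.seed(0)
N = 100_000
speeds_sq = [random.gauss(0, 1) ** 2 for _ in range(N)]  # ~ v_x^2 per molecule

pressure_before = sum(speeds_sq) / N   # pressure proportional to <v^2>
speeds_sq[0] *= 1000                   # violently perturb one molecule
pressure_after = sum(speeds_sq) / N

print(pressure_before, pressure_after)  # relative change on the order of 1/N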

Yet as even the quantum case involving Exact Uncertainty illustrates, Turing fundamentals are not a barrier to creating a machine with the property of consciousness, in a sense such as ours. Nor are the approximations. It's just that such a machine must maintain persistent causal connections not inherent in pen and paper. The pen and paper causal connection is maintained only through the pen operator, you, which thus demonstrates a set of your properties.

I'm looking at the kind of hardware needed to accomplish it. The standard artificial neurons come close, but the logical architecture is too restrictive in defining what constitutes inputs and outputs. What is the output of your idea, before you write it down or express it? Even your memory is not a recording, but an associative reconstruction from bits of data. That's why eyewitnesses are such horrible witnesses, and why false memories are produced so easily.
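As a minimal sketch of "associative reconstruction" (a standard Hopfield-style network, not anything proposed in this thread; the sizes and corruption level are arbitrary): a stored pattern is rebuilt from a corrupted cue rather than replayed like a recording:

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))          # three "memories"

# Hebbian storage: W accumulates pairwise correlations, no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
flip = rng.choice(n, size=16, replace=False)
cue[flip] *= -1                                      # corrupt 25% of the cue

state = cue
for _ in range(10):                                  # synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print((state == patterns[0]).mean())  # usually 1.0: pattern reconstructed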

We'll get there one day, and our present artificial-neuron hardware is not excessively deficient, but we won't get there with the present methodology of thinking about it.
 
Yeah I got sloppy, it happens :boxedin:
The point about the information on neural activity remains. It can be debated, but the empirical grounds remain even if the wrong piece of equipment was named at one point.

Sure, I just thought I had missed something. I was not trying to score points.
 
