
My take on why the study of consciousness may not be so simple after all

The universe does not make a connection between this and the other marks on the paper, only I can make the connection.

The universe does not save the brain state or relate it to previous brain states, which would be meaningless outside my head in any case.

But you are part of the universe... are you not?

So what is this fundamental difference between you doing the calculations and a neuron doing the calculations that you are concerned about?

I know of no such fundamental difference -- both you and a neuron are entirely physical aggregations of particles. What is the difference?
 
Well, my position is predicated on the assumption that it is actually possible - which is not necessarily the case.


I don't see how this can make any difference.


That's actually trivial.

There are states in the data that are produced by self-reference. If we are playing back the data in reverse (somehow), those states are still present. Therefore the self-reference is still working, therefore consciousness is active exactly as before. That sequence of state transitions doesn't suddenly stop meaning consciousness because the way we are generating them changes. That's the Church-Turing thesis.

Alright I think I see the problem -- this is another mapping misunderstanding.

Because I have been assuming that you are talking about getting the states from some kind of oracle, already generated somehow, and then simply playing them backwards in a naive fashion.

But I see now that there would be no way to get the states to begin with unless there is some mapping from each state to the next -- even for an oracle.

So somewhere, on some level, the mapping exists.

Correct?
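As an aside, the point about the mapping can be sketched in a few lines of Python (a toy illustration; the transition rule here is made up): even if a recorded trace of states can be replayed backwards without any mapping, the trace could not have been generated without one.

```python
# Toy state-transition system. Each state is derived from the
# previous one by an explicit mapping (the transition function).
def step(state):
    # Made-up deterministic rule; any mapping would do.
    return (state * 31 + 7) % 1000

def generate_trace(initial, n):
    """Produce n successive states -- this requires the mapping."""
    trace = [initial]
    for _ in range(n - 1):
        trace.append(step(trace[-1]))
    return trace

trace = generate_trace(42, 5)

# Replaying the trace backwards needs no mapping at all...
replay = list(reversed(trace))

# ...but the trace itself only exists because of step():
assert all(trace[i + 1] == step(trace[i]) for i in range(len(trace) - 1))
```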
 
Well, what I'm arguing is that the process in question differs more along the lines of "type" than "complexity". After giving this discussion a lot of thought, I've come to the conclusion that what's at issue here isn't merely a matter of information being computed, but the energetic form of the "stuff" in question.

What in the blue hell does "energy form" mean?

The simulated water in your example has many of the same operational properties as H2O. But, physically speaking, the simulated water is completely different from drinkable water. It does not, and cannot, serve as a stand-in for a body of H2O because they do not have the same physical make-up, and therefore have completely different physical properties. The same goes for the simulated dynamo, mentioned earlier, for exactly the same reasons.

And yet intelligence works in computers, simulated or not, precisely like it does in a human.

The same also goes for biological processes like consciousness.

Speculation.
 
Up until a few years ago I would have completely agreed with this statement. But, after giving the subject more serious thought, I eventually came to the conclusion that there must be some basic underlying physics to consciousness.

In other words you made up the answer. No offense, but this is unconvincing.
 
Maybe in about ten years we will be able to model a person playing chess.

It's a simple model, I'll admit, but just because the _rest_ of what goes on in a person's head doesn't happen in the computer while it's playing chess is no reason to assume that the model is not sufficient to simulate a person playing chess. After all, Kasparov was beaten more than once, so clearly Deep Blue was playing chess alright. Now, if you're saying that it was playing chess in a completely different way, I'd like to know how you think a person plays chess.
 
It's a simple model, I'll admit, but just because the _rest_ of what goes on in a person's head doesn't happen in the computer while it's playing chess is no reason to assume that the model is not sufficient to simulate a person playing chess. After all, Kasparov was beaten more than once, so clearly Deep Blue was playing chess alright. Now, if you're saying that it was playing chess in a completely different way, I'd like to know how you think a person plays chess.

Well, human chess expertise is rather well understood, and the methods by which humans make chess decisions are entirely different from what Deep Blue used.

That, of course, doesn't mean that Robin's mental model of chess decisions is at all accurate. And of course it's well-known that there are lots of different ways of implementing solutions to the same problem or even the same algorithms. A Turing machine is not a RAM which is not a game of life, despite the fact that they can all emulate each other.
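To make the "different implementations" point concrete, here is a minimal Python sketch (function names are mine): two mechanically different procedures that compute exactly the same function.

```python
def fib_recursive(n):
    """Naive recursion: mirrors the mathematical definition."""
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """Iteration: a completely different internal mechanism."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Identical input/output behaviour, radically different processes:
assert all(fib_recursive(n) == fib_iterative(n) for n in range(15))
```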
 
Alright I think I see the problem -- this is another mapping misunderstanding.

Because I have been assuming that you are talking about getting the states from some kind of oracle, already generated somehow, and then simply playing them backwards in a naive fashion.

But I see now that there would be no way to get the states to begin with unless there is some mapping from each state to the next -- even for an oracle.

So somewhere, on some level, the mapping exists.

Correct?
Yep. :)
 
The simulated water in your example has many of the same operational properties as H2O. But, physically speaking, the simulated water is completely different from drinkable water. It does not, and cannot, serve as a stand-in for a body of H2O because they do not have the same physical make-up, and therefore have completely different physical properties. The same goes for the simulated dynamo, mentioned earlier, for exactly the same reasons.
Simulated water acts like water in the simulation. This gives us information about how water acts in the real world.

The same also goes for biological processes like consciousness. Asserting that producing consciousness is simply a matter of flipping a particular pattern of switches in a Turing machine is like claiming that an ad hoc simulation of solar panels is an efficacious instance of photosynthesis.
Simulated consciousness acts like consciousness in the simulation. Since consciousness is defined by how it processes information, it is identical to consciousness in the real world.
 
I guess the simplest way to put it is this: Information processing is ubiquitous throughout our physiology and almost all of it is unconscious. Therefore the defining feature of consciousness is not information processing. It's the active capacity to be aware of information as having subjective qualities.
Correct.

That is self-reference.

That's exactly the point I've been making all along.
 
What in the blue hell does "energy form" mean?

Meaning that it has physical properties [like electricity or H2O] and that reproducing it is not simply a matter of abstracting it as a simulation.


And yet intelligence works in computers, simulated or not, precisely like it does in a human.

Intelligence =/= consciousness. A system can carry out intelligent operations without being conscious.


AkuManiMani said:
The same also goes for biological processes like consciousness.

Speculation.

It's no more speculative than saying that a computer simulation of photosynthesis is not photosynthesis.
 
AkuManiMani said:
The simulated water in your example has many of the same operational properties as H2O. But, physically speaking, the simulated water is completely different from drinkable water. It does not, and cannot, serve as a stand-in for a body of H2O because they do not have the same physical make-up, and therefore have completely different physical properties. The same goes for the simulated dynamo, mentioned earlier, for exactly the same reasons.

Simulated water acts like water in the simulation. This gives us information about how water acts in the real world.

Agreed. That's the point of simulations.

AkuManiMani said:
The same also goes for biological processes like consciousness. Asserting that producing consciousness is simply a matter of flipping a particular pattern of switches in a Turing machine is like claiming that an ad hoc simulation of solar panels is an efficacious instance of photosynthesis.

Simulated consciousness acts like consciousness in the simulation. Since consciousness is defined by how it processes information, it is identical to consciousness in the real world.

Before we continue along this line of discussion I have to ask: Do you believe that a simulation of a physical phenomenon is identical to said phenomenon? Why or why not?


AkuManiMani said:
I guess the simplest way to put it is this: Information processing is ubiquitous throughout our physiology and almost all of it is unconscious. Therefore the defining feature of consciousness is not information processing. It's the active capacity to be aware of information as having subjective qualities.

Correct.

That is self-reference.

That's exactly the point I've been making all along.

We have some overlap on many points but, as much as I would like to leave it at that, this is where our positions diverge on this issue. I have to ask you a series of questions to get some insight into your understanding of the topic before we continue:

[1] - In your opinion, how does it follow that self-referential information processing makes a subject aware of the information being processed, while other modes of information processing do not?

[2] - I think it would be helpful for you to define what you mean by the "self" in self-reference. Does "self" simply constitute the information loop or the system processing said information? In other words, in a system performing SRIP what, in your view, is the conscious subject?

[3] - Earlier you mentioned that in the brain there is a "consciousness that is you" but there are other "autonomous conscious processes" linked to it that, for some reason or another, are not a part of our awareness. What makes the "consciousness that is you" the point of awareness in an individual yet not the others?

[4] - Do you think that the physical composition of the system in question has any bearing on how received information is experienced or if it is experienced at all?

[5] - Do you think that the SRIP model of consciousness provides a means of predicting what an experience would -be like- to a subject, given particular stimuli?

[6] - Lastly: Earlier you voiced strong objections to the very notion that there are "qualities". To me [and virtually anyone else] this assertion is as absurd as proclaiming that there is no such thing as "quantity". What brought you to this rather [IMO] baffling conclusion?
 
HypnoPsi said:
Pixy's waving around the words "self-referencing" like they're magic

In a sense, they are magic. Self-referencing systems are qualitatively different from non-self-referencing systems.
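For what it's worth, self-reference does have a mechanically odd flavour; the classic toy demonstration is a quine, a program whose output is its own source text (a standard textbook construction, not anyone's argument from this thread):

```python
# The quine trick: a template applied to its own representation
# reproduces the complete program text.
template = 's = %r\nprint(s %% s)'
program = 's = ' + repr(template) + '\nprint(s % s)'

# Running `program` as a script would print exactly `program`:
assert (template % template) == program
```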


Woah...I'm sorry, but this post completely threw me for a loop. Very recently you emphatically stated that there are no such things as qualities, and that qualia are "fairy-tales". Now you're claiming that not only are SR-systems in some sense "magical" but that they are -qualitatively- different from all others?? Uhm...Do you care to clarify? :confused:
 
Sorry, you missed my edit. If the Church-Turing thesis is correct, then any algorithm is Turing computable. I agree that not everything need be an algorithm.
And so a brain might not be an algorithm and still not magic.

In short, the brain might just be physical.
A Turing machine with an RNG is BPP, and it appears to be the case that BPP = P. If this is the case, what can the TM+RNG do that the TM alone cannot?
Give a different output for the same input.

I am not saying there is anything special about that, I am just saying that it is not a TM and therefore it is not true to say that a non-TM cannot implement a TM.
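The determinism point is easy to demonstrate (a Python sketch with made-up function names; whether the RNG adds any computational *power* is the separate BPP-vs-P question raised above):

```python
import random

def deterministic(x):
    # Plain TM-style function: same input, same output, always.
    return x * x

def with_rng(x):
    # TM + RNG: the same input can produce different outputs.
    return x * x + random.random()

assert deterministic(3) == deterministic(3) == 9
# Two calls to with_rng(3) almost surely differ: random.random()
# draws are independent 53-bit floats, so collisions are negligible.
```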
You don't need to keep teaching me this. What do you think about the TM+RNG? Is the RNG a clincher?
A clincher for what?
No. Do you think that the brain needs more than just an approximation of any noncomputable reals?
Again you are missing the point.

The entire computationalist position here has hinged on drkitten's claim that a non-TM cannot implement a TM. But if a system can use non-discrete values to implement a TM then the claim is wrong.
Because I haven't heard any coherent suggestion for what else might be necessary.
And I haven't heard any coherent suggestion as to why it should be necessary in the first place.

It doesn't lead to a verifiable, falsifiable hypothesis. It doesn't explain anything. It is not simpler than the other hypothesis; in fact it is more extravagant.

It is not the obvious position or the default position - it seems more probable that the brain has non-discrete processes and would therefore not classify as an algorithm.

So I say the mind may or may not be an algorithm. I don't know. What is wrong with that? Why assume a position with no explanatory power?
What? Are you suggesting that there is something that is no more powerful than a Turing machine yet cannot be emulated by one?
No I am suggesting that there is something that is no more powerful than a Turing machine that is not a Turing machine.
Even if there are, is this a deep problem? Any nondiscrete process can be performed to any degree of accuracy by a Turing machine.
It would be a fatal problem to the claim that the mind was an algorithm.

It would not be a deep problem for modelling the mind.
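The "any degree of accuracy" claim can be illustrated by discretising a continuous process (a sketch assuming simple exponential decay, dx/dt = -x; the approximation error shrinks as the step count grows):

```python
import math

def euler_decay(x0, t, steps):
    """Approximate x(t) for dx/dt = -x with discrete Euler steps."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x -= x * dt
    return x

exact = math.exp(-1.0)  # x(1) for x0 = 1

# Finer discretisation => smaller error: the discrete machine can
# track the continuous process to any desired accuracy.
errors = [abs(euler_decay(1.0, 1.0, n) - exact) for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]
```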
I have no vested interest in the brain being a Turing machine. Certainly the haphazard parallel processing might be an issue. I'm just waiting for a compelling argument that the brain can't be a Turing machine.
And I am just waiting for a compelling argument that the brain must be a Turing Machine.

I am not rooting for any team.

But when there is an extravagant, unverifiable, unfalsifiable metaphysical claim infallibly declared (and I don't mean you) on the basis of a questionable interpretation of a well-known mathematical result, then that seems the very definition of woo to me.

The very best knowledge we will ever have on the subject will be in the form of falsifiable, verifiable hypotheses.

And I don't think there is one for either position at the moment. In fact we will probably have great advances in neuroscience without even caring about the answer to this question any more than we care about whether or not there really is a smeared out dead/alive cat in Schroedinger's box.
 
But you are part of the universe... are you not?

So what is this fundamental difference between you doing the calculations and a neuron doing the calculations that you are concerned about?

I know of no such fundamental difference -- both you and a neuron are entirely physical aggregations of particles. What is the difference?
Physical pathways of information. The neurons communicate with each other.

The calculations I make do not communicate with each other.

I could take all the calculations from the recheck and encode them onto a few million small computing devices ensuring that no device had consecutive instructions and fire them into space in various directions until they were at least a light year from each other and had no path of communication whatsoever.

And so, according to the theory, a real unified human consciousness would result across many light years, with no communication whatsoever between the components creating this consciousness.
 
Woah...I'm sorry, but this post completely threw me for a loop. Very recently you emphatically stated that there are no such things as qualities, and that qualia are "fairy-tales". Now you're claiming that not only are SR-systems in some sense "magical" but that they are -qualitatively- different from all others?? Uhm...Do you care to clarify? :confused:
Similarly, PixyMisa was very hostile to the idea that a self-driving car was self-aware in the same way that a human is self-aware.
 
In other words you made up the answer. No offense, but this is unconvincing.

There was a line of reasoning, based on established givens, that led to the conclusion. It didn't just pop into my head out of a vacuum O_o
 
