
My take on why the study of consciousness may not be so simple:

AkuManiMani said:
Every phenomenon is a 'behavior'. Heck, H2O is a behavior of subatomic particles, which are themselves a behavior, and so on. My point is that awareness is a specific class of behavior. As such, it would make sense to discuss its defining properties.
It makes sense, but it's difficult.

Regardless of what special significance one may, or may not, want to give to consciousness, what I'm saying is that: [1] If consciousness were simply a matter of computation or "complex behavior", we would never be unconscious
What? Who says the computation has to occur all the time?

and [2] If consciousness is a specific kind of physical phenomenon [which is the case I'm making here], reproducing it is not simply a matter of computer simulation. Like with the water example, a dynamo, or photosynthesis, one needs to physically generate the real deal. Scientifically and technologically speaking, I don't think we're there yet.
You keep asserting this, but you need to explain why consciousness falls into the category of processes where simulation is not equivalent to the real thing. Adding 2 + 2 on a computer is equivalent to doing it on my fingers.

~~ Paul
 
westprog said:
I've already given a trivial example of a non-algorithmic system - a Turing machine with a random number generator attached.
And so is that how you think the brain might be and thus be non-algorithmic? If so, can you find a reference to a paper that discusses a Turing machine with a random number generator attached? We still don't know what that does to the machine's power.

Non-deterministic, non-algorithmic systems are possible. We don't know if the brain is an algorithmic system. We do know that there is nothing in the description of a Turing machine that predicts consciousness. The only reason that anyone asserts that a Turing machine can be conscious is that it is assumed that the brain is a Turing machine, and that human beings are conscious.
There is nothing in the description of a Turing machine that predicts much of anything. Anything interesting is an emergent property.

There is another reason why I assert that a Turing machine can be conscious: No one has identified anything in the brain that makes me think it might be super-computational.

But the only reason that anyone ascribes consciousness to algorithms is because it occurs in human beings. It doesn't explain anything about machine behaviour. It doesn't make any predictions about what a computer program or robot will do. Everything that they do is already predetermined by the algorithm. Knowing that they are or are not conscious is of no use to anyone.
Of course the reason we would ascribe consciousness to a simulation of a human is because it occurs in real humans. That is true for the properties of any simulation: They are equivalent to the properties of the (non-computer) thing being simulated. In the case of consciousness, it's just a question of defining that set of behaviors that we call consciousness and then seeing if the computer exhibits equivalent behaviors.

Except perhaps... Exactly. It is possible for non-algorithmic systems to exist. I've given a trivial example - it's not useful, but then neither is a typical Turing machine. So if non-algorithmic systems can exist, how can we know that consciousness is purely algorithmic?
The trick is to select various non-algorithmic systems and see if there is any reason to think the brain might include those extra features. Except for a random number generator, I dare you to come up with anything interesting. At least, something interesting that doesn't already beg the question (e.g., a libertarian free will decision-maker).

In the case of the random number generator, we don't know what it does to a Turing machine's power. And even if it does something useful, I don't see why it is any threat to a physicalist description of the brain.

~~ Paul
 
No? I was under the impression that it modeled a person playing chess.

It models a very, very limited aspect of a person playing chess. In most cases, the way moves are selected takes a totally different approach.
 
Well, what I'm arguing is that the process in question differs more along the lines of "type" than "complexity". After giving this discussion a lot of thought, I've come to the conclusion that what's at issue here isn't merely a matter of information being computed, but the energetic form of the "stuff" in question.
Sorry, but this defeats the behavioral definition scheme. One could classify the underlying strata, but that defeats the purpose of using behavioral terms in the first place.

because then you are making assumptions about the black box.

If a system exhibits the behaviors defined as consciousness then it is conscious. The construction of the system is not relevant to the behavioral usage.
The simulated water in your example has many of the same operational properties as H2O. But, physically speaking, the simulated water is completely different from drinkable water. It does not, and cannot, serve as a stand-in for a body of H2O because they do not have the same physical make-up, and therefore have completely different physical properties. The same goes for the simulated dynamo, mentioned earlier, for exactly the same reasons.

The same also goes for biological processes like consciousness. Asserting that producing consciousness is simply a matter of flipping a particular pattern of switches in a Turing machine is like claiming that an ad hoc simulation of solar panels is an efficacious instance of photosynthesis.


No, because consciousness does not describe the wetness of water or the electricity of a generator.

It is a description of the behavior of a complex system, why would it matter what that system is made of?
 
And so is that how you think the brain might be and thus be non-algorithmic? If so, can you find a reference to a paper that discusses a Turing machine with a random number generator attached? We still don't know what that does to the machine's power.

It allows it to be non-deterministic, which a Turing machine can't do. Hence it can do things a Turing machine can't, and hence it is more powerful.

That doesn't mean that the brain is just Turing + randomness. It means that non-algorithmic systems are possible. It might be that a non-algorithmic system is required for consciousness.

There is nothing in the description of a Turing machine that predicts much of anything. Anything interesting is an emergent property.

There is another reason why I assert that a Turing machine can be conscious: No one has identified anything in the brain that makes me think it might be super-computational.


Of course the reason we would ascribe consciousness to a simulation of a human is because it occurs in real humans. That is true for the properties of any simulation: They are equivalent to the properties of the (non-computer) thing being simulated. In the case of consciousness, it's just a question of defining that set of behaviors that we call consciousness and then seeing if the computer exhibits equivalent behaviors.


The trick is to select various non-algorithmic systems and see if there is any reason to think the brain might include those extra features. Except for a random number generator, I dare you to come up with anything interesting. At least, something interesting that doesn't already beg the question (e.g., a libertarian free will decision-maker).

In the case of the random number generator, we don't know what it does to a Turing machine's power. And even if it does something useful, I don't see why it is any threat to a physicalist description of the brain.

~~ Paul

Of course it's not a threat to a physicalist description of the brain. I've always favoured a physical approach rather than the abstract approach of the Turing machine.
 
It allows it to be non-deterministic, which a Turing machine can't do. Hence it can do things a Turing machine can't, and hence it is more powerful.

No, it doesn't work that way: it cannot compute anything a Turing machine cannot. It is therefore not any more computationally powerful. This is because a NDTM can be simulated by a DTM.
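Cyborg's point can be illustrated at the finite-state level. Below is a toy sketch (my own example, not from the thread) of the standard subset-tracking idea behind simulating a nondeterministic machine deterministically: the simulator follows every branch at once, so nondeterminism adds no computing power here.

```python
# Toy nondeterministic acceptor: does the string contain an 'a' followed
# (not necessarily immediately) by a 'b'? States: 0 = saw nothing,
# 1 = saw 'a', 2 = accept. delta returns a SET of successor states.
def delta(state, symbol):
    succ = {state}                   # nondeterministic "ignore it" branch
    if state == 0 and symbol == 'a':
        succ.add(1)                  # guess this 'a' starts the pattern
    if state == 1 and symbol == 'b':
        succ.add(2)
    return succ

def nd_accepts(word):
    """Deterministic simulation: track the set of all reachable states."""
    states = {0}
    for ch in word:
        states = set().union(*(delta(s, ch) for s in states))
    return 2 in states
```

The same construction scales up: a deterministic Turing machine can enumerate a nondeterministic machine's computation tree breadth-first, which is why nondeterminism changes efficiency, not computability.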

I've always favoured a physical approach rather than the abstract approach of the Turing machine.

But apparently you have no problem with the abstract approach of using language.
 
westprog said:
It allows it to be non-deterministic, which a Turing machine can't do. Hence it can do things a Turing machine can't, and hence it is more powerful.
Only if the algorithm needs an infinite number of random numbers. Otherwise we can just initialize the tape with them. I think we really need to find an analysis of Turing machine + RNG.
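The "initialize the tape with them" point can be sketched concretely. In this toy example (names and setup are mine), a randomized quicksort receives its coin flips through a callable; the algorithm cannot tell whether the numbers come from a live generator or were all drawn in advance and written down.

```python
import random

def quicksort(xs, coin):
    """Randomized quicksort; `coin` is any callable yielding the next
    'random' integer. The algorithm is indifferent to whether the numbers
    are generated live or were pre-drawn onto a finite 'tape'."""
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[coin() % len(xs)]
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    return quicksort(lo, coin) + eq + quicksort(hi, coin)

data = [5, 3, 8, 1, 9, 2, 7]

# Coin flips from a live (seeded) generator:
rng = random.Random(42)
live = quicksort(data, lambda: rng.randrange(1_000_000))

# The same finitely many numbers, drawn beforehand and read off a list:
pre = random.Random(42)
tape = iter([pre.randrange(1_000_000) for _ in range(50)])
taped = quicksort(data, lambda: next(tape))

assert live == taped == sorted(data)
```

Since the algorithm only ever draws a bounded number of coins for a fixed input, the whole random sequence can be supplied up front as ordinary input.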

That doesn't mean that the brain is just Turing + randomness. It means that non-algorithmic systems are possible. It might be that a non-algorithmic system is required for consciousness.
Again, if we can't come up with a non-algorithmic requirement nor even a coherent description of an interesting non-algorithmic machine, then the point is rather moot.

~~ Paul
 
cyborg said:
No, it doesn't work that way: it cannot compute anything a Turing machine cannot. It is therefore not any more computationally powerful. This is because a NDTM can be simulated by a DTM.
That covers nondeterministic Turing machines. Can we come up with something super-computational that is more than a NDTM? In particular, what about a Turing machine with an RNG?

~~ Paul
 
Here is a paper with an interesting footnote:

http://www.lomont.org/Math/Papers/2008/Lomont_PRNG_2008.pdf

1. The class of problems efficiently solvable on a (Turing) machine equipped with a random number generator is BPP, and it is an open problem whether BPP = P, P being the class of problems efficiently solvable on a computer without random choice.

BPP is bounded-error, probabilistic, polynomial time.

From Wiki:
The existence of certain strong pseudorandom number generators is conjectured by most experts of the field. This conjecture implies that randomness does not give additional computational power to polynomial time computation, that is, P=RP=BPP.
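For a concrete feel for what BPP contains, here is a textbook BPP-style algorithm, Miller-Rabin primality testing (my own sketch, not taken from either linked paper). Swapping "true" randomness for a seeded pseudorandom generator makes no observable difference here, which is exactly the intuition behind the P = BPP conjecture.

```python
import random

def miller_rabin(n, rng, rounds=20):
    """Probabilistic primality test -- a textbook BPP-style algorithm.
    Each round can wrongly pass a composite with probability at most 1/4,
    so `rounds` independent rounds err with probability <= 4**-rounds."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False    # a witnesses compositeness: definitely not prime
    return True             # probably prime

assert miller_rabin(104729, random.Random(1))     # the 10000th prime
assert not miller_rabin(561, random.Random(1))    # Carmichael number
```

The bounded one-sided error shrinks geometrically with the number of rounds: that is the "bounded-error, probabilistic, polynomial time" of BPP.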


And here is a relevant paper by Taner Edis:

http://www.springerlink.com/content/t36552h6169u6mp8/

~~ Paul
 
When faced with unsupported assertions, personal incredulity fits the bill nicely.

I just thought it was ironic that someone who mindlessly asserts "Wrong" every other post while constantly making baseless assertions would criticize others for the very same thing. It's rather like that conversation between the pot and the kettle ;)
 
AkuManiMani said:
Regardless of what special significance one may, or may not, want to give to consciousness, what I'm saying is that: [1] If consciousness were simply a matter of computation or "complex behavior", we would never be unconscious

What? Who says the computation has to occur all the time?

That's not quite what I'm getting at. I'm pointing out that every bit of living tissue in our bodies actually does compute all the time.

AkuManiMani said:
and [2] If consciousness is a specific kind of physical phenomenon [which is the case I'm making here], reproducing it is not simply a matter of computer simulation. Like with the water example, a dynamo, or photosynthesis, one needs to physically generate the real deal. Scientifically and technologically speaking, I don't think we're there yet.

You keep asserting this, but you need to explain why consciousness falls into the category of processes where simulation is not equivalent to the real thing. Adding 2 + 2 on a computer is equivalent to doing it on my fingers.

~~ Paul

Keep in mind that when I talk of consciousness I'm not simply referring to the capacity to manipulate numbers. I'm talking about the capacity to -experience- information being processed as having some subjective quality. It's one thing to create a system in which an input is computed; it's quite another to have a system that experiences that input as a sensation or emotion. I'm of course referring to:

qua⋅le  [kwah-lee, -ley, kwey-lee]
–noun, plural -li⋅a  [-lee-uh]
Philosophy.
1. a quality, as bitterness, regarded as an independent object.
2. a sense-datum or feeling having a distinctive quality.

One cannot simply simulate sensation. A sensation is either produced, or it is not. What we do not know is what makes humans and other organisms capable of sensation, let alone how to reproduce that capacity. Simply reacting to a stimulus to produce an output is clearly not sufficient, as our own brains and bodies respond to stimuli all the time, with or without being conscious.

As I already pointed out, only a particular kind of tissue [in humans at least] seems able to produce this capacity, and then only within a particular range of states. I'm of course referring to neural tissue. Like every other tissue line, neurons form an intercellular network of communication that processes information and coordinates biological activity. Whatever it is about -this- group of cells that allows them to produce conscious experience, it isn't simply a matter of processing information. It stands to reason that it must be the physical context in which the information is being processed that translates it into what we call qualia. This means that, like electricity or water, consciousness has essential physical properties that cannot be reproduced via simulation.
 
AkuManiMani said:
Well, what I'm arguing is that the process in question differs more along the lines of "type" than "complexity". After giving this discussion a lot of thought, I've come to the conclusion that what's at issue here isn't merely a matter of information being computed, but the energetic form of the "stuff" in question.

Sorry, but this defeats the behavioral definition scheme. One could classify the underlying strata, but that defeats the purpose of using behavioral terms in the first place.

because then you are making assumptions about the black box.

If a system exhibits the behaviors defined as consciousness then it is conscious. The construction of the system is not relevant to the behavioral usage.

The "behavior" I'm referring to isn't just the capacity to process and react to stimuli, but to experience stimuli as some quality/sensation. The only known instances of consciousness are limited to systems of a particular composition. We have no means of directly observing consciousness in systems other than our own bodies. Therefore it's crucial to understand what physical properties of our physiology produce consciousness in us, so that we can use that knowledge to identify it in other systems.

AkuManiMani said:
The simulated water in your example has many of the same operational properties as H2O. But, physically speaking, the simulated water is completely different from drinkable water. It does not, and cannot, serve as a stand-in for a body of H2O because they do not have the same physical make-up, and therefore have completely different physical properties. The same goes for the simulated dynamo, mentioned earlier, for exactly the same reasons.

The same also goes for biological processes like consciousness. Asserting that producing consciousness is simply a matter of flipping a particular pattern of switches in a Turing machine is like claiming that an ad hoc simulation of solar panels is an efficacious instance of photosynthesis.

No, because consciousness does not describe the wetness of water or the electricity of a generator.

It is a description of the behavior of a complex system, why would it matter what that system is made of?

I'm not really following you on this one. What do you mean when you say that consciousness is a "description"? :confused:
 
That covers nondeterministic Turing machines. Can we come up with something super-computational that is more than a NDTM? In particular, what about a Turing machine with an RNG?

~~ Paul

I don't see how. If the RNG is used to decide among a set of possible state transitions, then it's just a way of describing something equivalent to an NDTM. If it's used as an output, it's just an RNG. Tautologically, an RNG is not computable, and I don't think you can call it super-computational.

Either way, the only thing an RNG would add... is an RNG.
 
Cool. Thanks. May I paraphrase further?

You're a funny guy RD. :D

Again, I see nothing wrong with your "paraphrasing."

Your method of argument reminds me of that cliché popular kid in junior high who makes fun of everyone by simply repeating back what they say in a contemptuous questioning tone of voice and scrunching up their face in that "yeaahhh, riiiight, as if" valley girl expression.
 
It allows it to be non-deterministic, which a Turing machine can't do. Hence it can do things a Turing machine can't, and hence it is more powerful.

Nope.

Cyborg has already said why, and he is correct.

And Paul actually gave links.

So ... nope.
 
I think you're premature to say "too". All I'm arguing is that, essentially, having something physical perform each of the calculations doesn't seem to be sufficient to produce what we think of as consciousness. But that doesn't mean I don't have other ideas of things which may be sufficient, that would even allow for the backwards in time scenarios.

What I was arguing in the last post was simply that you were wrong about the interdependencies affecting this scenario at all. It may very well be true that to run A', I would absolutely have to do particular things in particular orders, due to the serial nature. But the reason I have to do that is because it's (effectively) impossible to predict the inputs of any arbitrary calculation.

But when running N as a desk check, this restriction is lifted, because we've already run A'. As such, we know its history. We don't have to predict anything--we "postdict" it, which is much easier.

What I'm trying to feel for is what beyond just calculating the same things you feel is sufficient, if you don't believe N alone produces, effectively, every type of consciousness imaginable (we can map the calculations to N). If you require, say, that the interdependencies between the calculations also be modeled in order to produce consciousness, then you would get to say that the N machine isn't conscious (it does the calculations, but nothing in it affects anything else it would do).

Or you could also go the route that N does produce consciousness, but is not alone... because we did, after all, run A', then we mapped out each of the NAND gates in A', and then sorted them into N, and then ran N, so somewhere there must be a physical "representation" of the mapping... and it's that representation plus N that produces consciousness.

But that's the sort of thing I'm looking for... what it is you're arguing.

(And technically I was hoping PixyMisa would chime in, but I'm interested in your position as well).

As I said in my response to Pixy's backwards thing, the "desk check" is no longer the same information processing as the original forwards run. Because of exactly what you said -- you can't predict the inputs of an arbitrary calculation (although I wouldn't word it that way).

Especially if we are just playing the states backwards. A whole chunk of the original processing -- determining the next transition from within the system -- gets replaced by whatever mechanism is doing the playback, which is necessarily outside the system.

Now, that is not to say that backwards A might not be conscious in some form or another. I am just saying that it won't be conscious in the same way as A, and neither would N, for the same reason.

So yes, I would argue that the interdependencies among the calculations are just as important, if not more important, than the actual calculations.

In particular, I defy Pixy to come up with a way to satisfy the notion of self-reference in the reverse playback scenario.
 
Again, I see nothing wrong with your "paraphrasing."


That you agree to, essentially, having posed the following question may be my favorite. Thank you.

Are you asking that if we are in a past life regression, is there any mathematical reason that a property or entity in that past life regression could not be reproduced in a past life regressed by some other method?


Your method of argument reminds me of that cliché popular kid in junior high who makes fun of everyone by simply repeating back what they say in a contemptuous questioning tone of voice and scrunching up their face in that "yeaahhh, riiiight, as if" valley girl expression.

You know, my wife does that sometimes. I never had to put up with that when I was in school, though.

Maybe your parents could ask the teacher to get the other kids to quit picking on you?
 
As I said in my response to Pixy's backwards thing, the "desk check" is no longer the same information processing as the original forwards run. ... So yes, I would argue that the interdependencies among the calculations are just as important, if not more important, than the actual calculations.
And that's exactly what I'm looking for... thanks!
In particular, I defy Pixy to come up with way to satisfy the notion of self-reference in the reverse playback scenario.
Well, let me have a go...
Now, that is not to say that backwards A might not be conscious in some form or another. I am just saying that it won't be conscious in the same way as A, and neither would N, for the same reason.
We could climb the same way from A to a backwards machine. We start by making an equivalent machine Ab, which runs how we like--in this case, though, let's choose a layer that has T symmetry. Let's say Ab runs using ideal billiard ball physics (don't have such a thing? Not a problem--simulate it and run Ab on that layer). Now to build B, we simply run Ab in reverse order.

So we have a B and an N now, but there's a significant difference. If I had B, and that's all I had, I could reconstitute Ab trivially. I just run B backwards. But if I had N, and that's all I had, I have no way to get to A'.

So if the information flow is significant, there's a B you could run for any A that's the same, only backwards. And the claim wasn't quite that any consciousness produced by B would be equivalent to A--it was that it would be equivalent, only backwards.
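The reverse-playback idea can be made concrete with reversible logic. Here is a minimal sketch (my own toy example) using Toffoli (CCNOT) gates, each of which is its own inverse: running the gate list forward computes, and running the same list in reverse order recovers the initial state exactly, which is the sense in which B trivially reconstitutes Ab.

```python
def toffoli(bits, a, b, c):
    """Toffoli (CCNOT) gate: flip bit c iff bits a and b are both 1.
    It is reversible and, in fact, its own inverse."""
    bits = list(bits)
    if bits[a] and bits[b]:
        bits[c] ^= 1
    return bits

def run(bits, gates):
    for g in gates:
        bits = toffoli(bits, *g)
    return bits

circuit = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]    # an arbitrary gate sequence
state0 = [1, 1, 0, 0]

forward = run(state0, circuit)                  # the 'Ab' run
# Reverse playback (the 'B' machine): same gates, opposite order.
recovered = run(forward, list(reversed(circuit)))
assert recovered == state0
```

By contrast, running the same gates in some sorted but arbitrary order (the analogue of N) performs the same individual operations, yet in general neither computes the forward result nor lets you recover the starting state.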
 
Since this is counter-intuitive, it demonstrates exactly what I claimed--that is, that it doesn't seem to be sufficient to physically perform the same calculations.
Unfortunately, that's just the argument from personal incredulity.

I admit that I find in counter-intuitive too, but I see no reason to actually reject the conclusion.
 
