
My take on why the study of consciousness may not be so simple:

UndercoverElephant said:
I hope that was supposed to be sarcastic.
Why? Do you think the digestive system is super-computational?

"inner behaviour"???

This term means nothing to me. Behaviour is outer. I can observe behaviour. I can observe "inner behaviour" in your brain. I can't observe your consciousness.
You can restrict behavior to external behavior if you like, but I think it's useful to talk about inner behavior, too. For example, a panic attack, even if largely internal, is surely a behavior. Fight or flight, even if largely internal, is surely a behavior. Perhaps conscious thinking evolved as a largely internal behavior, piggybacking on existing behavioral mechanisms, to help the organism analyze and respond to the environment.

If internal experience is not behavior, then where is the line between behavior and internal experience?

~~ Paul
 
No more than the Theory of Evolution or the Theory of Relativity is "only a theory."

The Church-Turing thesis rests on proofs that all definitions of information processing so far proposed are equivalent. At this point, the list of such definitions is quite lengthy. We don't even have a well-formed definition of anything more powerful than a Turing machine -- except for the explicitly counterfactual notion of "oracle computing," which rather blatantly assumes magic.
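The mutual-simulation results behind those equivalence proofs can be made concrete. Here is a minimal sketch of one model of computation (a Turing machine that increments a binary number) implemented inside another (a Python program); the machine, its states, and its rules are invented for illustration, not taken from any post in this thread.

```python
# A minimal Turing-machine simulator: binary increment.
# Transition table: (state, symbol) -> (symbol to write, head move, next state)
RULES = {
    ("inc", "1"): ("0", -1, "inc"),   # carry: flip 1 -> 0, move left
    ("inc", "0"): ("1", 0, "halt"),   # absorb the carry
    ("inc", "_"): ("1", 0, "halt"),   # ran off the left edge: write a new digit
}

def run(tape_str, state="inc"):
    tape = list(tape_str)
    head = len(tape) - 1              # start at the least significant bit
    while state != "halt":
        if head < 0:                  # extend the tape with a blank cell
            tape.insert(0, "_")
            head = 0
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).lstrip("_")
```

For example, `run("1011")` returns `"1100"` -- binary 11 plus 1 is 12. Nothing here is "more powerful" than Python, and nothing in Python is more powerful than the machine being simulated; that two-way simulation is the point.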

So Robin and Westprog are in the rather uncomfortable position of asking whether or not it's possible to exceed the speed of light. I have told them, several times, that relativity theory says that it isn't. They then ask if relativity theory applies to angelic unicorns that are defined to have the ability to exceed the speed of light at will.
Except that they won't actually come out and even give a hypothetical example of something that contradicts the established mathematical and physical basis for the universality of computational simulations.

It's just nuh-uh! Again and again and again.

And the argument from personal incredulity wears thin after seventy or eighty repetitions.
 
And why is this awareness more than a behavior?

Every phenomenon is a 'behavior'. Heck, H₂O is a behavior of subatomic particles, which are themselves a behavior, and so on. My point is that awareness is a specific class of behavior. As such, it would make sense to discuss its defining properties.


I don't know why you're placing consciousness on a special pedestal. Breathing is different from blood pumping, too. Is one of them special? I agree that consciousness is different from other bodily functions, but why is it different in a special way?

~~ Paul

Regardless of what special significance one may, or may not, want to give to consciousness, what I'm saying is that: [1] if consciousness were simply a matter of computation or "complex behavior", we would never be unconscious; and [2] if consciousness is a specific kind of physical phenomenon [which is the case I'm making here], reproducing it is not simply a matter of computer simulation. As with the water example, a dynamo, or photosynthesis, one needs to physically generate the real deal. Scientifically and technologically speaking, I don't think we're there yet.

As to whether consciousness is special... All I can say is that without it there would be no one to consider anything to be of significance anyway :p
 
If consciousness were simply a matter of computation or "complex behavior", we would never be unconscious

Airport control systems and nuclear power plant control systems are both examples of "complex behaviour" - no one argues that these are equivalent behaviours.

As such, the argument is not that consciousness is simply complex behaviour - it is a specific class of complex behaviour.
 
Airport control systems and nuclear power plant control systems are both examples of "complex behaviour" - no one argues that these are equivalent behaviours.

As such, the argument is not that consciousness is simply complex behaviour - it is a specific class of complex behaviour.
Is there a hierarchy within this class of complex behaviors?
 
Yeah but I disagree with Pixy about the backwards thing too, so my argument still stands. See my post on that.
I think you're premature to say "too". All I'm arguing is that, essentially, having something physical perform each of the calculations doesn't seem to be sufficient to produce what we think of as consciousness. But that doesn't mean I don't have other ideas of things which may be sufficient, that would even allow for the backwards in time scenarios.

What I was arguing in the last post was simply that you were wrong about the interdependencies affecting this scenario at all. It may very well be true that to run A', I would absolutely have to do particular things in particular orders, due to the serial nature. But the reason I have to do that is because it's (effectively) impossible to predict the inputs of any arbitrary calculation.

But when running N as a desk check, this restriction is lifted, because we've already run A'. As such, we know its history. We don't have to predict anything--we "postdict" it, which is much easier.

What I'm trying to feel for is what, beyond just calculating the same things, you feel is sufficient, if you don't believe N alone produces, effectively, every type of consciousness imaginable (we can map the calculations to N). If you require, say, that the interdependencies between the calculations also be modeled in order to produce consciousness, then you would get to say that the N machine isn't conscious (it does the calculations, but nothing in it affects anything else it would do).

Or you could also go the route that N does produce consciousness, but is not alone... because we did, after all, run A', then we mapped out each of the NAND gates in A', and then sorted them into N, and then ran N, so somewhere there must be a physical "representation" of the mapping... and it's that representation plus N that produces consciousness.
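The record-then-postdict idea above can be sketched in a few lines. This is only an illustration under stated assumptions: the circuit (an XOR built from four NAND gates, standing in for A') and the gate names are hypothetical, invented for the example.

```python
def nand(a, b):
    return 1 - (a & b)

# A stand-in for A': XOR built from four NAND gates.
# Each gate evaluation is recorded as (gate id, input a, input b, output).
def run_circuit(x, y):
    trace = []
    def gate(gid, a, b):
        out = nand(a, b)
        trace.append((gid, a, b, out))
        return out
    g1 = gate("g1", x, y)
    g2 = gate("g2", x, g1)
    g3 = gate("g3", y, g1)
    g4 = gate("g4", g2, g3)   # g4 = x XOR y
    return g4, trace

out, trace = run_circuit(1, 0)

# Desk check: with the history already recorded, each gate can be
# re-evaluated independently, in any order -- postdiction, not prediction.
for gid, a, b, recorded in sorted(trace):
    assert nand(a, b) == recorded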

But that's the sort of thing I'm looking for... what it is you're arguing.

(And technically I was hoping PixyMisa would chime in, but I'm interested in your position as well).
 
Regardless of what special significance one may, or may not, want to give to consciousness, what I'm saying is that: [1] if consciousness were simply a matter of computation or "complex behavior", we would never be unconscious
Non-sequitur.

If consciousness is a result of computation, then if we are not performing those required computations, we will not be conscious.

[2] If consciousness is a specific kind of physical phenomenon [which is the case I'm making here], reproducing it is not simply a matter of computer simulation.
Non-sequitur.

If consciousness is a specific kind of physical phenomenon, then simulating that phenomenon will produce consciousness.

Like with the water example, a dynamo, or photosynthesis, one needs to physically generate the real deal.
Category error.

Indeed, a blatant and obvious category error that has been pointed out dozens of times already, so often in fact that if you are still raising this as though it were a serious point of dispute I must infer that you haven't read any of the preceding posts in this thread including your own.

Simulated water acts exactly like real water in the simulation. Real water acts exactly like simulated water in the real world.

Information can cross between the simulation and the real world, or else it's not a simulation at all. Since we define consciousness by its response to information, consciousness in the simulation is identical to consciousness in the real world.

Scientifically and technologically speaking, I don't think we're there yet.
Argument from personal incredulity.
 
How can it not?
Well, the particular scenario I gave mapped an arbitrary machine A, which goes through a finite series of steps, to a Turing equivalent machine A' using NAND gates, then mapped this to a simple, standard machine N.

N performs the same calculations as A' (the same NAND gate computations), and A' the same calculations as A (through Turing equivalence). This would imply that if simply physically performing the calculations is sufficient, then N should produce the same sort of consciousness as A.

N is fully described by repeating the following states:
(00)->1, (01)->1, (10)->1, (11)->0

...that is, it exhausts the computations of each part of A', in order, and just repeats until there are enough occurrences of each calculation to map the calculations of the NAND gates of A' onto. So the argument that just physically performing the calculations is sufficient translates to the argument that N will produce consciousness.

Since this is counter-intuitive, it demonstrates exactly what I claimed--that is, that it doesn't seem to be sufficient to physically perform the same calculations.
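The mapping described above can be sketched directly. This is a minimal illustration, not anyone's actual construction: the recorded gate evaluations of A' are a made-up example list, and N is modeled as nothing but the four NAND rows repeated.

```python
def nand(a, b):
    return 1 - (a & b)

# N's repeating cycle: every NAND input pair, in a fixed order
CYCLE = [(0, 0), (0, 1), (1, 0), (1, 1)]

def run_N(cycles):
    """Emit ((a, b), nand(a, b)) for `cycles` repetitions of the cycle."""
    stream = []
    for _ in range(cycles):
        for a, b in CYCLE:
            stream.append(((a, b), nand(a, b)))
    return stream

# Hypothetical record of A''s gate evaluations: (inputs, output) pairs
gate_evals = [((1, 0), 1), ((1, 1), 0), ((0, 1), 1), ((0, 1), 1)]

# Map each gate evaluation of A' onto a distinct, matching occurrence
# in N's stream -- N "does" every calculation A' did, just unconnected.
stream = run_N(cycles=len(gate_evals))
used = set()
mapping = []
for inputs, out in gate_evals:
    i = next(i for i, (inp, o) in enumerate(stream)
             if i not in used and inp == inputs)
    used.add(i)
    mapping.append(i)
    assert stream[i][1] == out
```

The mapping always succeeds, because N exhausts all four rows each cycle; what N lacks is any dependency between the mapped steps, which is exactly the point under dispute.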
 
Correct. That is why we keep asking for a coherent description of how the brain might not be algorithmic.

~~ Paul

I've already given a trivial example of a non-algorithmic system - a Turing machine with a random number generator attached.

Non-deterministic, non-algorithmic systems are possible. We don't know if the brain is an algorithmic system. We do know that there is nothing in the description of a Turing machine that predicts consciousness. The only reason that anyone asserts that a Turing machine can be conscious is that it is assumed that the brain is a Turing machine, and that human beings are conscious.
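The "Turing machine with a random number generator attached" can be sketched as follows. The toy machine and its transition rule are invented for illustration; the only point is the contrast between a fixed bit source and a random one.

```python
import random

def step(state, bit):
    """One transition of a toy machine; `bit` plays the role of the RNG tape."""
    return (2 * state + bit) % 97

def run(bit_source, steps=16):
    state = 0
    for _ in range(steps):
        state = step(state, bit_source())
    return state

fixed = lambda: 1                      # ordinary algorithm: output is determined
coin = lambda: random.randint(0, 1)    # RNG attached: output is not

assert run(fixed) == run(fixed)        # deterministic runs always agree
```

Two calls to `run(coin)` need not agree: the program alone no longer fixes the output, which is what makes the combined system non-algorithmic in the sense used here.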
 
Who is saying that algorithmic processing has the property of consciousness just because it is going on in the human brain? I'm not, nor are others. I'm saying that certain types of processing produce internal behaviors that we call consciousness, and that this would be the case in a computer as well as a human brain.

But the only reason that anyone ascribes consciousness to algorithms is because it occurs in human beings. It doesn't explain anything about machine behaviour. It doesn't make any predictions about what a computer program or robot will do. Everything that they do is already predetermined by the algorithm. Knowing that they are or are not conscious is of no use to anyone.

And the only reason I'm saying that it must be an algorithmic process is because nobody can describe a super-algorithmic process coherently, let alone in a way that convinces me that it is required for consciousness.


What more is going on except perhaps for randomness?

~~ Paul

Except perhaps... Exactly. It is possible for non-algorithmic systems to exist. I've given a trivial example - it's not useful, but then neither is a typical Turing machine. So if non-algorithmic systems can exist, how can we know that consciousness is purely algorithmic?
 
Is there a hierarchy within this class of complex behaviors?

Depends who you believe. It's been asserted that every switch is conscious, in a sense, and if you connect enough of them, then you get loads of consciousness. It's also been asserted that you need them connected in a particular special way, sorta kinda like a brain.
 
Since we define consciousness by its response to information, consciousness in the simulation is identical to consciousness in the real world.

What do you mean "we", white man?

We don't define consciousness by its response to information. That's just the trick of assuming what consciousness is, and then using the definition to prove what we want.

Consciousness is defined as much by its interaction with the physical world as with information.
 
So as far as you are concerned Paul is not a conscious being?

He probably is. Human beings are generally very good at observing conscious behaviour in other conscious beings. We can tell that most of the participants in this discussion are conscious beings, for example, except for the bot that inserts monosyllabic responses after each sentence.
 
