Are You Conscious?

Poll: Are you conscious?

  • Of course, what a stupid question - Votes: 89 (61.8%)
  • Maybe - Votes: 40 (27.8%)
  • No - Votes: 15 (10.4%)

Total voters: 144
Are you cognitively capable of explaining what the "error" is and why it's an "error"? :rolleyes:
Also, identify the categories.

It seems to me it would only be a category error if one first assumed the conclusion.
 
Well, the whole point of creating physical theories is to produce quantitative models of the world. We can then use these models for things like simulations of physical systems. Of course, our theories will always be incomplete, but they are still good, workable approximations :)

It's not that theories aren't available - it's that something like the position of an electron doesn't seem to be something that is computable, even in principle.

I realise that this is a slight digression, and I promise to get back on track in time for exam week.
 
Are you cognitively capable of explaining what the "error" is and why it's an "error"? :rolleyes:

Because consciousness is SRIP, by definition. So if it's information processing, just like a computer, that means it's just like a computer. And all the experts agree on this - because if they don't, they aren't experts.
 
Because, as I said earlier, it would imply that a moment of consciousness just like the one I am experiencing right now could result from millions of individual processes in millions of devices isolated from each other in space and millions of miles apart, so that their processes would complete years before the data that my conscious moment represents could physically be in the same place.
We did go through that in some depth, and although we didn't reach a certain conclusion, at this point I'm not convinced that that scenario actually follows from the computational model. At the least, it's not meaningful - the consciousness, if it is such, is entirely isolated from the Universe, and cannot be detected as a consciousness until it is brought back together again, a process that itself raises problems.

While it's an interesting question and I'm glad you raised it, it doesn't really pose any problems for the computational model in general.
 
Also, identify the categories.

It seems to me it would only be a category error if one first assumed the conclusion.
A category error is the error of assigning properties to an item which it cannot possibly have - either by definition or by logical contradiction.

So if you were to suggest that a simulated orange tree would bear physical oranges, you would be making a category error. And if you object to the computational model by raising the analogy that a simulated orange tree doesn't bear physical oranges, you are either (a) making a category error or (b) presenting a strawman argument accusing the other party of making a category error - and in the process making a different category error yourself.

In this case, the error lies in insisting that a simulation of a physical process cannot produce the same results as the physical process because it is not the physical process. It can, of course, and does, of course, in the simulation. Why anyone imagines that it would happen anywhere else, or why anyone thinks anyone else imagines it would happen anywhere else, I don't know. Why anyone imagines that this has any significance whatsoever for an informational process, I also don't know. The only response we've seen has been a repeated insistence on the lack of physical oranges.

It's a red herring - it's entirely irrelevant. And the insistence that it is relevant is a category error, one way or the other.

Because, for consciousness, whether or not it relies on some peculiar physics of the brain (it doesn't, but it doesn't matter either), a simulation of the physical process of the brain will produce a simulation of the physical outcome of that process - i.e. consciousness - and while we cannot, of course, move physical objects in and out of the simulation, we can move information, and we can observe and confirm the presence of consciousness.
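As a toy illustration of that boundary (my own sketch, nothing more - the tree model is invented for the example): the simulated tree bears simulated oranges, and only information about them ever crosses out of the simulation.

[code]
# A toy "orange tree" simulation - purely illustrative.
# The simulated tree bears simulated oranges; what crosses the
# simulation boundary is information about them, never fruit.

def grow(years, oranges_per_year=12):
    """Advance the simulated tree and return its informational state."""
    return {"age": years, "oranges": years * oranges_per_year}

state = grow(3)
print(state["oranges"])  # 36 - information we can move out and inspect
[/code]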

From the Church-Turing thesis and the broader Church-Turing-Deutsch thesis, and from quantum mechanics and the observed behaviours of the brain - which indicate first that everything is effectively computable, and second that even if that's not the case, the brain certainly is - we can simulate the brain sufficiently precisely to produce consciousness, even without knowing how that works.

It's an existence proof. We don't yet know how to build a conscious computer - or to be exact, we know of only one method, and it is messy, unreliable, and expensive - but we know it can be done. All the objections raised in this thread boil down to presupposing magic in the operation of the brain, or worse.
 
The functioning, yes. The experience itself, not necessarily.
Actually, I'd say no.

For the most part, we design systems so that we know how they work, so that we know that they do work and so that we can fix them when they break.

But you can design, say, a nuclear reactor without even knowing what atoms are, without, in fact, anyone knowing what atoms are, just using empirical data. We invented beer and bread without knowing what yeast is; brass and bronze without any grasp of the periodic table; we discovered evolution with scarcely a clue about genetics.

Pretty often we do just build something that works and figure out the details later.
 
It's not that theories aren't available - it's that something like the position of an electron doesn't seem to be something that is computable, even in principle.

I realise that this is a slight digression, and I promise to get back on track in time for exam week.

Alright, I think I see what you're saying: The physical world does not conform to strict algorithmic rules, so in principle, we could not faithfully calculate its exact behavior on a Turing device.

Is that about right?
 
Actually, I'd say no.

For the most part, we design systems so that we know how they work, so that we know that they do work and so that we can fix them when they break.

But you can design, say, a nuclear reactor without even knowing what atoms are, without, in fact, anyone knowing what atoms are, just using empirical data.

I'm sorry, but nuclear reactors were not built merely by tinkering and happenstance. It took centuries of accumulated scientific investigation, experimental ingenuity, and theoretical insight. It was only after humans acquired the requisite empirical knowledge and theoretical understanding that they could design working nuclear reactors.

We invented beer and bread without knowing what yeast is

People could not use yeast to make bread without being able to physically identify yeast.

brass and bronze without any grasp of the periodic table;

Humans who synthesized these substances could physically identify the materials needed to craft those alloys and understood their physical properties well enough to manipulate them effectively.

we discovered evolution with scarcely a clue about genetics.

Genes were logically inferable from the obvious observation that organisms inherit physical traits from their parents [the THAT of their existence]. However, genetic engineering had to wait until the physical constituents of genes were actually identified [the WHAT of their existence].

Pretty often we do just build something that works and figure out the details later.

In other words, you really don't know WTF you're doing :rolleyes:
 
A category error is the error of assigning properties to an item which it cannot possibly have - either by definition or by logical contradiction.

So if you were to suggest that a simulated orange tree would bear physical oranges, you would be making a category error. And if you object to the computational model by raising the analogy that a simulated orange tree doesn't bear physical oranges, you are either (a) making a category error or (b) presenting a strawman argument accusing the other party of making a category error - and in the process making a different category error yourself.

In this case, the error lies in insisting that a simulation of a physical process cannot produce the same results as the physical process because it is not the physical process. It can, of course, and does, of course, in the simulation.

There is no "in the simulation"; a simulation is not a magical looking-glass world. Physically speaking, the only thing going on with the simulation is the switching mechanisms of whatever device is running it. The simulated "orange tree" does not exist in the computer; it exists in the imagination of the user.


Why anyone imagines that it would happen anywhere else, or why anyone thinks anyone else imagines it would happen anywhere else, I don't know. Why anyone imagines that this has any significance whatsoever for an informational process, I also don't know. The only response we've seen has been a repeated insistence on the lack of physical oranges.

Or it could be that you're too thickheaded to even recognize the childish naivete of your conception of the world.

It's a red herring - it's entirely irrelevant. And the insistence that it is relevant is a category error, one way or the other.

Right. The fact that a simulation is just a representation is a "red herring". IMO, comments like these are proof positive that you have a serious cognitive handicap of some kind.
 
Alright, I think I see what you're saying: The physical world does not conform to strict algorithmic rules, so in principle, we could not faithfully calculate its exact behavior on a Turing device.

Is that about right?

Pretty much. That also implies that the universe isn't a Turing machine. If it is a Turing machine, it's a Turing machine running an emulation of a non-computable universe.
 
There is no "in the simulation"; a simulation is not a magical looking-glass world. Physically speaking, the only thing going on with the simulation is the switching mechanisms of whatever device is running it. The simulated "orange tree" does not exist in the computer; it exists in the imagination of the user.

Or it could be that you're too thickheaded to even recognize the childish naivete of your conception of the world.

Right. The fact that a simulation is just a representation is a "red herring". IMO, comments like these are proof positive that you have a serious cognitive handicap of some kind.

If one accepts that the human brain is a Turing machine carrying out information processing, and if one further decides that consciousness is purely a matter of that information processing, then, yes, one can conclude from the principle of equivalence between Turing machines that if the "Turing" operation of the brain is duplicated on another type of system, exactly the same results will be produced.
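That equivalence itself is easy to illustrate. A minimal sketch (the toy machine is my own invention, not anything proposed in this thread): one abstract transition table executed by two differently built interpreters, with the result depending only on the table, never on the interpreter.

[code]
# One abstract Turing-machine table, two physically different
# interpreters, identical results. Machine: binary increment,
# head starting on the rightmost bit.
TABLE = {
    ('carry', '1'): ('carry', '0', -1),  # 1+1 = 0, carry moves left
    ('carry', '0'): ('done',  '1',  0),  # absorb the carry and halt
    ('carry', ' '): ('done',  '1',  0),  # ran off the left edge
}

def run_dict_tape(bits):
    """Substrate 1: the tape is a dict from position to symbol."""
    tape = {i: b for i, b in enumerate(bits)}
    pos, state = len(bits) - 1, 'carry'
    while state != 'done':
        state, symbol, move = TABLE[(state, tape.get(pos, ' '))]
        tape[pos] = symbol
        pos += move
    return ''.join(tape[i] for i in range(min(tape), max(tape) + 1))

def run_list_tape(bits):
    """Substrate 2: the tape is a mutable list, grown on demand."""
    tape, pos, state = list(bits), len(bits) - 1, 'carry'
    while state != 'done':
        if pos < 0:               # grow the tape to the left
            tape.insert(0, ' ')
            pos = 0
        state, symbol, move = TABLE[(state, tape[pos])]
        tape[pos] = symbol
        pos += move
    return ''.join(tape)

for n in ('1011', '111', '0'):
    assert run_dict_tape(n) == run_list_tape(n)
print(run_dict_tape('111'))  # '1000' on either substrate
[/code]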

Firstly, of course, we don't know this to be true. We cannot model what the brain does with any precision, and we cannot show that the brain works in any precisely specified way.

However, there is a more fundamental objection based on the actual function of the brain. The brain is not an information processing device. It does not take in a set of calculations and produce a neat output. It's a control device that is constantly interacting with the world in a way that lacks neat boundaries.

As I've shown earlier in this thread, time-dependent functions are an essential element of the operation of the brain. Time-dependent functions do not exist in the realm of Turing machines. Rocketdodger has made four claims on this, AFAIAA:

  1. A Turing machine can simulate time-dependent processes.
  2. Any implementation of a Turing machine will have time-dependent features.
  3. Order in a Turing machine is essentially the same thing as time dependence.
  4. Something about general relativity.

It may be (though I'm not certain) that a pure Turing machine can simulate time-dependent processing. It is certainly not true that a pure Turing machine can perform any time-dependent function: a Turing machine is, by definition, outside the realm of time. And it is not true to say that order is equivalent to time dependence.
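To be fair to claim 1, the weak sense of "simulate" is at least coherent: a clockless stepper can carry a time variable as ordinary data. A minimal sketch (the leaky-integrator equation and its parameters are mine, purely illustrative):

[code]
import math

# A clockless stepper: 't' is just data, not real time. Simulating
# a leaky integrator dx/dt = -x/tau with an explicit timestamp.
def step(state, dt=0.001, tau=0.02):
    """One update; when this physically runs is irrelevant to the result."""
    t, x = state
    return (t + dt, x * math.exp(-dt / tau))

state = (0.0, 1.0)       # (simulated time, value)
for _ in range(1000):    # could execute in a microsecond or a century
    state = step(state)
print(state)             # same (t, x) regardless of wall-clock timing
[/code]

Whether that weak sense is enough is exactly the point in dispute - nothing in the sketch gives the machine time dependence of its own.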

The physical implementation of the Turing machine is also irrelevant, since it's being claimed that the operation is independent of any particular physical implementation.

Since we know that, in principle, the functionality of the brain cannot be carried out by a Turing machine, the idea that we know for certain that consciousness is a result of Turing-machine operations is extremely dubious. Not only do we not know it to be true, we should consider it a very unlikely hypothesis.

If we are to explore the arena of artificial brains, then we should do so on the basis of machines that could replace brains at least in principle (recognising that in practice this might not be possible), rather than pursuing theories about machines that we know cannot.
 
A category error is the error of assigning properties to an item which it cannot possibly have - either by definition or by logical contradiction.
A category error (a term somewhat out of favour in philosophy post Urchfont and Bleaney) is any proposition or argument which ignores any categorical boundary (usually ontological categories).

Since the simulated orange tree is highlighting those categorical boundaries rather than ignoring them, it cannot possibly be a category error.

Even using your Wiki definition, it is still not a category error, since it is highlighting the inability to assign properties rather than assigning any properties.
a simulation of the physical process of the brain will produce a simulation of the physical outcome of that process - i.e. consciousness - and while we cannot, of course, move physical objects in and out of the simulation, we can move information, and we can observe and confirm the presence of consciousness.
Hmm... no - you see, you are assuming that simulation of x == x. Category error.
All the objections raised in this thread boil down to presupposing magic in the operation of the brain, or worse.
I have to ask myself: if you were really so sure of yourself, would you really have to keep repeating this silly straw man?
 
We did go through that in some depth, and although we didn't reach a certain conclusion, at this point I'm not convinced that that scenario actually follows from the computational model.
It really does - nobody has shown me why run3 would differ from run2. And the only change I would have to make would be to propose that the clocks are synchronised taking into account the space-time geometry of their eventual location.

Otherwise it is perfectly clear that if this conscious moment I am experiencing right now could be run1, then it could be runs 2, 3, and 4.
At the least, it's not meaningful - the consciousness, if it is such, is entirely isolated from the Universe,
By which you would have to conclude that your own consciousness was entirely isolated from the Universe when no-one was observing you.
and cannot be detected as a consciousness until it is brought back together again, a process that itself raises problems.
Do you mean that I cannot detect my own consciousness? I cannot conclude that I am conscious until someone else observes that my behaviour is consistent with consciousness?
While it's an interesting question and I'm glad you raised it, it doesn't really pose any problems for the computational model in general.
It does not, just so long as you are confident that you could be run4.
 
You seem to be firmly stuck in a conceptual mode [by will or by flub] that's preventing you from seeing what I'm getting at. Metaphorically speaking, what I'm trying to get you to do is step back, stop thinking merely in terms of the abstract symbolism you're using to count tally sticks, and focus on the sticks as physical objects.
I see what you are getting at (consciousness as we know it currently only happens in brains, so it is something brain-specific), I just disagree that consciousness is necessarily brain-specific -- brains are evolved information processors, so (in principle) we can implement the information processing that brains do in some other suitable physical object.
Remember that computations are carried out by physical hardware.
Digital physics aside, of course.
Whatever a given IP system produces is by virtue of the interactions of its physical constituents. Terms like "inputs", "ops", and "outputs" are just the functional labels we apply to what concrete objects are doing.
Yep.
Understanding, in the abstract, the computational ops that underlie the symbolic representations on your calculator's screen is not the same as understanding the LCD that's displaying those symbols or knowing how to make one.
Right, that abstract understanding enables you to implement them on any suitable physical substrate that you have sufficient technical proficiency with.

Step back into metaphor with me, because I really don't think you truly grok what it is I've been saying. Think of the mind as a computer monitor, consciousness as the illumination of the screen, and qualia as the various color pixels the display is able to produce. Symbols on the screen are the products of the computations performed, but the actual display [i.e. the screen, the pixels, and the power used to light the screen] used to convey those symbols is a product of the -physics- of the hardware.
Right. And in support of my viewpoint, any display that meets the minimum necessary technical requirements will do whether it is based on liquid crystals, lasers, mirrors, phosphor coated tubes, cuttlefish rhodopsins, whatever.

Interesting that you appear to view consciousness in a transmission/reception paradigm, though.

So do you think conscious experience is something going on in a magical ether realm of abstraction separate from the physical universe?
No, I just recognize that information processing does not depend on any particular substrate -- any substrate that meets the minimum requirements of being able to accept, store, transform, and output information will do.
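To make "minimum requirements" concrete, here is a hypothetical sketch (the class and function names are mine): two unrelated substrates providing the same accept/store/transform/output operations, and an information process that cannot tell them apart.

[code]
# Two unrelated "substrates" offering the same minimal operations;
# the information process (a running average) is defined over the
# operations, not over the substrate.

class ListStore:
    def __init__(self): self.cells = []
    def accept(self, x): self.cells.append(x)         # accept + store
    def contents(self): return list(self.cells)

class DictStore:
    def __init__(self): self.cells, self.n = {}, 0
    def accept(self, x):
        self.cells[self.n] = x                        # accept + store
        self.n += 1
    def contents(self): return [self.cells[i] for i in range(self.n)]

def running_average(store, stream):
    out = []
    for x in stream:
        store.accept(x)
        vals = store.contents()
        out.append(sum(vals) / len(vals))             # transform + output
    return out

data = [2.0, 4.0, 6.0]
assert running_average(ListStore(), data) == running_average(DictStore(), data)
print(running_average(DictStore(), data))  # [2.0, 3.0, 4.0]
[/code]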
Nescafe, if I take a pencil and write "1" on a piece of paper is it literally the number one? :rolleyes:
There is no literal physical number 1. That does not stop it from being a useful abstraction.

Your view of what we must do to understand consciousness seems as silly as insisting that the only way we can understand 1 is by understanding what exactly is happening with that graphite/clay mark on the paper at the atomic level.

Your point being?
Snark.

...is just a switching pattern on a computer that we use to symbolically represent an actual power plant.
To the same degree of oversimplification, consciousness is just a pattern of neural discharges in the brain.

Unless you have evidence for it being more than that?
 
Hmm, how about this:

(a) The content of conscious processes is information. This information is computable.

However, there is another essential property of conscious processes (perhaps this property should rightly be called the property of consciousness or being):

(b) There is something that it is like to be that information that makes up the conscious processes of Democracy Simulator, to use a (handy) example.

The conflict here seems to be between parties that are (a) theorists and parties that are (b) theorists. I hope I have been following correctly.
Of course, if you are an (a) theorist, in that consciousness is only computable information, then you are not going to agree with someone who also believes (b) to be true as to whether consciousness is computable. If (a) and only (a) is true, then a simulation of conscious processes is consciousness. If, however, there is something that it is like to be conscious, then we cannot be sure that a simulation will do.

Obviously I have pinned my colours to the mast: I do believe that (b) is also true - that there is something that it is like to be Democracy Simulator - and I believe that this consciousness/being property is not information and not computable; it is a/the phenomenal reality. I would base this belief on three axioms (borrowed roughly from Objectivism):

1) There is existence
2) There is identity

and

3) There is experience of existence and identity.

Now 3) I would say is the phenomenal reality of being/consciousness.

Of course one may wish to attack these axioms, but the process is self-defeating as due to their axiomatic nature, in order to refute these axioms, one must recognise their validity.

I understand that there has been an attempt to do away with consciousness (b), but I think that attempt is similar in effect to attempting to do away with the concept of existence, and on equally mistaken grounds. I am happy to go into this if anyone else wants to go down that road.
In short, would a simulation of existence be the same as existence? Obviously the informational content would be the same, but is there anything that it is like to exist? I think the 'no' answer to the (second) question has obvious flaws. In other words, one could have all the informational content of all existing things and yet one would still not know what existence was. Hence there is no existence? I don't think so.

To PixyMisa I would ask the question:
Is there anything that it is like to be PixyMisa?
 
In a really condensed form:

There is something incomplete in the definition of 'to be' in the same way that there is something incomplete in the definition of 'is'. This leads to circularity.

However we should not conclude that therefore there is no being and there is no existence, as this is absurd, but rather that these two things are the boundaries of the real.

We'll probably never understand 'what' they are as the 'what' questions come after the necessary assumption of is and be.
 
