
My take on why the study of consciousness may indeed not be so simple.

You seem to have missed the point.

If we are wrong about what the zero level is, then there are negative levels
That does not follow. If there is a mechanism that produces our consciousness, why does it then follow that another mechanism capable of producing consciousness is at a lower level?

I think you are missing the point. Whatever produces the consciousness produces the consciousness. Whatever is not producing our consciousness is not producing our consciousness.
Here is the question I would like you to answer -- if we are in a simulation, is there any mathematical reason that a property of an entity at our level could not be replicated at a lower level?
I don't accept that there is a lower level - what is producing our consciousness is producing our consciousness.

Are you asking whether, if we are in a simulation, there is any mathematical reason that a property or entity in that simulation could not be reproduced by a simulation created by some other method?

The answer is that I don't know.
 
And remember that we have already established that we cannot rely on it to behave as the mathematics of information processing say it should.

Lol -- this is the same approach (albeit from the opposite side) that Lucas-Penrose tried to take.

Look, there is nothing about "the mathematics of information processing" that states that an algorithm will always do what you think it should do.

The "mathematics of information processing" tells us that an algorithm will always do what it will do. Nothing more, nothing less. What you think it will do is irrelevant.

So the fact that you think an algorithm should do arithmetic correctly, yet it does not, is only indicative of you not understanding the algorithm.
 
Are you asking whether, if we are in a simulation, there is any mathematical reason that a property or entity in that simulation could not be reproduced by a simulation created by some other method?

The answer is that I don't know.

Yes, that is the question.

The relevance to this discussion being that if there is no mathematical reason why not, and if we are in a simulation, then we should be able (with sufficient technology) to replicate every single property of every physical substance in the universe within a simulation of our own.

I am not aware of any such mathematical reason either.
 
By showing that a Turing-machine is a universal computational device.
Anything that is capable of computing -- of performing arithmetic, really -- is either equivalent to a Turing machine or is less powerful than a Turing machine.

Since humans are capable of performing arithmetic, they are either equivalent to or less powerful than a Turing machine.
But, again, we have established that when humans do arithmetic they do not behave as the mathematics of information processing describe.

It is not enough that we have a TM equivalent for when we get the maths right; we need at least a TM equivalent for when we get the maths wrong.

And you have to demonstrate that the human brain only operates in terms of natural numbers.
 
Lol -- this is the same approach (albeit from the opposite side) that Lucas-Penrose tried to take.

Look, there is nothing about "the mathematics of information processing" that states that an algorithm will always do what you think it should do.

The "mathematics of information processing" tells us that an algorithm will always do what it will do. Nothing more, nothing less. What you think it will do is irrelevant.

So the fact that you think an algorithm should do arithmetic correctly, yet it does not, is only indicative of you not understanding the algorithm.
Well, I was saying that the Lucas-Penrose argument was nonsense on this forum years ago, so I am not saying that.

But you appear to be saying that the brain does what it does and the mathematics of information processing says that an algorithm does what it does and therefore the brain is an algorithm.

But drkitten is saying that we can include the brain in the C-T thesis because it is a computing device and by definition should behave as the mathematics of information processing say it should.

And you are telling me that the mathematics of information processing say something should behave the way it behaves.

It is circular.
 
Yes, that is the question.

The relevance to this discussion being that if there is no mathematical reason why not, and if we are in a simulation, then we should be able (with sufficient technology) to replicate every single property of every physical substance in the universe within a simulation of our own.

I am not aware of any such mathematical reason either.
But I know of no mathematical reason that we should be able to either.
 
OK, back to the desk check experiment.

The guy rechecking the desk check also causes an instant of consciousness to happen, because he is doing exactly the same as the guys who created the original sheets.

So is there still an instant of consciousness if this guy starts at the end and works back to the beginning?
 
OK, back to the desk check experiment.

The guy rechecking the desk check also causes an instant of consciousness to happen, because he is doing exactly the same as the guys who created the original sheets.

So is there still an instant of consciousness if this guy starts at the end and works back to the beginning?

Ah, a much more interesting question!

No, there is not, because the algorithm would not be the same.
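A toy sketch of why working backwards is a different algorithm: applying the same steps back to front produces a different trace of intermediate states. The lambdas here are hypothetical stand-ins for the desk-check sheets.

```python
def run(steps, state):
    """Apply each step in order, recording every intermediate state
    (the 'sheets' a desk-checking clerk would fill in)."""
    trace = [state]
    for step in steps:
        state = step(state)
        trace.append(state)
    return trace

# Three simple arithmetic steps standing in for the desk-check procedure.
steps = [lambda x: x + 3, lambda x: x * 2, lambda x: x - 1]

forward = run(steps, 5)         # [5, 8, 16, 15]
backward = run(steps[::-1], 5)  # [5, 4, 8, 11] -- a different computation

print(forward)
print(backward)
```

The reversed run is not a replay of the original at all; it computes something else entirely, which is why it would not be "the same algorithm".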
 
Well, I was saying that the Lucas-Penrose argument was nonsense on this forum years ago, so I am not saying that.

But you appear to be saying that the brain does what it does and the mathematics of information processing says that an algorithm does what it does and therefore the brain is an algorithm.

But drkitten is saying that we can include the brain in the C-T thesis because it is a computing device and by definition should behave as the mathematics of information processing say it should.

And you are telling me that the mathematics of information processing say something should behave the way it behaves.

It is circular.

No, I am simply saying that an algorithm doesn't always give the output you expect it to give you -- because human knowledge is limited, of course.

The Lucas-Penrose fallacy involves the assumption that the algorithm of a perfect mathematician must give the output Lucas and Penrose expect it to -- the output of knowing something and being correct. But it has been shown, in many rebuttals, that the fact of a perfect mathematician thinking it knows something correctly, and it actually knowing something correctly, are not one and the same -- because the algorithm that constitutes the mathematician can simply be wrong (wrong, as in, the algorithm isn't what everyone thinks it is).

And here you are suggesting the same sort of thing -- that because drkitten makes arithmetic errors his mind might not be an algorithm. Well, that doesn't follow, because his mind could be an algorithm that simply calculates arithmetic incorrectly.
 
And here you are suggesting the same sort of thing -- that because drkitten makes arithmetic errors his mind might not be an algorithm. Well, that doesn't follow, because his mind could be an algorithm that simply calculates arithmetic incorrectly.
And we could write an algorithm that would calculate arithmetic incorrectly.
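For what it's worth, here is a throwaway sketch of such an algorithm: a perfectly deterministic adder whose carry handling is deliberately broken in a human-looking way. The error model is invented purely for illustration.

```python
def sloppy_add(a, b):
    """Add two non-negative integers digit by digit, but 'forget' the
    carry out of the tens column -- a hypothetical, deliberately buggy
    error model, like a tired human dropping a carry mid-sum."""
    result, carry, place = 0, 0, 1
    while a or b or carry:
        d = (a % 10) + (b % 10) + carry
        carry = d // 10
        if place == 10:       # the deliberate bug: drop the tens carry
            carry = 0
        result += (d % 10) * place
        place *= 10
        a //= 10
        b //= 10
    return result

print(sloppy_add(27, 5))    # 32: no tens carry involved, so it's right
print(sloppy_add(170, 40))  # 110, not 210: the tens carry was dropped
```

It is still an algorithm in every formal sense; it just does arithmetic wrongly, and does so consistently.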

Why does algorithm sound like environmentally friendly birth control?
 
And another thing with this level business.

If I have Blue Brain running a simulation of a SPARC system running a simulation of a PA-RISC running a simulation of an Intel running a simulation of a PowerPC running a simulation of a DragonBall running a simulation of a Z80.

I duly run Pong on the Z80; then which processor is running Pong?
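For illustration only, here is a toy version of that stack, where each "processor" is just a function handing work to the layer below it. Nothing here resembles real Blue Brain, SPARC, or Z80 emulation; the instruction set and the "Pong" program are invented.

```python
def bare_metal(program):
    """The bottom layer: actually executes the instructions."""
    acc = 0
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

def make_emulator(inner):
    """An 'emulator' layer: it does nothing but pass each program
    to the layer below, as a software CPU defers to its host."""
    def run(program):
        return inner(program)
    return run

# Stack seven layers, one per processor in the chain above.
cpu = bare_metal
for _ in range(7):
    cpu = make_emulator(cpu)

pong = [("add", 2), ("mul", 3)]  # stand-in for Pong
print(cpu(pong))                 # 6
```

Every layer participates in executing the program, which is rather the point: "which processor is running Pong" has no unique answer.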
 
And here you are suggesting the same sort of thing -- that because drkitten makes arithmetic errors his mind might not be an algorithm.
I am saying nothing of the sort.

I did not even claim the mind is not an algorithm; I am simply questioning the claim that it can be proved to be an algorithm.

You are shifting the burden of proof.

PixyMisa and drkitten are saying that they can prove that the mind is an algorithm.

Part of drkitten's reasoning was that since we can do arithmetic then we are a system that behaves as the mathematics of information processing say it should.

I point out that when humans do arithmetic they do not behave as the MoIP say we should.

And then you say, "yes we do, because the mathematics of information processing say that whatever we do is how we should behave"

As I say, circular.

You cannot point to our ability to do arithmetic and say "therefore we are Turing equivalent". You must go a level further down and say "that system behaves as the MoIP say it should".

But your argument still seems to be: the MoIP says an algorithm does what it does, and the brain does what it does, therefore the brain is an algorithm.
 
And we could write an algorithm that would calculate arithmetic incorrectly.
So the proof that the mind is an algorithm is that we are a system that behaves the way the MoIP says it should and the MoIP says that a system should behave like it behaves.

The mind behaves like it behaves.

Therefore the mind is an algorithm?
 
Part of drkitten's reasoning was that since we can do arithmetic then we are a system that behaves as the mathematics of information processing say it should.

I point out that when humans do arithmetic they do not behave as the MoIP say we should.

But you are wrong -- there is nothing in computation theory that states that an algorithm that does arithmetic needs to compute the correct results, only that an algorithm that does arithmetic correctly needs to compute the correct results.

drkitten's reasoning is simpler than you are making it out to be.

1) A Turing machine can compute arithmetic correctly only because of its Turing equivalence (certain operations are required for generic arithmetic computation, and the Church-Turing thesis states that this set of operations is equivalent to a bunch of other stuff, etc).
2) A human brain can compute arithmetic correctly. The fact that we often do not is irrelevant -- we can on occasion.
3) Nothing is more powerful than a Turing machine.
4) Therefore the human brain is both at least as powerful as a Turing machine and no more powerful than a Turing machine. In other words, Turing equivalent.
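As a concrete anchor for step 1, here is a minimal Turing-style machine that adds two numbers in unary notation (e.g. "111+11" becomes "11111"). The state table is a standard textbook construction, not anything claimed in the thread.

```python
# Transition rules: (state, symbol) -> (symbol to write, head move, new state)
RULES = {
    ("scan", "1"):     ("1", +1, "scan"),      # walk right over the first number
    ("scan", "+"):     ("1", +1, "find_end"),  # turn the separator into a 1
    ("find_end", "1"): ("1", +1, "find_end"),  # walk right over the second number
    ("find_end", "_"): ("_", -1, "erase"),     # hit the blank past the end
    ("erase", "1"):    ("_", 0, "halt"),       # erase one surplus 1 and halt
}

def run_tm(tape_str):
    """Run the machine on a tape like '111+11'; blanks are '_'."""
    tape = dict(enumerate(tape_str))
    pos, state = 0, "scan"
    while state != "halt":
        sym = tape.get(pos, "_")
        new_sym, move, state = RULES[(state, sym)]
        tape[pos] = new_sym
        pos += move
    return "".join(tape.get(i, "_") for i in range(len(tape_str))).strip("_")

print(run_tm("111+11"))  # "11111", i.e. 3 + 2 = 5 in unary
```

The point of the sketch is only that "computes arithmetic" pins down a precise, formal class of machines, which is what the equivalence argument trades on.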
 
But your argument still seems to be: the MoIP says an algorithm does what it does, and the brain does what it does, therefore the brain is an algorithm.

No, I never made an argument. I was simply pointing out that your criticism of drkitten's argument was invalid, because nowhere does the mathematics state that an algorithm must compute arithmetic correctly.
 
Can it be demonstrated? Can we correlate consciousness to awake states? Can we measure brain activity and test a subject's ability to perform tasks? Can we descriptively define consciousness?
Of course we can, along the lines you describe, if we use a certain, very tightly circumscribed definition of consciousness. Merriam-Webster has a good one:

Main Entry: con·scious·ness
Pronunciation: \-nəs\
Function: noun
Date: 1629
1 a : the quality or state of being aware especially of something within oneself
b : the state or fact of being conscious of an external object, state, or fact
c : awareness; especially : concern for some social or political cause
2 : the state of being characterized by sensation, emotion, volition, and thought : mind
3 : the totality of conscious states of an individual
4 : the normal state of conscious life <regained consciousness>
5 : the upper level of mental life of which the person is aware as contrasted with unconscious processes

We would be best off sticking to 3, 4, and 5 (and maybe 2). Next, we examine research into the neural correlates of consciousness. Just to take one example, we might look at neural correlates of the first-person perspective. We can see from relevant research that "the brain regions involved in assigning first-person perspective comprise medial prefrontal, medial parietal and lateral temporoparietal cortex. These empirical findings complement recent neurobiologically oriented theories of self-consciousness which focus on the relation between the subject and his/her environment by supplying a neural basis for its key components" (Vogeley, K., & Fink, G. R. (2003). Neural correlates of the first-person perspective. Trends in Cognitive Sciences, 7(1), 38-42). In other words, we can find information about which areas of the brain are activated in first person and third person perspectives when performing various kinds of tasks, and how this changes in various pathological brain states, such as lesions in specific areas.

The problem I'm talking about can be illustrated pretty easily by what we find when we do a Google search on "definition of consciousness." Dear Zeus, what a mess. Eighty zillion different philosophical dogmas about what some kind of grand, vague, sweeping definition of "consciousness" supposedly is. This is not at all the same definition of consciousness as what we just saw above in Merriam-Webster.

There are many philosophical stances on consciousness, including: behaviorism, dualism, idealism, functionalism, reflexive monism, phenomenalism, phenomenology and intentionality, physicalism, emergentism, mysticism, personal identity etc.
Per Wikipedia... well, that's not exactly Merriam-Webster, now is it. Philosophical stances on consciousness are, at best, opinions about what various theories, facts, and the results of research should mean. Philosophical positions by definition must beg the question, or they couldn't stake out their positions. Vogeley and Fink's research has nothing to say about which particular philosophical position should be taken in regards to the neurological information gleaned from it, so the research doesn't beg the question.
And they are?

A lot of NDE research provides the best examples possible. It's actually kind of depressing to see it all divided so firmly along the lines of researchers' belief systems, because it would be so valuable to study the neurobiology of NDE's rather than focusing on whether or not they represent anything about "consciousness", and so few people are doing this. There was a long discussion about this exact issue in the D'Souza book review thread.

So things you disagree with are silly?

Randfan, you know better than to make that kind of logical error in an argument, and you know you do. Now come on. ;)
 
Westprog still has his work cut out for him. He still has to tell us what that missing thing is (he has said he doesn't believe in a ghost in the machine).

I didn't quite say that. I'm saying that the job of science is to look for a physical explanation of phenomena, and to assume that such an explanation exists.

I don't get quite what is meant by a "missing thing". I'm assuming that the likely physical explanation for consciousness is that it is, like the weather, a result of physical processes. The way to find how it works is to look at the physical processes.
 
ETA... well, it'll have to be a new post... because Dennett is a scientist as well as a philosopher, my criticism about what I think that scientists should be doing with their time and energy does apply to him. Basically, I'm just not impressed by the fact that his debates have consisted so overwhelmingly of let's-go-shoot-fish-in-a-barrel expeditions. If you can make Dinesh D'Souza look like an idiot, what kind of accomplishment is that? How astonishing would he look if he were up against an opponent who said, "Oh, yes, you're right. Of course consciousness arises from the brain. Now what? Do you have anything else substantive to say?" For more detailed opinions on Dennett, see the "human spirit" thread. More updates as time allows, but that will probably have to wait for Monday-- three thirteen-hour days are coming up!! And speaking of consciousness, I'll be working with Alzheimer's patients the entire time... ;)
 
So the proof that the mind is an algorithm is that we are a system that behaves the way the MoIP says it should and the MoIP says that a system should behave like it behaves.

The mind behaves like it behaves.

Therefore the mind is an algorithm?
Not my argument. If I can write an algorithm that can replicate human error then any argument that posits that human error falsifies the mind as an algorithm is false.
 
But you are wrong -- there is nothing in computation theory that states that an algorithm that does arithmetic needs to compute the correct results, only that an algorithm that does arithmetic correctly needs to compute the correct results.

drkitten's reasoning is simpler than you are making it out to be.
I was using his/her own words, and the version you present is more complex than the way I presented it, so "simpler"?

So here are his/her own words:
Any physical object that processes information behaves as the mathematics of information processing describes -- by definition.

So where does that fit in in your rendition of the argument?
 
