
My take on why the study of consciousness may not be as simple as it seems

AkuManiMani said:
Don't look astounded. This is something that Pixy genuinely believes. And he's absolutely right...provided you're using his own personal definition of consciousness :rolleyes:

Well, if you've got a better definition of consciousness that you would prefer to use, you're welcome to present it. I bet Pixy will take fewer than ten lines to shred your definition beyond repair.

I'm going by the common definition of the word, as can be found in the English dictionary:

con⋅scious⋅ness [kon-shuhs-nis]
–noun

[...]

7. Philosophy. the mind or the mental faculties as characterized by thought, feelings, and volition.
 
Are you claiming that it isn't?
Personally I think that all that will happen is that some numbers will be written down on bits of paper that will be the same as the numbers a computer produces.

I see not even a scintilla of evidence, argument, or proof for the claim, and drkitten pretending that this claim was never made and that I am trying to claim that Strong AI is magic.

And you and drkitten getting tied in knots about what the basis of this proof will entail and whether everything is an algorithm, or algorithmic or whatever you finally decide it is.
If you have something more than the argument from personal incredulity to dispute this, by all means present it.
I am not making the claim; you are. I don't have to prove you are wrong; you have to prove that you are right.

All I know is that numbers will be written down on paper that will be the same as if they had been produced by a computer.

That is all. No real human consciousness. Just numbers on paper.
 
I don't necessarily find your words to be important.

Don't see how you can either when you can't even tell if this is a simulation or not.

So according to you, if we are in a simulation, then nothing is important anymore? Might as well throw in the towel, eh, since nothing is "real?"
 
All I know is that numbers will be written down on paper that will be the same as if they had been produced by a computer.

That is all. No real human consciousness. Just numbers on paper.

System and virtual mind replies: finding the mind
Systems reply. The "systems reply" argues that it is the whole system, consisting of the room, the book, the man, the paper, the pencil and the filing cabinets, that understands Chinese and experiences consciousness.
 
So I have no problems whatsoever that a conscious state lasting a billion years might seem like a half a second and I never said I did.

Well, you made a post in the form of

1) <assertion that sounds absurd at face value>

2) <query about who believes the assertion>

and included the "billions of years" in the <assertion that sounds absurd at face value>.

If you do not think there is anything strange about a billion year conscious state that seems like half a second to the consciousness, then why did you include the notion in that statement?

But it has nothing to do with relativity.

It doesn't need to, but it can.
 
All I know is that numbers will be written down on paper that will be the same as if they had been produced by a computer.

That is all. No real human consciousness. Just numbers on paper.

Speed, complexity and other minds: appeals to intuition
Speed and complexity replies. The speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions to write down all of the numbers. This brings the clarity of Searle's [Robin's] intuition into doubt.
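For concreteness, a back-of-the-envelope sketch in Python, taking the 100,000,000,000 operations-per-second figure above and assuming (purely for illustration) a clerk who works through one operation every ten seconds:

```python
# Rough arithmetic for the "speed reply". Both rates are assumptions:
# the 1e11 figure is the estimate quoted above; the clerk's pace is
# invented for illustration.
BRAIN_OPS_PER_SECOND = 1e11
CLERK_SECONDS_PER_OP = 10
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Cost of working through one simulated second of brain activity by hand:
ops = BRAIN_OPS_PER_SECOND * 1.0
years = ops * CLERK_SECONDS_PER_OP / SECONDS_PER_YEAR
print(f"{years:.1e} years per simulated second")  # ~3.2e4 years
```

At that assumed pace, a single simulated second costs tens of thousands of years of desk work; a question that takes the brain several seconds, or a slower clerk, pushes the total toward the "millions of years" quoted above.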
 
I was using his/her own words, and the version you present is more complex than the way I presented it, so "simpler"?

My version has more statements than the verbatim one, but clearly it gives you less leeway in interpretation. And that makes it simpler because for some reason your interpretation of drkitten's words is confusing you.

I mean I hate to appeal to authority but everyone with an education in computation theory on this thread understands exactly what drkitten and pixy are saying here -- this really is an instance of you misunderstanding what they are writing.

So where does that fit in in your rendition of the argument?

It fits in perfectly?
 
And I never suggested there was. Not once.

But drkitten suggested that any system that processes information behaves the way the MoIP says it should.

But if "the way the MoIP says it should" means any behaviour of which the system is capable, then his point is only trivially true.

I am not sure why you don't understand that point.

I do understand it. Perfectly. But then there is what drkitten has responded with -- that the MoIP actually tells us quite a bit.

What you don't understand is my point, which is that even though the MoIP explains everything about an algorithm, it does not follow that we will know everything about an algorithm. We can be wrong!

By the way, do you think that in general, it is possible for an algorithm to run on a non-algorithmic system?

I am not sure what you are asking, can you clarify?
 
Rocketdodger says that it doesn't. How does the MoIP limit the capabilities of a system?

Err, that is not quite true -- I didn't say that it doesn't. I said that MoIP states that an algorithm will do what an algorithm will do.
 
So according to you, if we are in a simulation, then nothing is important anymore? Might as well throw in the towel, eh, since nothing is "real?"

You claim an arbitrary "what is real/what isn't real (a simulation)" distinction to be the only "what is real/what isn't real" distinction that is a true (or non-arbitrary) one.

I just don't understand how this can be true (or non-arbitrary) when the distinction between what is real and what isn't real, after all... is arbitrary.
 
Personally I think that all that will happen is that some numbers will be written down on bits of paper that will be the same as the numbers a computer produces.
And since all methods of computation are equivalent, consciousness will necessarily result.

I see not even a scintilla of evidence, argument, or proof for the claim, and drkitten pretending that this claim was never made and that I am trying to claim that Strong AI is magic.
So what do you claim is the third option?

It's either working via the laws of physics, and a computational simulation will result in the same effects, or it's magic, and it won't.

Where is the third alternative? And don't bring up the simulated oranges nonsense; that's a category error and we've been over it a hundred times.

And you and drkitten getting tied in knots about what the basis of this proof will entail and whether everything is an algorithm, or algorithmic or whatever you finally decide it is.
No, that's just you.

I am not making the claim; you are. I don't have to prove you are wrong; you have to prove that you are right.
It's already established that either we're right or the Universe is logically inconsistent.

All I know is that numbers will be written down on paper that will be the same as if they had been produced by a computer.
And since those numbers represent the operations of a human brain, the results will necessarily be the same as the operations of a human brain.

This is unavoidable.

That is all. No real human consciousness. Just numbers on paper.
Sorry, no. If the numbers represent a model of the human brain, then when you run the model, you get a human mind.

That you find this somehow improbable changes the facts not in the least. We run models of this sort all the time. Human consciousness is no different, except in scale, from a rat neocortex, and we've done that.

Either we can simulate it with a computer, and get the same effects, or it's magic. Which?
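For a sense of what "running a model of this sort" means at the smallest scale, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the standard simplified neuron models. The parameters are illustrative assumptions, not taken from any real simulation, and whether scaling this up yields a mind is exactly what the thread is disputing; the sketch only shows that the model itself is ordinary arithmetic.

```python
# One leaky integrate-and-fire neuron: the "numbers on paper" are
# state updates like this, repeated at scale. Parameters are
# illustrative, not fitted to a real neuron.
def step(v, i_in, dt=1e-3, tau=0.02, v_rest=-70e-3,
         v_thresh=-50e-3, v_reset=-70e-3, r=1e7):
    """Advance the membrane potential v by one time step dt."""
    v += dt / tau * (v_rest - v + r * i_in)  # leak toward rest + input
    if v >= v_thresh:                        # threshold crossed:
        return v_reset, True                 # spike, then reset
    return v, False

v, spikes = -70e-3, 0
for _ in range(1000):                        # one simulated second
    v, spiked = step(v, i_in=2.5e-9)         # constant input current
    spikes += spiked
print(spikes, "spikes")
```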
 
I'm going by the common definition of the word, as can be found in the English dictionary:
I notice that you chose the seventh of eight definitions, when the very first definition corresponds directly to mine (though mine is more precise).

Why did you do that?
 
Err, that is not quite true -- I didn't say that it doesn't. I said that MoIP states that an algorithm will do what an algorithm will do.
Indeed. The halting problem raises its scaly head again.

Robin, you cannot, in general, know what an algorithm will do without running the algorithm.
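The standard diagonal argument behind that fits in a few lines. `halts` below is a hypothetical oracle, named only for the sketch; the point is that no correct, always-terminating version of it can exist.

```python
def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) halts.
    No correct total implementation can exist; stubbed for the sketch."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts.
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    return "done"     # oracle said "loops", so halt

# paradox(paradox) halts exactly when halts() says it doesn't, a
# contradiction (Turing, 1936). Hence, in general, you find out what
# an algorithm does by running it.
```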
 
I don't understand exactly what is being claimed in the reverse-order desk check case.

Let's take this scenario. Suppose I wrote a program, call it N, that iterates constantly from (00) to (01) to (10) to (11) and back to (00). At each step, this program calculates the NAND of the arguments--that is, (00)->1, (01)->1, (10)->1, (11)->0 (I know everyone knows what NAND is, but here I'm just trying to lay down notation).

Given that I can build a machine A' using NAND gates equivalent to any machine A you build; that out of order desk check styled operations are allowed; and that every NAND gate in A' can be mapped, out of order, directly to a calculation in N that yields the same results; then would N be producing experiences, in an "out of order" conscious entity? (e.g., is it possible that the last 15 seconds of my conscious experience was really out of order calculations produced by such an N machine in some "scrambled" order?)

If so, why?

If not, why not? (What would be the significant requisite factor that makes N different from A'?)
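The premise that a NAND-only machine A' can match any machine A rests on NAND being functionally complete. A minimal sketch of the standard constructions (nothing here is specific to this thread's A or A'):

```python
def nand(a, b):
    """The one primitive: 1 unless both inputs are 1."""
    return 1 - (a & b)

# The usual derivations of the other gates from NAND alone:
def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor_(a, b):  # the four-NAND XOR
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

# Desk-check the four input rows, in whatever order you like:
for a, b in [(1, 1), (0, 0), (1, 0), (0, 1)]:
    print(a, b, "->", xor_(a, b))
```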
 
What I'm getting at is that it isn't either/or, but rather that regardless of the limit of the z80 user's ability to "perceive" the BB, reality eventually hands you the whole package, given a little more info is available to the programmer.
Or not. There are some things that we simply cannot know - Gödel's Incompleteness Theorem being the perfect example.

In real life of course, the rest of the answer is available, and the z80 is seen for what it is.
Not always, no.

that is not to say I'm saying there are no limits, I don't know..... but we see the BB perhaps, or at least some of the emulators......?
Again, you might, or you might not. If you just have a screen and a keyboard that runs CP/M commands, that's all you know. That's how Real Computers(TM) work, in fact. You are presented with a virtualised slice of the computer's resources, and you can't see anything beyond that; you can't even know if there is anything beyond that.
 
I don't understand exactly what is being claimed in the reverse-order desk check case.
Yeah, it's not very clearly defined.

Assuming that the neurons in your brain make a certain sequence of state transitions in computing two seconds of your conscious mind, if we run the same set of transitions backwards, what happens?

Well, we start with all your memories of those two seconds in place, and we end up with them all gone. At any point in the process, the paper-you remembers things in the original chronological order. Since the calculations, and hence the results, are the same but reversed, you have to be unaware of the external order of processing, the time taken, the mechanism involved, and so on.

So if you run the simulation backwards, the simulated-you will experience those two seconds forwards - as far as it is concerned. Forwards, backwards, or in completely random order, it doesn't matter, as long as you don't change the results.
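That order-independence claim can be sketched directly, with an arbitrary deterministic update rule standing in for the brain simulation. Once the forward trace exists, each row can be rechecked from its recorded input in any order at all, and nothing in the trace can reflect the order chosen.

```python
import random

def transition(state):
    # Stand-in for one step of the simulation: any fixed,
    # deterministic update rule serves the argument.
    return (state * 1103515245 + 12345) % 2**31

# Forward run: record every (input, output) pair.
trace, s = [], 42
for _ in range(1000):
    t = transition(s)
    trace.append((s, t))
    s = t

# Desk-check the same rows reversed, or in shuffled order. Each row
# is recomputed from its recorded input, so the results cannot
# differ, and the trace is blind to the order of checking.
rows = list(trace)
random.shuffle(rows)              # or rows.reverse()
assert all(transition(a) == b for a, b in rows)
```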
 
So if you run the simulation backwards, the simulated-you will experience those two seconds forwards - as far as it is concerned. Forwards, backwards, or in completely random order, it doesn't matter, as long as you don't change the results.
Well I understand that part, but there's something missing still in going down to N--either an explicit claim that N would produce my consciousness, or an explanation of something left out.

Assume that A is your simulation of a neuron's calculations--include whatever inputs happen to come in whenever/wherever they occur (just picking this for discussion--we could easily aggregate these... consider it a functional decomposition style analysis).

Now I build A', which is an equivalent simulation, using only clocked NAND's. So I want to use A' to define the "random" ordering, as follows. I have t NAND gates total. I will label all of them uniquely with integer numbers, from 1 to t. What is significant here is that I defined an ordering (let's further assume they have labeled inputs, and we always represent them in a certain order, so that we know 01 from 10, in case that matters).

During each clock cycle in A', p of my t NAND gates are computing 00, q are computing 01, r are computing 10, and s are computing 11. So let P1 through Pp be the NAND gates calculating 00, in order, Q1 through Qq be the ones calculating 01, in order, and so forth. For the next clock cycle, we do the same thing, but start with Pp+1, Qq+1, etc.

When we're done, I want to calculate the NAND gates in A', in this arbitrary order:
P1, Q1, R1, S1, P2, Q2, R2, S2, etc.

Once I hit the end of one of the P, Q, R, and S sequences, while I still have the other three to calculate, I'm just going to start running some other "program" in the background in a random order (effectively I want to just fill it in, so that I keep iterating).

Now I wind up with N, which is a program that's just calculating, in this order, (00)->1, (01)->1, (10)->1, (11)->0, and going back to start. So are you claiming that this program implements an out of order equivalent A? If we keep running it, will it produce a conscious mind--albeit an out of order one?
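For what it's worth, one clock cycle of that construction is easy to sketch, with random stand-in inputs for the gates: bucket the gate indices by input pattern, interleave P1, Q1, R1, S1, ..., and the outputs are indistinguishable from in-order evaluation. What the sketch cannot settle, of course, is the question actually being asked, namely whether such a reordered N "produces experiences".

```python
import random
from itertools import zip_longest

def nand(a, b):
    return 1 - (a & b)

# One clock cycle of A': t labeled gates, each with a known input
# pair. Random inputs stand in for whatever A' is computing.
t = 16
inputs = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(t)]

# In-order evaluation, gate 1..t:
in_order = [nand(a, b) for a, b in inputs]

# Bucket gate indices by input pattern: P=00, Q=01, R=10, S=11.
buckets = {(0, 0): [], (0, 1): [], (1, 0): [], (1, 1): []}
for idx, pair in enumerate(inputs):
    buckets[pair].append(idx)

# Interleave P1, Q1, R1, S1, P2, Q2, ... and evaluate in that order.
out_of_order = [None] * t
for group in zip_longest(*buckets.values()):
    for idx in group:
        if idx is not None:
            a, b = inputs[idx]
            out_of_order[idx] = nand(a, b)

assert out_of_order == in_order   # the ordering leaves no trace
```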
 