
My take on why the study of consciousness may indeed not be so simple:

Robin said:
And the comment I made that PixyMisa took such passionate exception to was that I would wait until the results were in before making my mind up.

Something wrong with that?
Nope.

Who said it wasn't? What you are suggesting is that evaluating a lot of sums in a particular order will lead to our conscious experience. But you haven't said how.
Saying how means explaining exactly how it works. I have no idea.

So what is the mechanism then? What does it change when I add two numbers together and find the result? Wasn't the result the case before I added them?
Wasn't the electrochemical state of my brain likewise the case before it became the current state? Consciousness is the process, not the state.

Not at all, you are shifting again.

The argument is that the mind must be an algorithm, that there is no alternative and that anybody who doesn't agree wholeheartedly with every single consequence of that must believe in magic unicorns.
What's the difference between the mind and the brain?

All I am saying is that the Church-Turing Thesis says that if some process P can do a computation on natural numbers, then there is a Turing machine that can also do that computation, and that the Turing machine is equivalent to the method that P uses to do the calculation.

It does not imply that a Turing machine is necessarily equivalent to P itself.

A category error, as everybody is so suddenly fond of saying.
I don't think that the Church-Turing thesis talks specifically about computations on natural numbers, but instead about algorithms. But anyway, please clarify. What is the difference between being equivalent to the method used by a process and being equivalent to the process?
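
To make the method/process distinction concrete, here is a toy sketch in Python (my own example, not Robin's or Paul's): two processes that compute the same function on natural numbers. Extensionally they are interchangeable, and a single Turing machine captures what either one computes; as processes, though, they do quite different things.

```python
def add_fast(a: int, b: int) -> int:
    """Compute a + b in one arithmetic step."""
    return a + b

def add_slow(a: int, b: int) -> int:
    """Compute a + b by counting up one unit at a time."""
    total = a
    for _ in range(b):
        total += 1
    return total

# Same function on the naturals, different methods of getting there.
assert all(add_fast(a, b) == add_slow(a, b)
           for a in range(20) for b in range(20))
```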

Well it is certainly non-intuitive that this unified conscious experience could be the result of millions of devices completely isolated from each other doing sums.
Wait a minute. Why are the devices isolated from each other?

3. You have suggested no mechanism as to why evaluating a lot of sums should lead to the conscious experience I have.
I admit that I cannot provide the entire story. That's what neuroscience is for. But I don't understand why you see any difference between a process leading to consciousness and one leading to telling the time on the face of a clock, especially if you are not arguing that phenomenal experience is a nonmechanistic process.

~~ Paul
 
Robin said:
*Sigh* Processes involving non-discrete quantities.

They are not, by definition, algorithms.

It is possible for a non-algorithmic process to implement an algorithm.

If this were not the case, then everything would be an algorithm.

As I said before, if that is the case then why not just say so?
Yes, right. That is why I am asking for an example of a process involving nondiscrete quantities (= nonalgorithmic process) that might be necessary for consciousness.

Or are you trying to say that the brain might be nonalgorithmic but only include algorithmic processes?

~~ Paul
 
I'd love to know what computer you're using. It sounds very different to any that I've used.
That's because you don't know anything about how computers work. These days you don't need to, and most people don't.

This in itself is not a problem. But repeatedly ignoring the examples of exactly what you claim cannot exist does you little credit.
 
Robin said:
And since the result of any calculation was the case even before the calculation was made, then the result of all calculations made by the algorithm would have been the case before the algorithm was even run.
This is the "running a program is simply proving a theorem" argument. But clearly there is some utility in actually running computer programs, or people wouldn't bother. So how can we describe concisely why it is that people run programs?

~~ Paul
 
I know that. I was simply stating -- and you seem to have missed that -- that some things that are simulated operate precisely like the real thing. I'm saying that consciousness could be like that.

It's one thing to say that a simulation behaves like the real thing; it's quite another to claim that it IS the real thing -- which is exactly what PixyMisa has been doing.
 
Yes, right. That is why I am asking for an example of a process involving nondiscrete quantities (= nonalgorithmic process) that might be necessary for consciousness.
And also why there would be a difference between an arbitrarily close discrete simulation and the continuous process. (Leaving aside the physical issue of whether arbitrarily fine subdivision of space, time, and mass is meaningful.)
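
For what an "arbitrarily close discrete simulation" of a continuous process looks like, here is a minimal Python sketch (mine; the decay equation is just a stand-in for any continuous dynamics): as the discrete step shrinks, the simulation approaches the continuous solution as closely as you like.

```python
import math

def simulate_decay(k: float, t: float, steps: int) -> float:
    """Discrete (Euler) simulation of the continuous process x' = -k*x, x(0) = 1."""
    x, dt = 1.0, t / steps
    for _ in range(steps):
        x -= k * x * dt
    return x

exact = math.exp(-1.0)  # the continuous solution at k = 1, t = 1
for steps in (10, 100, 1000, 10000):
    print(steps, abs(simulate_decay(1.0, 1.0, steps) - exact))
# The error shrinks steadily as the discretisation gets finer.
```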
 
But it can't carry out conscious operations without being conscious.

No kidding, Sherlock. However I can't help but notice that you've avoided answering some rather simple questions I've asked you. What gives?

But repeatedly ignoring the examples of exactly what you claim cannot exist does you little credit.

Like self-referencing systems that aren't conscious? :rolleyes:
 
AkuManiMani said:
PixyMisa. He emphatically believes computer simulations are identical to the phenomena they're simulating. A simulation of photosynthesis actually fixes carbon, according to Pixy. A simulation of the solar system produces actual gravitational effects, according to Pixy. He seems to be under the impression that simply calculating aspects of a physical phenomenon on a Turing machine is the same as reproducing the phenomenon; i.e., he believes that representation = reproduction.
Pixy said that a simulation of photosynthesis actually fixes real carbon? Wow, that would be bizarre. Where did he say that?

What he said is that it fixes simulated carbon within the simulation. Indeed, this does not help a real plant to thrive.

The argument being made is that consciousness is more like mathematics, in that a careful simulation of the brain would actually produce a simulated consciousness that is equivalent to a real consciousness. That is, consciousness is different from photosynthesis, because it is a purely computational thing. An example of something like it is money: banks simulate the interchange of money with computers, even though no real cash actually moves around.
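
To make the money analogy concrete, here is a toy ledger in Python (hypothetical, purely illustrative): "moving money" between accounts is nothing but updating numbers, yet the money that moves is real money, not simulated money.

```python
ledger = {"alice": 100, "bob": 0}

def transfer(src: str, dst: str, amount: int) -> None:
    """Move money by doing nothing except arithmetic on the ledger."""
    if ledger[src] < amount:
        raise ValueError("insufficient funds")
    ledger[src] -= amount
    ledger[dst] += amount

transfer("alice", "bob", 40)
print(ledger)  # {'alice': 60, 'bob': 40} -- the 40 really changed hands
```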

So the question is this: Is there some aspect of real consciousness that would escape a careful simulation on a computer? If people think so, it would be cool to get a description of what that aspect might be (something more than "it might be randomness"). Of course I will stipulate that the inability to give an example does not mean that there is no such aspect.

Observation: If you don't agree that all brain functions are entirely mechanistic, then all bets are off.

~~ Paul
 
I'm asking you for a summary by you. What, in your opinion, is so different; and why isn't it simply a matter of degree?
Don't know what Westprog is referring to, but there is a real difference in approach.

Human players look at a few dozen possible moves; for each move they evaluate the resulting pattern by looking it up in what amounts to an associative array. That is, they maintain a large but generalised mental database of possible positions.

Most (almost all) modern computer chess programs use a look-ahead search algorithm, evaluating all possible moves as far ahead as possible (given the time and processing power allotted), calculating the value of each potential position and pruning branches of the search tree that result in a clear disadvantage. Other approaches have been tried, but so much computing power is available so readily today that this largely brute-force method has won out.
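
For readers who haven't seen it, here is the core of that look-ahead-and-prune approach as a Python sketch (mine; `moves` and `value` are hypothetical stand-ins for a real move generator and position evaluator, not anything from an actual chess engine):

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, value):
    """Depth-limited look-ahead search with alpha-beta pruning."""
    children = moves(state)
    if depth == 0 or not children:
        return value(state)          # evaluate the leaf position
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, moves, value))
            alpha = max(alpha, best)
            if alpha >= beta:        # this branch is clearly worse than an
                break                # alternative already found: prune it
        return best
    best = float("inf")
    for child in children:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, moves, value))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```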
 
No kidding, Sherlock. However I can't help but notice that you've avoided answering some rather simple questions I've asked you. What gives?
What gives is that I've put you on ignore. I'm sure if you were to say something apposite - or even coherent - it would bubble up in a quote somewhere.

Unfortunately, Paul tends to lose the links when he quotes people. So I looked at this comment and...

Like self-referencing systems that aren't conscious? :rolleyes:
Looks like I won't need to reply to you any time soon.
 
Pixy said that a simulation of photosynthesis actually fixes real carbon? Wow, that would be bizarre. Where did he say that?
Apparently this was a simulated Pixy located deep within AkuManiMani's febrile imaginarium. Or else he just never bothered to actually read any of my posts.

What he said is that it fixes simulated carbon within the simulation.
Indeed, I put it in italics each and every time. My fault, it wasn't in BOLD RED ALL-CAPS 40-POINT.

Not that that typographic approach has been exactly a rip-roaring success either.

So the question is this: Is there some aspect of real consciousness that would escape a careful simulation on a computer? If people think so, it would be cool to get a description of what that aspect might be (something more than "it might be randomness"). Of course I will stipulate that the inability to give an example does not mean that there is no such aspect.
"Mysterious" and "qualia" are also not considered in themselves to be valid objections.
 
Could you elaborate, please? I would genuinely like to know.

The computer used a combination of simple raw brute force and a slew of clever heuristics that its programmers threw in from observing master chess matches.

In other words, the algorithm simply modeled all possible subsequent states of the game beyond an upcoming move and chose whichever move its metric for evaluating future states considered best. I don't recall the exact number, but the computing power available allowed it to model something like 30+ moves ahead.

Humans, though, apparently categorize board states -- and more importantly series of board states, and the relationships between them -- in different phases (as in broadphase vs. narrowphase); in other words, we get a "bigger picture" of each board state. This lets us model the future of the game with much less accuracy but also much less computation. And at the time when Kasparov was pretty much equal with Deep Blue, this cheaper, coarser approach apparently allowed him to look just far enough ahead to match Deep Blue's ability to look ahead with perfect accuracy.

Of course there is lots of other stuff going on, but this was the big take home lesson.

I would also like to add that even though the algorithm being used was brute force, Kasparov has commented on feeling an alien intelligence behind the machine, based on how clever some of its moves seemed to him.
 
No, why do you ask?

I am one of those people who think that evaluating an arithmetic expression helps you find the answer and nothing else.

Because you seem to be ignoring the fact that any time you evaluate an arithmetic expression, what is actually happening is that physical particles are behaving in a certain way.

Is the first time you evaluated 1 + 1 different from any other time? Yes and No.

Yes, in that the particles involved were different.

No, in that every instance was isomorphic to the other -- hence the term "instance."

But this does not imply that somehow the expression exists independently of any instances. There is no abstract world of abstract entities -- that ain't physicalism. The abstract is merely an equivalence class of real physical instances.
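
A tiny Python illustration of that equivalence-class idea (my example): three physically different instances of "evaluating 1 + 1," isomorphic in their result.

```python
a = 1 + 1                 # integer addition
b = 1 << 1                # a left bit-shift
c = len({"x"} | {"y"})    # the cardinality of a set union

# Different physical processes in the machine, one equivalence class:
assert a == b == c == 2
```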
 
I know that. I was simply stating -- and you seem to have missed that -- that some things that are simulated operate precisely like the real thing. I'm saying that consciousness could be like that.

Go even further: how can you simulate consciousness without producing consciousness? Does an unconscious consciousness simulation make sense?

When we simulate a tree, we're just making a program based on the behaviours and physical properties of a tree, as we observe them. But what are the behaviours and physical properties of consciousness? I can't divide my consciousness. It's not made of anything.

We also can't observe another's consciousness. We rely solely on our own consciousness to tell us what consciousness is. I assume other people are conscious, but I have no way of knowing how they really experience the world, or if they're even conscious at all. Their behavior when the top light of a traffic light is lit doesn't tell me anything about how they perceive that color. Maybe "red" to them actually looks "blue" to me, or some color I can't even imagine. No matter how sophisticated a simulation of consciousness is, without the ability to observe another's consciousness, it will always be impossible to tell if the simulation is truly conscious or not.
 
No, electrical capacity is not any more physical than switching.

You just have to think of a physical definition that works -- and I did. The fact that westprog won't have it is irrelevant.

A functional description is always less descriptive than a physical description. That's what makes it useful. A functional description is an excellent way to extract whatever property of a system interests us, and apply it to different physical situations.

It's precisely this ability to discard irrelevant information that makes a functional description entirely useless in predicting which side-effects might possibly occur in both systems.
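
A small Python sketch of that point (mine, with hypothetical names): two physically very different systems satisfying one functional description. The description captures what they share and, by the same token, says nothing about their differing side effects (heat, disk wear, timing).

```python
from typing import Protocol

class Store(Protocol):
    """The functional description: what a store does, not what it is."""
    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> str: ...

class RamStore:
    """Physical realisation 1: bits held in memory."""
    def __init__(self) -> None:
        self.d: dict[str, str] = {}
    def put(self, key: str, value: str) -> None:
        self.d[key] = value
    def get(self, key: str) -> str:
        return self.d[key]

class FileStore:
    """Physical realisation 2: magnetised regions on a disk."""
    def __init__(self, path: str) -> None:
        self.path = path
    def put(self, key: str, value: str) -> None:
        with open(self.path, "a") as f:
            f.write(f"{key}={value}\n")
    def get(self, key: str) -> str:
        with open(self.path) as f:
            return dict(line.strip().split("=", 1) for line in f)[key]
```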
 
The computer used a combination of simple raw brute force and a slew of clever heuristics that its programmers threw in from observing master chess matches. [...] Kasparov has commented on feeling an alien intelligence behind the machine, based on how clever some of its moves seemed to him.

Humans think Oh **** when they make a bad move. Computers don't.
 
PixyMisa for a start, Paul for a second.

A simulation is a mathematical representation.

Ah, right, my brain quit for a second.

I meant to express the sentiment that a mathematical representation is not what you think it is.

A mathematical representation is merely a physical system that is isomorphic in behavior to another physical system in such a way that allows an intelligent entity (in this case, us) to model some aspect of said behavior.
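
A minimal Python sketch of "isomorphic in behavior" (my example; the drag-free physics and constants are simplifying assumptions): the numbers below evolve in lockstep with the heights of a real dropped object, and that step-for-step mapping is what makes the program a representation of the system.

```python
g, dt = 9.81, 0.1          # gravity (m/s^2) and time step (s)
h, v = 100.0, 0.0          # height (m) and downward speed (m/s)
while h > 0:
    v += g * dt            # each iteration mirrors one slice of the fall
    h -= v * dt
# The machine's successive states map one-to-one onto the object's states;
# nothing here falls, yet the behavior is isomorphic to falling.
```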
 
So consciousness is non-temporal?

A computer can produce consciousness with a simulation, but if you run it a second time with precisely the same values it won't create consciousness, just relate in some mysterious way back to the consciousness from the first run?

Again - what is the mechanism to produce anything except for numbers?

This is a concept of consciousness which is entirely independent of the physical world, of any sense of time - it's taking place in its own untouchable world.

This is a theory that becomes more and more mystical the more it's looked at. If the algorithm is run a billion light years away, then that somehow tells the next occurrence here that it needn't produce consciousness. Unless one bit changes. A change of one bit makes it an entirely new consciousness, but running it a trillion times slower doesn't make any difference.

This is all fun, but it's entirely arbitrary.
 
