My take on why the study of consciousness may indeed not be so simple.

An algorithm, carried out on any system, cannot do anything.
I see.

So how, exactly, did you post the post that you just posted? I believe that an algorithm may very well have been involved.

All that happens is that each step is evaluated one by one. Whatever is processing each step does so in complete isolation from the last step.
Uh, no. No. No. In fact, no. Also, no. In fact, I might go so far as to say no. Some might even say no. Or to put it more succinctly, no.

The "algorithm" is not examining anything. It is just sitting there in storage.
Are you making a distinction between an instantiation of an algorithm and a representation of an algorithm, or are you just making stuff up?

It seems trivially true to me.

Are you telling me that algorithms are not run step by step?
Apparently you're just making stuff up.

Yes, algorithms are run step by step. No, this in no way means, nor can it in any way mean, that each step is isolated from the others.

Are you telling me that when a particular step is evaluated, the processor goes and has a look at the whole algorithm just to check what is going on?
No, nor does it need to. But any given step of the algorithm can inspect or modify any other given step of the algorithm.

An algorithm is not a thing that can inspect anything. It is just a bunch of numbers.
Okay, again I have to ask you if you are making a distinction between an instantiation of an algorithm and a representation of an algorithm, or just making stuff up?

The CPU does not know what has come before when it processes an instruction.
No, but it can find out. Using... An algorithm!

But the data that the processor is processing at any particular time does not have all of that logic encoded into it.
No, but it doesn't need to. On any Turing-complete system, the algorithm can examine and modify its own steps.

If I add 4+2 and get 6, by the time the next instruction comes along the "4+2" cannot be deduced from the 6. The step sees 6 and nothing else.
Not to put too fine a point on it, wrong. The step can see whatever it is that the algorithm requires the step to see, assuming that the things it is supposed to see can be expressed algorithmically. This includes the previous step of the algorithm. Or the subsequent step. Or any other step.
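To make that concrete, here's a minimal sketch in Python (the names and structure are my own invention, purely for illustration): an algorithm that records each step as it goes, so a later step can recover the "4+2" behind the 6.

trace = []

def step(op, a, b):
    # each step leaves a record in the algorithm's own state
    result = a + b if op == "add" else a - b
    trace.append((op, a, b, result))
    return result

x = step("add", 4, 2)    # step 1: 4+2 = 6
y = step("add", x, 10)   # step 2: consumes the 6

# step 3: inspect an earlier step - entirely algorithmic
op, a, b, result = trace[0]
print(f"step 1 was {op}({a}, {b}) = {result}")   # step 1 was add(4, 2) = 6

The 6 on its own doesn't encode the 4+2, true - but nothing stops the algorithm from keeping that information around, because the trace is just more data.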

Because algorithms can't do things.
Are you making a distinction... I expect you know the drill by now.

They are a bunch of numbers, that is all.
Yes, they can be expressed that way. See Gödel's Incompleteness Theorem for what you can do with a bunch of numbers.

They are processed step by step.
Certainly.

One step at a time, the only information that the processor considers is the particular numbers it needs to process.
And since the algorithm itself can be expressed as a bunch of numbers, the algorithm can examine and modify itself.
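A toy illustration of that (the encoding is made up for this post, nothing standard): the "program" below is literally a list of numbers, and its first step overwrites part of its second step.

# opcode 2: overwrite program[a] with b; opcode 1: print a + b; opcode 0: halt
program = [2, 4, 100,   # step 1: change the "4" in the next instruction to 100
           1, 4, 2,     # step 2: would have printed 4+2=6; now prints 102
           0]           # step 3: halt

pc = 0
while program[pc] != 0:
    op, a, b = program[pc], program[pc + 1], program[pc + 2]
    if op == 1:
        print(a + b)        # -> 102, not 6
    elif op == 2:
        program[a] = b      # the bunch of numbers rewrites itself
    pc += 3

Run it and it prints 102: a bunch of numbers examined and modified a bunch of numbers.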

If you are describing something that can inspect itself or do anything whatsoever you are not describing an algorithm.
Yes. Yes it is. That is what it is. An algorithm is what it is. What it is is an algorithm.

Go back to the Church-Turing thesis for a moment. Any algorithm can be implemented on a Turing Machine. And anything that can be implemented on a Turing Machine is an algorithm. A Turing Machine can modify its own operation. Therefore...
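For anyone who hasn't played with one, here is a minimal Turing machine simulator sketched in Python (the rule-table format is my own; any equivalent encoding works). The machine is nothing but a lookup table plus a tape, and per Church-Turing that is already enough to run any algorithm.

# transition table: (state, symbol) -> (symbol to write, head move, next state)
# example machine: flip every bit on the tape, then halt
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # "_" is the blank symbol
}

def run(tape):
    tape, state, head = list(tape), "scan", 0
    while state != "halt":
        if head >= len(tape):
            tape.append("_")            # the tape is unbounded on the right
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("10110"))   # -> 01001

And since a universal machine keeps the description of the machine it simulates on that same rewritable tape, a program running on it can rewrite its own steps. Hence the "Therefore..." above.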

An algorithm does not do anything.
Are you making a distinction, et cetera, et cetera...

I mean for crying out loud - cyborg says algorithms have nothing to do with numbers, and you and PixyMisa tell me they are things with some internal power to inspect themselves.
Of course they do, Robin! Look at the definition of algorithm and the mechanism of the Turing Machine.
 
Quantum coherence at body temperature in body cells was found by Herbert Fröhlich.
No it wasn't. He proposed this. He never found anything of the sort. Quantum coherence at body temperature is so short lived that it plays no role in any neurological process.

One of the great pioneers of solid-state physics, he described a model of a system of coupled molecular oscillators in a heat bath, supplied with energy at a constant rate. When this rate exceeds a certain threshold then a condensation of the whole system of oscillators takes place into one giant dipole mode, similar to Bose-Einstein condensation. A coherent, nonlocal order emerges.
The problem is that this does not happen.

I could say that we know that fairies exist because they gather together in Times Square every midsummer's day and build a tower to the Moon. I would be rightly judged to be crazy. Adding the word "quantum" makes it no less crazy.

Prior to that, quantum physicist Fritz Popp discovered that biological tissue emits a weak glow when stimulated at the right energy levels.
So do Lifesavers.

Cell walls of biological tissue contain countless proteins and fat molecules which are electrical dipoles. When a cell is at rest these dipoles are out of phase and arrange themselves in a haphazard way. But when they are stimulated they begin to oscillate or jiggle intensely and broadcast a tiny microwave signal. Fröhlich found that when the energy flowing through the cell reaches a certain critical level, all the cell wall molecular dipoles line up and come into phase. They oscillate in unison as though they are suddenly coordinated. This emergent quantum field mimics a Bose-Einstein condensate and has holistic properties common to any quantum field.
That's not an "emergent quantum field" of any sort. That's being cooked.

It has been suggested that these ion channel oscillations in neurons are quantum phenomena which generate a Fröhlich-like coherent electric field. There are ion channels (protein molecules) lining the membrane walls of individual neurons, which open or close in response to electrical fluctuations resulting from stimulation. They act like gates to let sodium, potassium and other ions through.
Not after you've cooked them they don't.

They are of a size to be subject to quantum fluctuations and superposition.
And of a temperature to make this entirely irrelevant.

Each channel as it oscillates generates a tiny electric field.
Everything does.

When a large number of ion channels open and close in unison, as they do when stimulated, the whole neuron fires or oscillates and a large scale electric field is generated across the neuron. Certain neurons act as pacemakers. When a pacemaker neuron oscillates in response to a stimulation, whole bundles of neurons oscillate with it.
Yes, because they are sending out electrical signals through the network. There is no quantum effect involved; there can be no quantum effect involved.

Neurobiologists have found that when a person sees an object all neurons in the cerebral cortex associated with the perceived object oscillate in unison regardless of their location in the brain.
If they are in the cerebral cortex, then they are in the cerebral cortex. To say that they oscillate in unison "regardless of their location in the brain" is an absurdity. They oscillate in unison because they are all in the same part of the brain and are wired together and sending signals to one another.

It has been suggested that the original ion channel oscillations are quantum phenomena which, as in the Fröhlich system above, generate a coherent quantum electric field.
They aren't and don't.

It is essentially a Bose-Einstein condensate.
Even if all of the above were true - and none of it is - it wouldn't be essentially anything of the sort.

Existence of such large scale coherent electrical fields across the brain may explain how a large number of disparate and distant neurons can integrate their information to produce a holistic picture.
Okay, there are a whole bunch of things wrong with this.

First, there is no such large-scale "coherent" electrical field across the brain. It does not exist. If it did exist, we could detect it. We don't.

Second, if it were too weak to be detected (somehow), it would be too weak to have any influence on distant - or even adjacent - neurons.

Third, if neurons were so sensitive as to be affected by this nonexistent field (which is physically impossible), it would be drowned out by everyday household electric fields, never mind devices like MRI scanners. You'd lose consciousness every time you switched on the lights.

Fourth, we know how neurons act together - they send electrical impulses to each other. We can and constantly do observe this happening.

Fifth, the brain does not act as a whole to produce a holistic picture. In fact, it doesn't produce a holistic picture at all. We can actually trace the process of visual perception as it proceeds from the optic nerve to the primary visual cortex to the prestriate cortex and onwards, with the information being abstracted and remapped at each pass. We can time how long each step takes, and we know what function each step has in the overall process.

The fairly recent proof that nonlocal (instantaneous or faster-than-light) quantum correlations exist between particles apparently separated in space and time has helped researchers to understand these effects.
No. "These effects" that you so blithely speak of are not only physically impossible, they are not observed to happen, and could not explain observed brain function if they did. In fact, far from shedding light on consciousness, they would contradict everything we know.

The distinguishing and interesting feature of a Bose-Einstein condensate is that many parts which go to make up the ordered system not only behave as a whole, but they become whole.
The distinguishing and interesting properties of Bose-Einstein condensates also include the fact that they are fluids and within a couple of degrees of absolute zero, neither characteristic notably representative of brains.
 
Touché.

I'll just add one thing - while the brain does not produce a holistic picture, it does produce an illusion of one. Upon close examination, though, that illusion quickly breaks down and reveals lots of disparate pieces to the puzzle, all working imperfectly together - in effects like change blindness and inattentional blindness, in various optical illusions like the McCollough effect; heck, we can stick an electrode in a monkey's brain and find the neuron that specifically fires when the monkey sees hairy legs.

What we don't see is a coherent field of any description.
 
I see.

So how, exactly, did you post the post that you just posted? I believe that an algorithm may very well have been involved.
A computer or two maybe? Or just algorithms?
Uh, no. No. No. In fact, no. Also, no. In fact, I might go so far as to say no. Some might even say no. Or to put it more succinctly, no.
So I am running an algorithm. The current step is to add 27 and 95. What does that tell me about the last step?
Are you making a distinction between an instantiation of an algorithm and a representation of an algorithm, or are you just making stuff up?
Instantiation? That would be evaluating it and doing the calculations on it step-by-step, would it? Or is there another way to instantiate an algorithm?
 
But the processor isn't the algorithm. The algorithm is the series of steps.
And I never said it was. The algorithm is the series of steps. The processor is whatever is processing the algorithm, a person or a computer.

The processor processes the algorithm step by step.

If one step is to add two numbers from memory, 27 and 85, what does that tell me about the last step, or about how those two numbers were calculated?

The steps are done one at a time. Which is all I said.

Going to my example above, would you say that if the computer produced consciousness on the first run, then it would on the second, where the steps are saved as self-contained programs with the register values and the before-values of the relevant data items already set?
 
So I am running an algorithm. The current step is to add 27 and 95. What does that tell me about the last step?
That either there was no last step, or that the last step was part of the algorithm. A third possibility I do not see.
 
No, but it can find out. Using... An algorithm!
The CPU can find out stuff?

By the way, what about my example earlier?

The program goes through the first time and produces consciousness, and as it does so saves each executed step along with the register values and the before-data for each value calculated.

These are then run in order.

Is consciousness produced this time?

Will you grant me at least that algorithms are not so smart that they can examine the context switching routines under which they are run?

If it produced consciousness the first time it must produce consciousness the second.
 
That either there was no last step, or that the last step was part of the algorithm. A third possibility I do not see.
But I am asking you what that step was. You are telling me that the steps are not in isolation and yet you are unable to tell me even what the last step is from the current one.

If you were desk-checking this algorithm and had just taken over, would you need to know the last steps in order to continue?
 
And I never said it was. The algorithm is the series of steps. The processor is whatever is processing the algorithm, a person or a computer.
Here is what you said:
An algorithm, carried out on any system, cannot do anything.
And a sieve of Eratosthenes is nevertheless going to produce primes. That's all it would ever produce: primes, because that's what the sieve does.
All that happens is that each step is evaluated one by one.
And this is correct too, but:
Whatever is processing each step does so in complete isolation from the last step.
Well, okay. Let's try this. If I really am running the sieve of Eratosthenes, which is a particular algorithm, then I follow the rules of that algorithm. I do, indeed, do the rules one at a time.

So, for example, should I be on the step where I'm crossing out 5's, if I'm pointing to 26, what I will do is move over 5 numbers. I'll find 31. So I take 31 and cross it out. I do that in, presumably, "complete isolation", right?

But there's one problem. I never do this. I never, ever, ever ever ever, ever... ever... when performing the sieve of Eratosthenes... find myself on the number 26, and find myself going five numbers to the right, and crossing out 31.

The reason I never find myself doing that is because when I start crossing out fives, I'm going to start pointing to 5. I don't even really "know" I'm going to cross out 5's, necessarily, until I get there. And when I do get there, not considering anything else, I'll nevertheless be on 5.

...and because every step I do when carrying out the sieve depends on the previous steps, I'll go to 10, find it crossed out, then 15, then 20, then 25... 25 isn't crossed out, so I'll cross it out. And so on.

Yes, I'm doing one thing at a time. But most certainly, the thing that I'm doing is going to depend on the previous step, because I'm running the sieve.

Am I really debating this?
If one step is to add two numbers from memory, 27 and 85, what does that tell me about the last step, or about how those two numbers were calculated?
If there was a previous step, it was either a part of the algorithm you're running or you messed up. (And if you messed up, you're not running the algorithm! I.e., you're not running each step of the algorithm because you're not running the algorithm)
The steps are done one at a time. Which is all I said.
I don't disagree that they are done one at a time.
Going to my example above,
I'm not talking about consciousness. I'm talking about how information flows through a system that is performing an algorithm.

A sieve of Eratosthenes, be it carried out (properly) by silicon, human, or trained monkey, will never do anything but give you prime numbers. It works by being dependent on the previous step. The sieve is the algorithm, and anything that follows the algorithm will never count fives from 26.
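For the record, here's the sieve written out in Python - a straightforward sketch, nothing exotic. Notice that every decision a step makes depends entirely on the crossings-out left behind by earlier steps:

def sieve(n):
    crossed_out = [False] * (n + 1)
    for p in range(2, n + 1):
        if crossed_out[p]:      # decided entirely by earlier steps
            continue
        # p survived every previous crossing-out, so it's prime;
        # now count p at a time: 2p, 3p, ... (10, 15, 20, 25... for p=5)
        for multiple in range(p + p, n + 1, p):
            crossed_out[multiple] = True
    return [p for p in range(2, n + 1) if not crossed_out[p]]

print(sieve(31))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

Carried out properly, by silicon or by hand, it crosses out fives at 10, 15, 20, 25 - and never counts fives from 26.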
 
A computer or two maybe? Or just algorithms?
So, an instantiation of an algorithm, then.

So I am running an algorithm. The current step is to add 27 and 95. What does that tell me about the last step?
It tells you absolutely nothing about the last step. Why should it?

I am running an algorithm. The current step is to check whether the last step was an addition or a subtraction.

What does that tell me about the last step?

Instantiation? That would be evaluating it and doing the calculations on it step-by-step, would it?
Yes.

Or is there another way to instantiate an algorithm?
There are in fact other ways, but all are equivalent - that's the Church-Turing thesis.
 
The CPU can find out stuff?
Indeed it can.

By the way, what about my example earlier?

The program goes through the first time and produces consciousness, and as it does so saves each executed step along with the register values and the before-data for each value calculated.
Yes.

These are then run in order.

Is consciousness produced this time?
Yes. The exact same consciousness.

You can tell this by producing the appropriate mapping to alter the instruction and data stream so as to interject a question. You will necessarily get the same answer from the first run and the second run.
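Here's a sketch of that record-and-replay point in Python (a toy computation invented for illustration): run once while saving each step's before-values and result, then replay each saved step as a self-contained operation and confirm the outputs are identical.

import random

# first run: execute, recording each step's "before" values and result
rng = random.Random(42)                  # fixed seed: a fixed starting state
log, value = [], 1
for _ in range(5):
    noise = rng.randint(0, 9)
    result = 2 * value + noise
    log.append((value, noise, result))   # the saved, self-contained step
    value = result

# second run: each step replayed with its inputs already set
for before, noise, recorded in log:
    assert 2 * before + noise == recorded

print("second run reproduced the first, step for step")

Deterministic steps plus identical before-values can only yield identical after-values; whatever the first run computed, the second computes too.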

Will you grant me at least that algorithms are not so smart that they can examine the context switching routines under which they are run?
See the Church-Turing thesis. Unless the abstractions leak (and admittedly abstractions always leak) all Turing-complete computational systems are equivalent.

So in practice, yes, an algorithm can examine the context-switching routine under which it runs. In principle, no.

If it produced consciousness the first time it must produce consciousness the second.
Yes. Mind you, you've changed the representation so that it is very much harder to interact with the consciousness of the second run, but with the appropriate mapping you will find that it is indeed present.
 
But I am asking you what that step was.
You are asking us? Why are you asking us? You specified the algorithm. If you want it to report its previous step, you need to specify the algorithm so that it does the asking.

You are telling me that the steps are not in isolation and yet you are unable to tell me even what the last step is from the current one.
No, that's trivial. We just have to change the algorithm so that the current step is to examine the previous step.

If you were desk-checking this algorithm and had just taken over, would you need to know the last steps in order to continue?
No, not unless the algorithm specified that. Which it is entirely valid for an algorithm to do, by the way.

But we'd need to know the results of the previous steps. Which is why it is nonsense to say that each step of an algorithm proceeds in isolation. Each step of an algorithm makes some change to the system in which the algorithm is running (or else, it's not an algorithm at all) and those changes can be used by subsequent steps.
 
Yes, I'm doing one thing at a time. But most certainly, the thing that I'm doing is going to depend on the previous step, because I'm running the sieve.

Am I really debating this?
It is a bit surreal. Is Robin arguing that algorithms can't actually do anything, or that they can't do what computers do, or that computers don't run algorithms, or what?

Robin?
 
Well, what is your definition? Pick one of the following

No. This is your question.

I don't think, say, a thermostat can do any of those because it doesn't have a mind. But you do. Care to explain?

I don't know what a "mind" is - some people seem to view it as some sort of disembodied entity existing separate from a brain. I don't think such things really exist.

A person understands they are playing a game.

I fail to see how this affects anything at all.

They can evaluate pieces, board position, and countermoves in a way that is impossible for computers to do.

This assertion is going to require backup - i.e. you are actually going to have to point to some specific thing a human can actually do that is non-computational with regards to playing chess.

With some sort of detail please - not just some "can I haz cheezeburger" appeal to irrelevant human biological functions.

Think. Um... "to form or have in the mind"? Do you think thermostats have a mind?

See above re: mind.

And computers are logical? I grant you they behave logically, but that does not entail they are logical.

The distinction, as far as I can see, has no justification. Of course you're working from sloppy anthropocentric premises so no surprise there.

Do they understand modus tollens or modus ponens?

Do you? And how do you justify that you "understand" exactly?

Can a computer come up with a logical argument why capital punishment is wrong?

Absolutely. Very easily.

Unless of course you're demanding that a computer rolling off the factory line should do so spontaneously. In which case I would demand to know if a baby can do the same.

Something logical would understand there are no non-green green things, correct? Do you think a computer understands that?

Do you think a baby does? (Referring to the above).

Nothing. I thought you said unicorns don't exist (in fact, you did say this). Are you now saying they possibly exist?

What I said seemed perfectly simple to understand: unicorns do not exist. They are a fantasy. However there is nothing particular about unicorns that would prevent them from existing.

This is not hard to understand: there was a time before television existed. Its lack of existence did not preclude the potential of its existence.

Hmm? It's your claim that computers think when their voltage potentials change.

I made no such claim.

I merely pointed out that if one is going to say humans aren't doing some sort of calculations but computers are, then I'm going to have to demand to know why you can see numbers in computers but not in the brain, when there's certainly nothing physical in either system that is a "number" or a "calculation", and indeed there are gross similarities in the operations of both systems at a low level.

This is the very crux of the issue.

Possibly, it could do so. But we'll never know. At least now you admit a chess computer has no knowledge it's even playing a game.

I cannot now admit something I never said.

Of course the chess computer has no knowledge that it's playing a game. I am not the one claiming that systems should magically be self-aware of their internal operation.

What I am claiming however is that there is no reason to think that a computer that is aware it is playing chess could not be devised. It would make absolutely no difference to the playing of chess however.


Yet you claim it can "think" (presumably about chess moves, even though it doesn't know what chess is). That's a pretty low threshold for thinking, isn't it? Are you trying to water down the definition so much that anything can "think"?

Since we're now quoting "think" it's clearly not precise enough.

You want a binary answer to a complex system. Not going to happen.

I submit that to "take something seriously", you have to be aware what the something is!

I see you failed to appreciate the fact that this was the musing of an external agent: i.e. without any understanding of the operation of the machine one might say, "this fellow takes his chess very seriously" - because nothing else matters. (Which is what someone obsessed with chess might look like).

Because the human is aware they're playing a game and the computer is not?

Irrelevant.

And I don't know about you, but when I play chess, I'm not thinking of some "chess playing algorithm".

You fail to grasp the point: execution of the algorithm IS NOT thinking about the algorithm.

So no, you are not thinking of it - you are simply doing it.

So when you think of a meal you enjoyed at a good restaurant, you think about complex polypeptides? Sucks to be your date! :)

No I do not. That is the point. These things occur without thought. Thought is not the primary point at which algorithms occur - you are placing it on a grand pedestal.

You cannot take subjective experience out of the equation.

I can and I will.

Either you accept I am conscious by non-subjective means or you do not.

If your basis for doing so is irrational then there is no point discussing this with you.

If it is rational then you need better reasons. I find them crass.

Trying to pin down is fine. Asserting that consciousness IS A PROPERTY OF THE BRAIN/ END OF STORY is where I have a problem.

Yes - I would argue that it does require the brain to be doing particular things at particular times.

In summary, with a lot of tedious cuts to your responses because I cannot be arsed:

Either get to the point of what it is you think makes humans conscious and explain why that thing is non-transferable to other entities, or accept that you're simply being irrational.
 
I see, so now algorithms have nothing to do with numbers?

At the least, there is some sort of mapping that can be done. However, you asserted that a computer must be inherently dealing with them while a brain must not. I do not see inherent numbers inside a computer. Please point them out to me.
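A quick Python sketch of the mapping point: here are the same four bytes read under three different conventions. None of these values is inherently in the hardware; each one is a mapping we impose.

import struct

raw = b"\x42\x28\x00\x00"                # four bytes, nothing more

print(struct.unpack(">I", raw)[0])       # as a 32-bit integer: 1109917696
print(struct.unpack(">f", raw)[0])       # as an IEEE 754 float: 42.0
print(raw[:2].decode("ascii"))           # as text: B(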
 
Just to briefly divert things back to the magic fairy field theory of quantum consciousness, last time this came around I summed up the problems with this theory thus:

The mind doesn't behave that way.
The brain doesn't work that way.
There is no possible transmitter for such a field.
There is no possible receiver for such a field.
There is no such field.
It's physically impossible.

And illustrated the magnitude of the error of suggesting that quantum coherence plays any role in consciousness thus:

An error of fifteen orders of magnitude is magic fairy territory.

We've looked at what an error of fifteen orders of magnitude means:

* That the Empire State Building weighs as much as the planet Mercury.
* That you can take Lake Michigan home in a bucket.
* That you can cover the US federal budget deficit with a penny and have money left over to buy IBM, Microsoft, Google, Exxon, GE, Wal-Mart, and Vermont.
* That the Milky Way Galaxy consists of one small brown dwarf star.
* That you could eat an omelette made from all the eggs ever laid by all the chickens that have lived since they were first domesticated 10,000 years ago - and still be hungry.
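A quick sanity check on the first of those comparisons, using ballpark public figures (nothing precise): the Empire State Building masses around 3.3 x 10^8 kg, the planet Mercury around 3.3 x 10^23 kg.

import math

empire_state_kg = 3.3e8    # roughly 365,000 tons
mercury_kg = 3.3e23        # mass of the planet Mercury

print(math.log10(mercury_kg / empire_state_kg))   # -> 15.0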

Sorry, SnidelyW, it's a bit of a non-starter really.
 
It's one thing to say that a simulation behaves like the real thing; it's quite another to claim that it IS the real thing -- which is exactly what PixyMisa has been doing.

Again, you may have missed that, but that's what I'M saying as well.

If a hypothetical computer/software acts in a way that it can learn and converse just like a human does, would you say that this "simulated intelligence" is not also real intelligence, since it has the very same effects/consequences?
 
Don't know what Westprog is referring to, but there is a real difference in approach.

Human players look at a few dozen possible moves; for each move they evaluate the resulting pattern by looking it up in an associative array. That is, they maintain a large but generalised database of possible positions.

Most (almost all) modern computer chess programs use a look-ahead search algorithm, evaluating all possible moves as far ahead as possible (given the time and processing power allotted), calculating the value of each potential position and pruning branches of the search tree that result in a clear disadvantage. Other approaches have been tried, but so much computing power is available so readily today that this largely brute-force method has won out.
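For the curious, here's the skeleton of that look-ahead search with pruning, sketched in Python over a toy hand-built game tree (the node format is invented for the example; a real chess program would generate positions instead):

def alphabeta(node, depth, alpha, beta, maximizing):
    children = node.get("children")
    if depth == 0 or not children:
        return node["value"]             # static evaluation of the position
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:
                break                    # prune: opponent won't allow this line
        return best
    best = float("inf")
    for child in children:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break                        # prune: we'd never choose this line
    return best

# two candidate moves for us, two replies each; leaf values are evaluations
tree = {"children": [
    {"children": [{"value": 3}, {"value": 5}]},
    {"children": [{"value": 2}, {"value": 9}]},
]}
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))   # -> 3

Note that the 9 in the second branch is never even evaluated: as soon as the reply worth 2 turns up, the whole branch is pruned. That pruning is what keeps the brute-force approach tractable.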

Not sure I see much of a difference. Granted, I'm not good at chess at all, but when playing any other game, I usually respond to a situation by evaluating the outcomes of a certain action (assigning it a value) before making it, and often thinking many moves ahead.

Now, the computer does it several moves ahead of me, and faster, but I'm not sure I see a difference in type.
 
