
My take on why the study of consciousness may indeed not be as simple as it seems:

As I said in my response to Pixy's backwards thing, the "desk check" is no longer the same information processing as the original forwards run. Because of exactly what you said -- you can't predict the inputs of an arbitrary calculation (although I wouldn't word it that way).
Well, my position is predicated on the assumption that it is actually possible - which is not necessarily the case.

Especially if we are just playing the states backwards. A whole chunk of the original processing -- determining the next transition from within the system -- gets replaced by whatever mechanism is doing the playback, which is necessarily outside the system.
I don't see how this can make any difference.

In particular, I defy Pixy to come up with a way to satisfy the notion of self-reference in the reverse playback scenario.
That's actually trivial.

There are states in the data that are produced by self-reference. If we are playing back the data in reverse (somehow), those states are still present. Therefore the self-reference is still working, therefore consciousness is active exactly as before. That sequence of state transitions doesn't suddenly stop meaning consciousness because the way we are generating them changes. That's the Church-Turing thesis.
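The mechanical sense of "playing the states back in reverse" can be sketched as follows. This is a toy illustration of my own, not anyone's actual proposal in the thread: the point it makes concrete is that the transition rule is only consulted during the forward run, while playback merely emits stored states.

```python
# Record the state sequence of a forward run, then "play it back" in reverse.
# The playback mechanism sits outside the system: it never consults the
# transition function, it only re-emits stored states.
def forward_run(state, steps, transition):
    history = [state]
    for _ in range(steps):
        state = transition(state)  # the system determines its own next state
        history.append(state)
    return history

history = forward_run(0, 4, lambda s: (s + 1) % 3)  # a trivial mod-3 counter
playback = list(reversed(history))  # no transition function involved here
```

Whether the reversed emission still counts as "the same information processing" is exactly the point under dispute; the sketch only shows where the transition mechanism drops out.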
 
Unfortunately, that's just the argument from personal incredulity.
Not exactly, because all I'm arguing is that it's counter-intuitive. Again, look at my claim!

Nevertheless, what I'm looking for is still not quite revealed by a claim that N is sufficient... there are a slew of other machines I have in mind that slice even deeper into what the significant factor is. For example, what if it doesn't repeat? If it just does (00)->1, (01)->1, (10)->1, (11)->0, and stops? I can still map to these calculations.

And what if it only does this? (00)->1. I can still map to that calculation (by choosing different "physical layer" mappings).

And, furthermore, please be explicit... if I just ran N without bothering to do the mappings, would I have implemented a consciousness in a random order?

The questions are designed to feel for the significant factors, and I don't think you've told me yet.
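For reference, the four transitions listed above are exactly the NAND truth table, so the hypothetical machine amounts to a single NAND evaluation per step. A minimal sketch:

```python
# The transitions (00)->1, (01)->1, (10)->1, (11)->0 are the NAND truth table.
def nand(a, b):
    return 0 if (a and b) else 1

# A machine that performs each of the four calculations once, then stops:
transitions = [((a, b), nand(a, b)) for a in (0, 1) for b in (0, 1)]
```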
 
That's not quite what I'm getting at. I'm pointing out that every bit of living tissue in our bodies actually does compute all the time.



Keep in mind, that when I talk of consciousness I'm not simply referring to the capacity to manipulate numbers. I'm talking about the capacity to -experience- information being processed as having some subjective quality. It's one thing to create a system in which an input is computed, it's quite another to have a system that experiences that input as a sensation or emotion. I'm of course referring to:

qua⋅le  [kwah-lee, -ley, kwey-lee]
–noun, plural -li⋅a  [-lee-uh]
Philosophy.
1. a quality, as bitterness, regarded as an independent object.
2. a sense-datum or feeling having a distinctive quality.

One cannot simply simulate sensation. A sensation is either produced, or it is not. What we do not know is what makes humans and other organisms sensible, let alone how to reproduce sensibility.

...snip...

No longer true; we can now do that with electronics.
 
AkuManiMani said:
One cannot simply simulate sensation. A sensation is either produced, or it is not. What we do not know is what makes humans and other organisms sensible, let alone how to reproduce sensibility. Simply reacting to a stimulus to produce an output is clearly not sufficient, as our own brains and bodies respond to stimuli all the time, with or without being conscious.
What if some of the reactions to a stimulus are "internal output"?

You say these words as if they put a clincher on the question, but they do not. Everyone agrees that the behaviors that we call consciousness are qualitatively different from other sorts of behaviors. But that does not mean that they cannot be the result of perfectly standard brain function.

As I already pointed out, only a particular kind of tissue [in humans at least] seems able to produce this capacity, and then only within a particular range of states. I'm of course referring to neural tissue. Like every other tissue line, they form an intercellular network of communication that processes information and coordinates biological activity. Whatever it is about -this- group of cells that allows them to produce conscious experience, it isn't simply a matter of processing information.
Why, why, and why?

It stands to reason that it must be the physical context in which the information is being processed that translates it into what we call qualia. This means that, like electricity or water, consciousness has essential physical properties that cannot be reproduced via simulation.
Why?

~~ Paul
 
That's not quite what I'm getting at. I'm pointing out that every bit of living tissue in our bodies actually does compute all the time.
Near enough, yes. So?

Keep in mind, that when I talk of consciousness I'm not simply referring to the capacity to manipulate numbers. I'm talking about the capacity to -experience- information being processed as having some subjective quality.
You mean self-referential information processing.

Yes.

It's one thing to create a system in which an input is computed, it's quite another to have a system that experiences that input as a sensation or emotion.
Yes. Self-reference.

I'm of course referring to:

qua⋅le  [kwah-lee, -ley, kwey-lee]
–noun, plural -li⋅a  [-lee-uh]
Philosophy.
1. a quality, as bitterness, regarded as an independent object.
2. a sense-datum or feeling having a distinctive quality.
Sorry, qualia are baloney.

One cannot simply simulate sensation.
This too is baloney. Of course you can simulate sensation.

A sensation is either produced, or it is not.
Which has no bearing on whether you can simulate it or not.

What we do not know is what makes humans and other organisms sensible, let alone how to reproduce sensibility.
Of course we do. At the very worst, we just simulate it at the quantum level, which we can do perfectly well, albeit slowly.

Simply reacting to a stimulus to produce an output is clearly not sufficient, as our own brains and bodies respond to stimuli all the time, with or without being conscious.
Well, yeah. As I pointed out some years ago, and Hofstadter pointed out some decades ago, that's the difference between reference and self-reference.

As I already pointed out, only a particular kind of tissue [in humans at least] seems able to produce this capacity, and then only within a particular range of states.
Rubbish. Computers do this all the time.

I'm of course referring to neural tissue.
Which we can and do simulate at all sorts of levels.

Like every other tissue line, they form an intercellular network of communication that processes information and coordinates biological activity. Whatever it is about -this- group of cells that allows them to produce conscious experience, it isn't simply a matter of processing information.
It's self-referential information processing. You seem to have an infinite capacity for ignoring that point.

It stands to reason that it must be the physical context in which the information is being processed that translates it into what we call qualia.
No, that doesn't follow at all.

This means that, like electricity or water, consciousness has essential physical properties that cannot be reproduced via simulation.
For electricity and water that's a category error, and for consciousness it's simply untrue.

I award you no points, et cetera, et cetera.
 
Not exactly, because all I'm arguing is that it's counter-intuitive.
Many things are counter-intuitive. I don't see anything in what you've said, though, that contradicts my position.

If it is possible (which may or may not be the case, but let's assume it is) to run the algorithm backwards, we can interject information to query the state of the consciousness under simulation and we will get an appropriate response.

Of course, to actually do that we have to know what the response is and how to calculate backwards from there to the state just prior to the query.

It's not the generation of consciousness that's counter-intuitive, it's going backwards in time. You have to start with the answer and work out the question. Everything about that is going to be counter-intuitive even without the complexities of consciousness thrown into the mix.
 
I just thought it was ironic that someone who mindlessly asserts "Wrong" every other post while constantly making baseless assertions would criticize others for the very same thing.
As I've said before, if you don't like people pointing out that you're wrong, best not to be wrong so often.
 
The "behavior" I'm referring to isn't just the capacity to process and react to stimuli, but to experience stimuli as some quality/sensation.
Self-referential information processing.

The only known instances of consciousness are limited to systems of a particular composition.
False.

We have no means of directly observing consciousness in systems other than our own bodies.
Or, indeed, in our own bodies.

Therefore it's crucial to understand what physical properties of our physiology produce consciousness in us so that we can use that knowledge to identify it in other systems.
Self-reference.
 
Many things are counter-intuitive. I don't see anything in what you've said, though, that contradicts my position.
You're missing the entire point of the exercise, especially including what the exercise is. This isn't being offered as a counter to your position of backwards consciousness.
If it is possible (which may or may not be the case, but let's assume it is)
There's no "if it's possible" about it. It's possible to perform the mapping. I showed exactly how to do it. Furthermore:
to run the algorithm backwards,
N isn't running the algorithm backwards. N is concerned with the constraint that it physically performs every calculation that an equivalent machine to A performs. And it meets this requirement.
we can interject information to query the state of the consciousness under simulation and we will get an appropriate response.
N doesn't allow you to interject anything new--you can't use N to discover something that A' didn't do. N's only good for a post hoc desk check.

Nevertheless, it still physically performs every calculation that A' did--in the sense that you can point to a calculation that A' does, and point to it specifically being performed by N.

The purpose of N is for you to state more precisely what you're requiring, which you're hinting at, but not actually saying. But keep in mind that it's N producing consciousness that I'm claiming is counter-intuitive, not the backwards machine.
Of course, to actually do that we have to know what the response is and how to calculate backwards from there to the state just prior to the query.
Yes. But I'm not actually disagreeing about the backwards machine--that's not even what I'm trying to get at. In fact, I think I did just the opposite in my reply to rocketdodger.
It's not the generation of consciousness that's counter-intuitive, it's going backwards in time.
That might be counter-intuitive to someone, but it has nothing to do with what I was saying was counter-intuitive.

I think you need to read my posts a bit more carefully.

It seems like you're siding with the interrelationships between the calculations being important, as rocketdodger did. And that fits my intuitions. But I gave another possibility as well.

The point of the exercise though is for you to state your position, not for me to counter it. I'd be interested in your reply, but I think you need to reread the scenarios. They aren't saying what you think they are.
 
What we do not know is what makes humans and other organisms sensible, let alone how to reproduce sensibility.

No longer true; we can now do that with electronics.

Mmm... Stimulating living brain tissue isn't what I had in mind. I'm talking about producing a system from scratch that can experience the same thing :-X
 
AkuManiMani said:
One cannot simply simulate sensation. A sensation is either produced, or it is not. What we do not know is what makes humans and other organisms sensible, let alone how to reproduce sensibility.

What if some of the reactions to a stimulus are "internal output"?

You say these words as if they put a clincher on the question, but they do not. Everyone agrees that the behaviors that we call consciousness are qualitatively different from other sorts of behaviors. But that does not mean that they cannot be the result of perfectly standard brain function.

I think we may have a bit of a misunderstanding here. The point I was emphasizing is that consciousness is qualitatively different from other behaviors, which we happen to agree on anyway. I wasn't denying that it's a standard brain function; I was pointing out that we do not know exactly what it is about the brain that generates it.


AkuManiMani said:
As I already pointed out, only a particular kind of tissue [in humans at least] seems able to produce this capacity, and then only within a particular range of states. I'm of course referring to neural tissue. Like every other tissue line, they form an intercellular network of communication that processes information and coordinates biological activity. Whatever it is about -this- group of cells that allows them to produce conscious experience, it isn't simply a matter of processing information.

Why, why, and why?

AkuManiMani said:
It stands to reason that it must be the physical context in which the information is being processed that translates it into what we call qualia. This means that, like electricity or water, consciousness has essential physical properties that cannot be reproduced via simulation.

Why?

~~ Paul

Are you asking "why" to the premises or the conclusion?
 
AkuManiMani said:
Keep in mind, that when I talk of consciousness I'm not simply referring to the capacity to manipulate numbers. I'm talking about the capacity to -experience- information being processed as having some subjective quality.

You mean self-referential information processing.

Yes.

Yet our entire physiology performs SRIPs continually, even when we are unequivocally unconscious. By your definition, homeostasis is consciousness. Clearly, we are referring to two different things.


Yes. Self-reference.

Wrong.


qua⋅le  [kwah-lee, -ley, kwey-lee]
–noun, plural -li⋅a  [-lee-uh]
Philosophy.
1. a quality, as bitterness, regarded as an independent object.
2. a sense-datum or feeling having a distinctive quality.

Sorry, qualia are baloney.

So you -don't- experience sensations with distinctive qualities? It never ceases to amaze me how you can so confidently make the most asinine assertions without batting an eye...


This too is baloney. Of course you can simulate sensation.

According to you, sensations are "baloney". So how is their ability to be simulated a given? :rolleyes:


What we do not know is what makes humans and other organisms sensible, let alone how to reproduce sensibility.

Of course we do. At the very worst, we just simulate it at the quantum level, which we can do perfectly well, albeit slowly.

Excuse my English, but what the hell are you talking about?


Well, yeah. As I pointed out some years ago, and Hofstadter pointed out some decades ago, that's the difference between reference and self-reference.

SRIP is crucial to all biological functions and our entire physiology continuously maintains them whether we're conscious or not. Constantly repeating that consciousness = SRIP does not make it any more true, and appealing to Hofstadter as if his words are holy writ does not make your position any more sound. Just drop it, dude.


For electricity and water that's a category error, and for consciousness is simply untrue.

I award you no points, et cetera, et cetera.

Oh, noes! I've lost the game :rolleyes:
 
You're missing the entire point of the exercise, especially including what the exercise is. This isn't being offered as a counter to your position of backwards consciousness.
There's no "if it's possible" about it. It's possible to perform the mapping. I showed exactly how to do it.
No. Your proposed method doesn't work at all.

N isn't running the algorithm backwards. N is concerned with the constraint that it physically performs every calculation that an equivalent machine to A performs. And it meets this requirement.
One problem is that your N is not physically possible to carry out for any even moderately complicated logic network. Let's take a Z80. Ignoring external memory and peripherals and only considering the programmer-visible registers, it has 2^208 possible states.
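The state count comes from tallying the Z80's register bits; this is a rough, assumed breakdown, since exactly which registers count as "programmer-visible" is a judgment call:

```python
# Rough tally of the Z80's programmer-visible register bits (assumed breakdown):
register_bits = {
    "AF BC DE HL": 64,       # main 8-bit register pairs
    "AF' BC' DE' HL'": 64,   # alternate register set
    "IX IY": 32,             # index registers
    "SP PC": 32,             # stack pointer, program counter
    "I R": 16,               # interrupt vector, memory refresh
}
total_bits = sum(register_bits.values())  # 208
state_count = 2 ** total_bits             # 2^208 possible register states
```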

That doesn't in itself negate the argument, but when dealing with impossibilities you have to be very careful in asserting that something is "counter-intuitive". It would be counter-intuitive for the result not to be counter-intuitive.

N doesn't allow you to interject anything new--you can't use N to discover something that A' didn't do. N's only good for a post hoc desk check.
N isn't any good for that either. N doesn't actually tell you anything about anything.

Nevertheless, it still physically performs every calculation that A' did--in the sense that you can point to a calculation that A' does, and point to it specifically being performed by N.
It's a one-way mapping. N is useless for, well, anything.

The purpose of N is for you to state more precisely what you're requiring, which you're hinting at, but not actually saying. But keep in mind that it's N producing consciousness that I'm claiming is counter-intuitive, not the backwards machine.
N produces consciousness. N produces unconsciousness. N produces everything that can be represented in the system that N is mapping. The problem is that because N produces everything, you can't find anything in there without a map, and that map already contains what you are looking for.

The results of N cannot be of any value under any situation.

The point of the exercise though is for you to state your position, not for me to counter it.
The problem is, your exercise is not meaningful.

If a given set of states represents a given interval of consciousness, a superset of those states contains that representation, but unless you have a way of getting just that set out again, the superset doesn't necessarily represent anything meaningful at all.

This is not restricted to consciousness, of course; it applies to information in general. Your argument doesn't work under any situation.
 
Yet our entire physiology performs SRIPs continually, even when we are unequivocally unconscious.
What part of our physiology are you talking about?

If you mean the brain, then yes, the brain performs self-referential information processing when you are asleep. That means that you are conscious when you are asleep.

If that's your objection, then it's simply equivocation. Conscious vs. unconscious in the sense of aware vs. unaware is simply not the same as awake vs. asleep.

By your definition, homeostasis is consciousness.
No, not at all. It's just a feedback loop. The feedback loop doesn't reference itself.

One thermostat is not conscious. Two thermostats, though, could be.
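A purely illustrative toy of the distinction being drawn here (my own sketch, not a model anyone in the thread proposed): a lone thermostat references only the environment, while a pair wired to read each other's output gives the system a state that enters its own update rule.

```python
# Plain feedback: output depends only on the external variable.
def thermostat(temp, setpoint=20.0):
    return temp < setpoint  # True => heater on

# Coupled pair: each unit's next output also depends on the *other*
# unit's previous output, so the system's own state figures in its update.
def coupled_step(temp, out_a, out_b, setpoint=20.0):
    new_a = (temp < setpoint) and not out_b  # A yields if B is already heating
    new_b = (temp < setpoint) and not out_a
    return new_a, new_b
```

Whether this minimal coupling is anywhere near sufficient for "self-reference" in the intended sense is, of course, exactly what the thread is arguing about.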

Clearly, we are referring to two different things.
No, you're just failing to grasp the concept.

Then you will be able to provide me with some example of something to do with experience that doesn't immediately reduce to self-referential information processing?

I'll wait.

So you -don't- experience sensations with distinctive qualities?
No. I experience sensations with distinctive quantities. Only this, and nothing more.

It never ceases to amaze me how you can so confidently make the most asinine assertions without batting an eye...
And qualia are still baloney.

According to you, sensations are "baloney".
No, qualia are baloney. Sensations are quantitative.

So how is their ability to be simulated a given?
Simple: They are quantitative physical processes, of the sort that we simulate all the time, of the sort that we have mathematically established we can always simulate.

Excuse my English, but what the hell are you talking about?
We don't need to know how the brain works to simulate it. Indeed, that would be rather pointless, because we wouldn't learn anything from the simulation.

It is a mathematical and physical fact that the brain can be simulated, regardless of how it works, and that a conscious mind will result from the simulation.

SRIP is crucial to all biological functions and our entire physiology continuously maintains them whether we're conscious or not.
No and no.

First, no, you have completely failed to grasp what self-reference means. (How this is possible given the number of times it has been explained I have no idea.)

You are talking about simple reference - which does happen throughout biological systems, even at the cellular level.

Second, no, we aren't unconscious when we are asleep or even under general anaesthesia. We still have conscious processes going on in our brains until we're brain-dead - until the cells have actually started to die en masse.

There isn't just one consciousness going on inside your brain, but many. You only have access to one of them because you are that one. Split-brain patients have access to two, because the communication that would normally synchronise the separate processes is broken. (Though the access to each is limited to a certain subset of functions, and the disconnect isn't complete.) But there are more even than that.

Constantly repeating that consciousness = SRIP does not make it any more true, and appealing to Hofstadter as if his words are holy writ does not make your position any more sound. Just drop it, dude.
If you could just provide me with one counterexample, that would have a lot more weight than all your comprehensive failures of understanding to date.

As I said, I'll wait.

Oh, noes! I've lost the game
If you were raising the simulated oranges thing - water and electricity in your case - as if it were a coherent objection, you're not even playing the game. You're not even on the same continent that the game is being played on.
 
No. Your proposed method doesn't work at all.


One problem is that your N is not physically possible to carry out for any even moderately complicated logic network. Let's take a Z80. Ignoring external memory and peripherals and only considering the programmer-visible registers, it has 2^208 possible states.
Sure, but the fact that it has 2^208 possible states is irrelevant. Let me go through this again.

You start with A, which is your machine. For example, you have a Z80. Next, you have A run a simulation for 15 simulated seconds--that's the part you're missing here. I don't care how many possible states the Z80 has--it has nothing to do with it. I care about what states the Z80 went through during those 15 simulated seconds.
That doesn't in itself negate the argument, but when dealing with impossibilities you have to be very careful in asserting that something is "counter-intuitive". It would be counter-intuitive for the result not to be counter-intuitive.
I have no clue what you're talking about here. What does "when dealing with impossibilities" refer to?
N isn't any good for that either. N doesn't actually tell you anything about anything.
Not true. If a result in N didn't match the corresponding result in A', the desk check fails. That tells you one of the machines didn't compute something correctly. In the desk check, you even have a map of exactly when and what thing was supposed to be computed, so you could zoom in not only on the NAND gate, but on the approximate time of failure.
N produces consciousness. N produces unconsciousness. N produces everything that can be represented in the system that N is mapping. The problem is that because N produces everything, you can't find anything in there without a map, and that map already contains what you are looking for.
But that's a matter of what you can find. What this doesn't explicitly say is whether you believe it's possible that the last 15 seconds of your conscious experience were the result of so many iterations of N. And if so, why. And if not, why not.
The results of N cannot be of any value under any situation.
Why not? What does N lack that makes it have no value under any situation? And is it having "value under any situation" significant?

N strictly speaking does indeed perform every calculation, physically, that A' performs. And A' ran for 15 simulated seconds, just as A ran for its 15 simulated seconds.
The problem is, your exercise is not meaningful.
...and yet you can talk to it:
If a given set of states represents a given interval of consciousness, a superset of those states contains that representation, but unless you have a way of getting just that set out again, the superset doesn't necessarily represent anything meaningful at all.
In the scenario I described, there's a way of getting just that set out again. In a second implied scenario, where you just run N, there's no way of getting just that set out again. The two scenarios are indeed different.

But the point isn't to prove anything. The point is to get you to say something.
This is not restricted to consciousness, of course; it applies to information in general. Your argument doesn't work under any situation.
What argument are you talking about?

Step back and take a big picture look at this. N is different from B, the backwards machine. N with the mapping is different than N without the mapping. N is different than A'. There are distinct things in this scenario all over the place, and they're all distinct for different reasons, in different ways.

That is what this is--it's a way for you to point a finger at something. And I've stated all throughout that this is its purpose. The fact that you're treating this as an argument means that you're still missing the point of the exercise.

I'm not trying to prove your position wrong. I'm asking you to clarify your argument by putting your finger somewhere.

You're somewhat doing this implicitly just by going through this, but that's not exactly what I'm looking for. I'd like clear statements. "This is conscious because x. This isn't because y." Or, "this might be conscious, I don't know, because I don't quite have a handle on it yet". Doesn't matter. Just pick something and say it explicitly.
 
Sure, but the fact that it has 2^208 possible states is irrelevant. Let me go through this again.

You start with A, which is your machine. For example, you have a Z80. Next, you have A run a simulation for 15 simulated seconds--that's the part you're missing here. I don't care how many possible states the Z80 has--it has nothing to do with it. I care about what states the Z80 went through during those 15 simulated seconds.
Okay, fine. Then please clarify: where does this N come from, and what actually is it? If it's not a simple mapping out of possible states, what is it?
 
Or even forget about N, whatever it is, and ask me the question you want to ask, because at this point I have no idea what you're getting at. :confused:
 
Not exactly, because all I'm arguing is that it's counter-intuitive. Again, look at my claim!

Nevertheless, what I'm looking for is still not quite revealed by a claim that N is sufficient... there are a slew of other machines I have in mind that slice even deeper into what the significant factor is. For example, what if it doesn't repeat? If it just does (00)->1, (01)->1, (10)->1, (11)->0, and stops? I can still map to these calculations.

And what if it only does this? (00)->1. I can still map to that calculation (by choosing different "physical layer" mappings).
Just from this, if you do that, the consciousness is in the mapping. You've just moved the calculations about.

I still don't understand what you're trying to get at.
 
Okay, fine. Then please clarify where does this N come from, and what actually is it? If it's not a simple mapping out of possible states, what is it?
It's in a previous post, but I'll go over it again.

A is run for 15 simulated seconds. I can build an equivalent machine, A', out of NAND gates, and run an equivalent program for 15 simulated seconds.

During the run of A', I can record the inputs to each NAND gate, which I have numbered, at each step in the process. I create four ordered partitions of each of these sets--one for each of the four possible inputs of the NAND gate: (00), (01), (10), and (11) (I go in numbered order first, then in time order).

Next, I'm going to verify that the outputs matched by desk checking each NAND gate's computation. I check in this order--I run a check of the first calculation in partition (00), then the first in (01), then the first in (10), then the first in (11), and then proceed to the next in (00), next in (01), and so forth.

So to perform this check, I have to calculate (00), (01), (10), (11), and repeat. If I reach the end of a partition I could fill it with dummy calculations, and proceed in this order.

This particular check I can delegate to a four state machine that calculates (00), then (01), then (10), then (11), then repeats. That's what N is.

N strictly follows the rule that it calculates everything that A' did, and it takes a lot of leeway in doing so. N is built this way specifically to be able to say "no that's not exactly what I mean", or, "yes, that works". That's all.

And I not only fully admit that N isn't analogous to a bunch of other kinds of things, I rely on it--its whole purpose is to set up something that's not analogous, to help refine what sorts of things really do count. For example, no calculation in N actually depends on anything else that N does, so there are no interdependencies. (Well, in terms of the machine A' that it's "calculating" the desk checks for.)
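The construction above can be sketched in code. This is a loose illustration under the stated assumptions; the trace below is a made-up stand-in for an actual recording of A''s run:

```python
def nand(a, b):
    return 0 if (a and b) else 1

# Step 1: a recording of A''s run -- one (gate_id, step, inputs, output)
# entry per NAND evaluation, in firing order. Invented here for illustration;
# a real trace would come from instrumenting A'.
trace = [
    (0, 0, (0, 0), 1),
    (1, 0, (1, 1), 0),
    (0, 1, (0, 1), 1),
    (1, 1, (1, 0), 1),
]

# Step 2: partition the recorded outputs by input pattern,
# preserving gate-number order then time order.
partitions = {(0, 0): [], (0, 1): [], (1, 0): [], (1, 1): []}
for gate_id, step, inputs, output in trace:
    partitions[inputs].append(output)

# Step 3: N is the four-state machine that cycles (00), (01), (10), (11),
# desk-checking one recorded calculation per partition per cycle and
# treating exhausted partitions as dummy calculations.
order = [(0, 0), (0, 1), (1, 0), (1, 1)]
failures = 0
for i in range(max(len(p) for p in partitions.values())):
    for inputs in order:
        recomputed = nand(*inputs)
        if i < len(partitions[inputs]):  # otherwise: dummy calculation
            if recomputed != partitions[inputs][i]:
                failures += 1
```

Note the property claimed above is visible here: each `nand(*inputs)` call depends only on the four-state cycle, never on any other calculation N performs.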
 
Just from this, if you do that, the consciousness is in the mapping. You've just moved the caculations about.
That works! That's exactly what I'm looking for.

But this leads to another question. I have this mapping, but the mapping is just sitting there. Now do I have to run N to implement consciousness? Or is there a different thing being argued?

(ETA: Oh, and also, does the mapping have to be physically represented? Is it enough that it "exists" in an abstract, mathematical sense of the word? If I have no map and run an N, I don't know how to get useful results out, but would you still say that there's a scrambled consciousness in the calculations?)
I still don't understand what you're trying to get at.
Again, the goal isn't to debate--it's an attempt to, let's say, do a "breadth first" slice through a lot of ambiguity. It's just a groundwork to get a more precise statement out is all.
 
