On Consciousness

Is consciousness physical or metaphysical?


That is the assessment of a doctoral researcher in the field of biology. If it has not been fully examined, who the hell are you to say it has? Not sure myself either way.

Decide for yourself; here is his paper: http://www3.surrey.ac.uk/qe/pdfs/cemi_theory_paper.pdf

Oh yeah, did you not read my special note about coming back to this topic after a while? Guess you could not wait.
TFP said:
Introduction
The binding problem of consciousness
Well, I've read enough.

Everything written after invoking "the binding problem" is almost certainly nothing more than a screen of buzzwords trying to sneak some BS in under the radar. That this is his opening line - yeesh.
 
Sure, an abstraction of a rainstorm into numbers and an abstraction of consciousness into numbers can map to each other. There is no theoretical reason they cannot.

Oh my god, please not this again. We have been over this a hundred times in other threads.

!Kaggen, please try to listen to why this is wrong, and make a genuine effort to understand.

You can map a state of one system to a state of any other system. This is true.

Rainstorm(t1) --> map1 --> Brain(t1), where t1 means timeslice 1.

But that doesn't imply this:

Rainstorm(t2) --> map1 --> Brain(t2).

In fact, it is pretty much impossible to find such a mapping. If you did, then the rainstorm would be ... wait for it ... conscious just like you.

Instead, what happens is this:

Rainstorm(t2) --> map2 --> Brain(t2)

map1 and map2 are different. THEY ARE DIFFERENT. So the fact that you can find maps between states is irrelevant. Lanier is wrong, plain and simple.

If there was a computer program that did this:

For all states x, Rainstorm(x) --> Brain(x)

... which is what Lanier is suggesting, it would be using a different "map" for each state. And it isn't a "map" if the map is different every time.
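A minimal Python sketch of the point, with made-up state labels standing in for real rainstorm and brain states:

rain = ["R_t1", "R_t2"]   # rainstorm states at timeslices 1 and 2 (invented labels)
brain = ["B_t1", "B_t2"]  # brain states at the same timeslices

map1 = {"R_t1": "B_t1"}   # works at t1...
map2 = {"R_t2": "B_t2"}   # ...but t2 needs a different dictionary

# A genuine map would be ONE fixed function covering every timeslice.
# Rebuilding the lookup table at each tick just restates the answer;
# it does not map one system's behavior onto the other's.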

We had a huge discussion on isomorphism in an earlier thread and this is exactly what that was about.
 
Not sure how your comprehension works, but he is doing the exact opposite of what you say. He is not ignoring the map; he is taking the map very seriously. He is showing how, if consciousness were a map, an abstraction, then it could be mapped from any medium, including a rainstorm: a medium that shows no meaningful conscious behavior. The point being that a map is meaningless, an abstraction, until a human puts it into a meaningful context. Consciousness-as-map makes this much more obvious.

No that isn't what Lanier is saying.

He is saying that if consciousness is a class of behaviors that maps to the behavior of the human brain, then a rainstorm can be conscious. And we are supposed to think that is absurd, thus consciousness must be more than merely a class of behaviors that can be mapped to the behavior of the human brain.

That's why he is wrong -- you can map a rainstorm to a human brain. Like pixy says -- so what. Anyone can map any state of anything to any state of anything else.

What you cannot do is map the behavior of a rainstorm to the behavior of the human brain. That's because "behavior" isn't a single state, it is a series of states. You cannot map every state of something to every state of something else. You can get pretty close, in which case those two things have isomorphic behavior, which is essentially identical behavior as far as physics and reality are concerned.
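To make "mapping behavior" concrete, here is a minimal Python sketch, assuming toy deterministic dynamics on four integer states (the step functions are invented stand-ins, not models of anything real):

from itertools import permutations

def step_rain(s):
    return (s + 1) % 4   # "rainstorm": advance by 1 each tick

def step_brain(s):
    return (s + 2) % 4   # "brain": advance by 2 each tick

def commutes(f, step_a, step_b, states):
    # f maps behavior (a series of states) only if it commutes with
    # the dynamics: f(step_a(s)) == step_b(f(s)) for every state s.
    return all(f[step_a(s)] == step_b(f[s]) for s in states)

states = range(4)
# Pairing off individual states is trivial, but no bijection of these
# four states commutes with both dynamics, so the behaviors are not
# isomorphic even though every single state can be mapped:
print(any(commutes(dict(zip(states, p)), step_rain, step_brain, states)
          for p in permutations(states)))   # prints False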
 
In particular, this is very wrong:

Jaron Lanier said:
Enough! I hope the reader can see that my game can be played ad infinitum. I can always make up a new kind of sensor from the supply store that will give me data from some part of the physical universe that is related to itself in the same way that your neurons are related to each other by a given AI proponent

Suppose there are two systems, each consisting of two balls. System 1 is ball 1a and 1b, system 2 is ball 2a and 2b.

Lanier claims that no matter what balls 1a and 1b do, we can find a sensor that maps that behavior to balls 2a and 2b.

Here is a trivial example of where that breaks down: if balls 1a and 1b influence each other, say by getting within some threshold of proximity, but balls 2a and 2b don't influence each other, it is literally impossible to find a mathematical isomorphism with the behavior of balls 2a and 2b.

Or another example: If balls 1a and 1b remain stationary, while balls 2a and 2b are moving in any direction other than parallel to each other -- again, no possible isomorphism.

No such sensors exist.
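The stationary-vs-moving case can be checked directly. A numeric sketch in Python, with invented one-dimensional toy positions:

t_values = range(5)
sys1 = [(0.0, 1.0) for _ in t_values]             # balls 1a, 1b: stationary
sys2 = [(float(t), -float(t)) for t in t_values]  # balls 2a, 2b: moving apart

# A single time-independent map f with f(sys1 at t) == sys2 at t for
# all t would have to send the SAME input state (0.0, 1.0) to five
# different output states, which no function can do:
inputs = set(sys1)
outputs = set(sys2)
print(len(inputs), len(outputs))   # 1 distinct input, 5 distinct outputs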

What Lanier is probably trying to say, since he can't be so stupid as to actually think the above examples are wrong, is that his "sensors" might need to be incredibly complex computers themselves. Something like that could map the behavior of two stationary balls to two colliding balls, or whatever we could imagine.

But that makes his argument irrelevant. Concluding that we could map the behavior of a rainstorm to the behavior of your brain, using a set of computerized sensors, each of which is orders of magnitude more complex than your entire brain, and which would furthermore need to communicate with each other in order to successfully perform the mapping (or, you could view it as one gigantic sensor), smacks of stupidity worthy of some world record.
 
I think his argument is more or less what you posted after "What Lanier is probably trying to say": that given any system's behavior, there is a computer program that maps it to any other system's behavior. In the case of the rainstorm and the conscious brain, it's going to be one helluvan awful looking program, perhaps treating various subspaces of the rainstorm as '1' if they have a raindrop in them and '0' if they don't except for about a gazillion exceptions which change from instant to instant to get the damn thing to spit out the same bits that the brain sensor is, but it could be done (though thank turing only ever as a thought experiment).

It's kind of a wonky example, but I guess his point is that given the right software for the job, everything maps to consciousness. (So maybe the computational consciousness proponent needs to specify conditions that would disqualify ad hoc, kludgy mappings from rainstorms but not from conscious brains?) I'm not sure of that, though; it seems to me all his argument shows is that it's possible to write a program to translate any system's behavior into the data produced by a conscious system. Well, so what? That only shows that the data produced by a conscious system (potentially understood as instruction code in the context of its original, embodied system? is that the hang-up?) is a certain subset of systemic data. Other systems may accidentally, or with great difficulty, be induced to produce it, but then the data means something entirely different in the context of those systems: "1001" will mean one thing in the context of a specific brain, another in the context of a rainstorm, another in the context of a ham & cheese omelette, etc. Unless I'm overlooking something (always possible), I'm kind of at a loss to see the relevance of his counterexample, entertaining as it may be.
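Here is a Python sketch of what that "awful looking program" might amount to; all of the data and names are invented for illustration:

brain_output = {0: "1001", 1: "0110", 2: "1111"}   # target bits per tick (made up)

def rain_bits(rain_grid):
    # Naive rule: '1' for each subspace that holds a raindrop, else '0'.
    return "".join("1" if cell else "0" for cell in rain_grid)

# The "gazillion exceptions": a per-tick patch table that overwrites
# whatever the naive rule produced with the desired answer.
exceptions = dict(brain_output)

def kludge_map(t, rain_grid):
    return exceptions.get(t, rain_bits(rain_grid))

# The mapping "works", but only because the answers are baked into it;
# the rainstorm contributes nothing, which is the whole objection.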
 
Yes, his argument is flawlessly self-defeating.

If you can write a computer program that maps consciousness onto a rainstorm (and let's posit that this is so), and rainstorms clearly don't possess consciousness, then of necessity the conscious behaviour is in the computer program.

We are indeed gadgets.
 
Of course, it is possible to make a dynamic map between a rainstorm and a conscious mind, but that makes the rainstorm just a red herring. It would be easier to make a dynamic map between a wicker chair and a conscious mind. The fact that the wicker chair isn't conscious is meaningless. All the interesting stuff happens in the map.
 
Have you encountered any conscious computers?
No, but I have encountered chess-playing computers, which was the point. I did specify "while not intelligent" for the chess computers, but you read what you wanted to read.
 
No, but I have encountered chess-playing computers, which was the point. I did specify "while not intelligent" for the chess computers, but you read what you wanted to read.

Not really; the discussion is about consciousness, not chess.
You used the analogy that a chess-playing computer exists whilst a chess-playing rainstorm doesn't.
I pointed out that a conscious computer does not exist either, so your point was irrelevant.
 
Not really; the discussion is about consciousness, not chess.
You used the analogy that a chess-playing computer exists whilst a chess-playing rainstorm doesn't.
I pointed out that a conscious computer does not exist either, so your point was irrelevant.

The point is very relevant, but not in the way you read it.

Lanier says basically: rainstorms aren't conscious, and if we can map rainstorms onto computer programs, we conclude that computers can't be conscious either.

steenkh says: rainstorms can't play chess, and if we can map rainstorms onto computer programs, we conclude computers can't play chess either.

Except computers can play chess. Therefore, the assumption that we can map rainstorms onto computer programs is false. And if that assumption is false, we can't conclude anything about consciousness on computers using the rainstorm analogy.

ETA: note that this doesn't say anything about the possibility of conscious computers. The only point being made here is that Lanier's analogy is flawed and useless.
 
I note that you once again refuse to answer the simple question I posed. I'm at a loss as to why.

Do you or do you not expect the predictions made by the laws of physics to hold tomorrow?

Anyway, the scientific method is about the future, as well as the past (and the present). That the evidence it uses to make predictions about the future comes from the past isn't particularly interesting.

There is only one future that you and I will interact with. That is the future whose behavior we are interested in. And that is the future in which the predictions of science will be tested.
Do you expect them to fail that test? Do you think there is reason to believe that they will fail that test?

You're not making sense. Here's a prediction about the future: if you drop a bowling ball tomorrow, it will fall with an acceleration of 9.81 m/s^2.
That's not a prediction about the past: I'm literally talking about tomorrow.

It's true, but not very interesting, that you can't find out if the prediction is true until tomorrow, but that's true of any prediction.
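For concreteness, a trivial worked example of that prediction in Python, assuming free fall with g = 9.81 m/s^2 and ignoring air resistance:

g = 9.81                    # m/s^2, standard gravity
t = 1.0                     # seconds after release
velocity = g * t            # 9.81 m/s
distance = 0.5 * g * t**2   # 4.905 m fallen
print(velocity, distance)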

The issue is not about whether an abstract prediction will fail, but about its accuracy and its relationship with the future.

The accuracy of an abstract prediction is not dependent on the future, but on data from the past.

I disagree about it not being interesting. It means that our physical interaction with the world, such as yoga, is our closest interaction with the future, not an abstraction/thought, which has to do with the past. The result being that our biology has an as yet undiscovered ability to interact with the future with sufficient accuracy to survive.
 
The issue is not about whether an abstract prediction will fail, but about its accuracy and its relationship with the future.

The accuracy of an abstract prediction is not dependent on the future, but on data from the past.
That's absurd.
 
The issue is not about whether an abstract prediction will fail, but about its accuracy and its relationship with the future.
An abstract prediction that fails will have zero accuracy and relationship with the future.
 
The result being that our biology has an as yet undiscovered ability to interact with the future with sufficient accuracy to survive.

lol, "undiscovered" according to who? You?

You should really get caught up on, oh, I dunno, the last 300 years of the biological sciences.

EDIT -- you know what, never mind. Why don't you just provide an example of "interacting with the future with a sufficient accuracy to survive" and you can ask me how it works. That will save both of us the trouble. It will save you the trouble of having to actually learn something -- god forbid -- and it will save me the trouble of having to wade through all the misinformation you acquire as you learn from all the wrong sources.
 
Of course, it is possible to make a dynamic map between a rainstorm and a conscious mind, but that makes the rainstorm just a red herring. It would be easier to make a dynamic map between a wicker chair and a conscious mind. The fact that the wicker chair isn't conscious is meaningless. All the interesting stuff happens in the map.

We can take it further and just say that the easiest thing to map to a conscious mind is a collection of n particles that share the same world position and behavior.

Then the dynamic map turns out to be ... a conscious mind. An *exact* one, to be precise.

Mathematically, a human brain is merely a dynamic mapping of a whole bunch of particles from the same bunch of particles all squished together at <0, 0, 0, ..., 0>.
 
Except computers can play chess. Therefore, the assumption that we can map rainstorms onto computer programs is false.

To be formally correct, it doesn't invalidate the assumption that we can map rainstorms onto computer programs. It invalidates the notion that you can conclude anything at all about the rainstorm based on the possible existence of such a mapping.

I agree that it makes the analogy flawed and useless.
 
To be formally correct, it doesn't invalidate the assumption that we can map rainstorms onto computer programs. It invalidates the notion that you can conclude anything at all about the rainstorm based on the possible existence of such a mapping.

True, I was assuming a static map, which isn't possible, but would allow some conclusions if it were.

A dynamic map is possible, but since a dynamic map is always possible, no matter what two systems you're using, it doesn't allow you to draw any conclusions.
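A minimal Python sketch of that static/dynamic distinction, with invented toy state sequences:

# Static map: one fixed dictionary applied at every timeslice.
static_map = {"R": "B"}   # the same function, forever

# Dynamic map: a family of functions, one per timeslice. This always
# "succeeds" for ANY two state sequences, which is exactly why its
# existence licenses no conclusions about either system:
def dynamic_map(rain_seq, brain_seq):
    return [{r: b} for r, b in zip(rain_seq, brain_seq)]

print(dynamic_map(["R1", "R2"], ["B1", "B2"]))
# [{'R1': 'B1'}, {'R2': 'B2'}]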
 
lol, "undiscovered" according to who? You?

You should really get caught up on, oh, I dunno, the last 300 years of the biological sciences.

EDIT -- you know what, never mind. Why don't you just provide an example of "interacting with the future with a sufficient accuracy to survive" and you can ask me how it works. That will save both of us the trouble. It will save you the trouble of having to actually learn something -- god forbid -- and it will save me the trouble of having to wade through all the misinformation you acquire as you learn from all the wrong sources.

The ability to use abstract information from our environments to predict outcomes is a recent occurrence in animal biological history and has yet to prove as successful as the survival skills of animals that lack this ability. All human physical skills fall into this category, as they do not require this ability. No matter how much maths you do on the subject or tennis you watch on TV, you won't be a champion tennis player without actually playing tennis. We are far from understanding and predicting how to produce a champion tennis player out of a computer.
 