Explain consciousness to the layman.

Status
Not open for further replies.
So what he's saying is if anyone actually made a ball-catching robot, his argument would be utterly disgraced?

Oh, for heaven's sake. Why can't people pay attention?

A ball-catching robot like this one?

Or this one?

Or this one?

Admittedly the second two are merely juggling, but I think that's a pretty dang academic distinction.

I specifically said that a ball catching robot would be possible. Indeed, I would have been amazed if none had existed. What I said, very clearly, very precisely, was that such robots could not be programmed using the Turing model. I explained in considerable detail the difference between the Turing model and the real time model used to...

Oh, forget it. You caught me out. Jolly well done.
 
This is a bit of a thread hijack, but I feel a certain amount of the preceding material is necessary for the discussion. In a new thread you'd just have similar people hashing out similar things again before the topic got started anyway.
We actually discussed this quite a bit here. In fact, that thread seems to be an appropriate place to move the discussion if you want to hash it out.
A common convention which works 90% of the time is material continuity.
Agree that this doesn't work.
Another convention, my preferred one, is pattern continuity. This certainly has problems, as our minds are stopped
I'm not sure what you mean by pattern continuity here; based on the problem you cite, it seems you're holding a sort of axiom that there should be some "always on" computer, and if it's ever shut off, the pattern ends, and any reboot is a different thing altogether. Given this description, I don't think this works either.
Yet a third, which it seems you switched to when ditching the first, is... I don't know what to call it. But as long as there's still someone in the universe who can legitimately call himself you (by some standard, opinions wildly differ here), what happens to you is inconsequential.
This one seems to assume that somehow you can be in two places at once, and I don't think it works for that reason.

Given at least my interpretation, I don't like either of the above--no wonder you find them irrational!

I propose a fourth alternative--informational continuity. Specifically, there are certain kinds of things that only I am privy to, and I can develop memories of these sorts of things. In situations where those sorts of memories are genuine, I should have all rights to claim that I remember being that person.

I think this is the critical invariant for continuity, and it dodges all three of your problems. For the Ship of Theseus issue, it really doesn't matter what molecules are in my body--it matters more what the mental states represent, and whether the represented states have a "correct" causal relationship to what they represent. As for the pattern issue, so long as the information got carried, it's not a problem if everything stopped--if you were frozen in carbonite and reconstituted, you still have rights to claim to be the same person.

The cloning part of the infamous teleporter is the "tricky" one, but it's not really too bad if you think about it. This makes two bona fide separate individuals; each should care about the other just as they would their twin. Neither can claim to be the other, because neither is causally linked to the other's subjectivity in the right way. However, both can claim to have been the original, because they were both causally related in the appropriate way to their past counterparts.
 
The two are not mutually exclusive.

No, the one implicitly includes the possibility of the other. (Though the fact that something is not explained does not mean that all explanations are possible).

Could you please link to that detailed analysis ?

I could. Maybe I even will.


1) "Relativity is not correct" is also a negative. Hopefully you see why someone who makes that claim would have to do more than just claim it.

2) You are simply stating that it's the default position but are not giving me a reason to believe you.

The default position for anything is that it is not understood, until it is understood. It's not actually possible to demonstrate that something isn't understood. It's only possible to refute that claim by demonstrating understanding.


3) When evidence IS presented, you ignore it.

What evidence is this? I've been presented with several scenarios of the "what if a computer passed all the tests for consciousness, huh?" type. Not a lot of evidence.

Self-referential information processing. ;) I know you love that one.
 
I took him to mean that it can't be done using real-to-us 24 hr/day 3600 sec/hr time.

What I'm claiming is that a robot programmed using the Turing model could not perform a real-time action like catching a ball because a Turing model implies a closed system, and catching a ball necessitates an open system, which is capable of response to external events.

A program to catch a ball according to the Turing model would have to include the ball as part of the model - which is clearly not the way to proceed.

And no, relativity considerations do not alleviate that problem.
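The disagreement can be made concrete with a toy sketch (Python here purely for illustration; the function names and the servo constant are my own invention, not anyone's actual robot code). The first function receives its whole input up front, the way a Turing machine's tape is written before the machine starts; the second reads a sensor while it runs.

```python
# "Closed" computation: the entire input (the ball's trajectory) is fixed
# before the computation starts, like symbols pre-written on a tape.
def catch_closed(trajectory):
    """trajectory: list of (t, x, y) samples, all known in advance."""
    t, x, y = trajectory[-1]           # read the precomputed endpoint
    return (x, y)                      # "move the hand" to that position

# "Open" (interactive) computation: input arrives while the program runs,
# so each step depends on a sensor reading unavailable at start time.
def catch_open(sensor, steps=100):
    hand = (0.0, 0.0)
    for _ in range(steps):
        x, y = sensor()                # fresh reading mid-computation
        hand = (hand[0] + 0.5 * (x - hand[0]),
                hand[1] + 0.5 * (y - hand[1]))   # servo toward the ball
    return hand
```

Whether the second loop is genuinely outside the Turing model, or just a Turing machine whose tape happens to be written incrementally, is precisely the question in dispute here.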
 
The cloning part of the infamous teleporter is the "tricky" one, but it's not really too bad if you think about it. This makes two bona fide separate individuals; each should care about the other just as they would their twin. Neither can claim to be the other, because neither is causally linked to the other's subjectivity in the right way. However, both can claim to have been the original, because they were both causally related in the appropriate way to their past counterparts.

That seems a reasonable way to look at it, but I'm sure there will be plenty of people claiming that if you are rational, knowing that an exact duplicate exists means you shouldn't mind being vapourised.
 
What I'm claiming is that a robot programmed using the Turing model could not perform a real-time action like catching a ball because a Turing model implies a closed system, and catching a ball necessitates an open system, which is capable of response to external events.
Why can't a Turing-based system have sensors to detect things outside of its memory and model?

And, what does "real-time" performance have to do with consciousness? It does not matter how fast or slow the algorithm runs, it would still be a conscious process.

An algorithm to compress a file would still compress a file whether it ran slowly or quickly. An evolutionary algorithm would still produce an optimal result of some sort, if it ran slowly or quickly. Why would consciousness need to be an exception? Granted, reaction time to sensory perception would be impacted. But, aside from that: Why would a poorly performing consciousness system not be conscious?
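The compression point can be demonstrated directly. A hypothetical run-length encoder (my own toy, not a real library) produces the same output however slowly it runs:

```python
import time

# Run-length encoder: the output depends only on the input,
# not on how long each step happens to take.
def rle(data, delay=0.0):
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        if delay:
            time.sleep(delay)          # artificially slow the algorithm down
        i = j
    return out

fast = rle("aaabbc")
slow = rle("aaabbc", delay=0.01)
assert fast == slow == [("a", 3), ("b", 2), ("c", 1)]
```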
 
The cloning part of the infamous teleporter is the "tricky" one, but it's not really too bad if you think about it. This makes two bona fide separate individuals; each should care about the other just as they would their twin. Neither can claim to be the other, because neither is causally linked to the other's subjectivity in the right way. However, both can claim to have been the original, because they were both causally related in the appropriate way to their past counterparts.
Yes.

Personal identity is something we assign pragmatically based on a broad range of indicators; that it falls down when we consider physical impossibilities is not much of a bug, if it's a bug at all.

If someone claims to be your sister Fred, and that person looks like Fred, sounds like Fred, just came out of Fred's bedroom wearing Fred's pyjamas, has the same bruise over her left eye that Fred incurred in last night's karuta tournament, calls the cat Mr Waffleboots just as Fred does and complains that you've run out of Fred's favourite breakfast cereal.... Then there's a fair likelihood that this is Fred, particularly given that teleporters, instant cloning machines, human-indistinguishable cyborgs or androids, solidograms and whatnot don't actually exist.
 
Agreed. Got a link to a conscious robot?

If it were conscious would it still be a robot?

And how would we know if it were conscious?

This is a frustration I have with discussions about consciousness. We can't know whether the robot is conscious. We can't just ask it - some clever programmer could have taught it to waffle, to muse, to simulate numerous external manifestations of consciousness.

I'm quite sympathetic to the non-computational camp. I don't have the background to make a convincing argument. There may not be one. But I can come up with examples that appear to me to be non-computational.

To throw into relief the aspects of "mind" that may be non-material, it seems we need to keep chipping away at the material. What was inexplicable to Plato isn't inexplicable to us, so three cheers for empiricism. I know who's doing the heavy lifting here, and IMO ... it's not the philosophers, or the mystics, or the theorists dwelling on the so-called "hard problem."
 
What I'm claiming is that a robot programmed using the Turing model could not perform a real-time action like catching a ball because a Turing model implies a closed system

[snip]

A program to catch a ball according to the Turing model would have to include the ball as part of the model - which is clearly not the way to proceed.
You're consistently confusing the model with its application. Your argument is incoherent.
 
That seems a reasonable way to look at it, but I'm sure there will be plenty of people claiming that if you are rational, knowing that an exact duplicate exists means you shouldn't mind being vapourised.
Yes, people do claim that. Don't know why; I'd rather not be vapourised no matter what the conditions.
 
If it were conscious would it still be a robot?

And how would we know if it were conscious?

This is a frustration I have with discussions about consciousness. We can't know whether the robot is conscious. We can't just ask it - some clever programmer could have taught it to waffle, to muse, to simulate numerous external manifestations of consciousness.
How is that different to what people do?

I'm quite sympathetic to the non-computational camp. I don't have the background to make a convincing argument. There may not be one. But I can come up with examples that appear to me to be non-computational.
Yes, please do.

To throw into relief the aspects of "mind" that may be non-material, it seems we need to keep chipping away at the material. What was inexplicable to Plato isn't inexplicable to us, so three cheers for empiricism. I know who's doing the heavy lifting here, and IMO ... it's not the philosophers, or the mystics, or the theorists dwelling on the so-called "hard problem."
Quite. Plato of course had an excuse - he didn't have the work of Plato to build on...
 
Two very interesting research articles on subjective experience.

In the Churchland article he coaches the reader on how to see impossible colors. The article is a PDF of a print article. Does anyone here know if the paper/ink experience can be applied directly to the PDF/computer screen experience?

I tried fatiguing my cones but I'm not sure it worked - I couldn't see the blue that is just as dark as black but still retains its blueness.

Be patient with me, I am the proverbial "layman."
 
I tried fatiguing my cones but I'm not sure it worked - I couldn't see the blue that is just as dark as black but still retains its blueness.
Is it the cones that fatigue though, or the opponent process pathways? It would be nice to have an accurate mathematical model of afterimages to apply (my guess is that it would be the opponent processes that should receive focus).
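For what it's worth, here is a deliberately crude toy of the opponent-process guess (every name and number is made up; this is nowhere near the accurate mathematical model asked for above): sensitivity in an opponent channel decays while a stimulus is held, and the rebound at stimulus offset has the opposite sign, which is at least the right qualitative shape for an afterimage.

```python
# Toy opponent-process sketch, purely illustrative.
def afterimage_response(stimulus, adapt_rate=0.05, steps=200):
    gain = 1.0                       # channel sensitivity, fatigues with use
    for _ in range(steps):           # stare at the stimulus
        gain -= adapt_rate * gain * abs(stimulus)
    # At offset the input is neutral; the adapted channel's baseline shift
    # appears as a response of opposite sign to the stimulus.
    rebound = -(1.0 - gain) * stimulus
    return rebound
```

Stare at "yellow" (stimulus +1) and the rebound is negative (bluish); a real model would of course need separate cone and opponent stages to answer the question of which one fatigues.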
 
How is that different to what people do?

The point is, I wouldn't be able to tell if the robot was doing something different than people do. All I could do is observe the robot's behavior. I wouldn't know if it had been programmed to go through its memory and "act" conscious or if the response had arisen spontaneously. For lack of a better word, "spontaneous" will do; consciousness is the spontaneous process of knitting our internal narrative as we go along.

Yes, please do.

OK. I may be conflating "material" and "computational." Be kind.

When we have an emotion there is a neurochemical correlate. What comes first - the emotion or the neurochemical correlate?

Hormones squirting out beyond my conscious control can make me feel happy. I smile. But, darn it, what if I'm feeling yucky and frowning? Per my pop-psych understanding, if I start smiling, I'll feel better.

So there are three things going on - a hormone is squirting, I'm contorting my facial muscles and I'm having a subjective experience of how I'm feeling. The volitional element can change what's happening "inside." That seems non-computational.

Swinging on a swing or riding in an elevator I can experience something like weightlessness - a distinct physical sensation. Falling in love has produced the same sensation (it's been a while). This could be unique to me, I'm not sure. Anyway, it feels identical. Yet I'm not actually in free fall. I can't point to my sensation and say look, here's evidence that emotions can produce a sense of weightlessness; but, if this turns out to be a sensation others have had, I'd be curious as to whether there's an explanation.
 
Is it the cones that fatigue though, or the opponent process pathways? It would be nice to have an accurate mathematical model of afterimages to apply (my guess is that it would be the opponent processes that should receive focus).

I'll have to read the article again - but per my scandalously poor understanding of color, light and ink behave differently. I didn't know whether the cone/opponent-process pathway fatigue could be replicated exactly on a computer screen. Vision is a mystery to me - if this topic were in the science thread I would be lurking, but hell, theories of consciousness? Hum a few bars and I can fake it ...
 
Why can't a Turing-based system have sensors to detect things outside of its memory and model?

Because then it wouldn't be a Turing-type system. It would use a different model.

And, what does "real-time" performance have to do with consciousness? It does not matter how fast or slow the algorithm runs, it would still be a conscious process.

An algorithm to compress a file would still compress a file whether it ran slowly or quickly. An evolutionary algorithm would still produce an optimal result of some sort, if it ran slowly or quickly. Why would consciousness need to be an exception? Granted, reaction time to sensory perception would be impacted. But, aside from that: Why would a poorly performing consciousness system not be conscious?

Well, if consciousness is based on a Turing-type computation, then it would not be time dependent, and hence the same computation run at one thousandth or a thousand times as fast would give the exact same experience. That of course implies that all the interactions with the world be simulated - or else the world would appear to pass by at the wrong speed.

However, I'm claiming that it's at least possible that the way the brain works is dependent on time, and its interaction with the environment - the environment including the bodily functions under nervous control, voluntary and involuntary. I don't think that the closed, computational Turing model is necessarily the best to apply to the brain. If the Turing model is not used, and instead a time dependent model is applied, then clearly the mind cannot be simulated as a brain-in-a-box.

This appears plausible to me because obviously, if the brain didn't concern itself with interaction with the environment, then it wouldn't work at all. All the actions of the brain are highly time dependent. It seems that we should at least consider the possibility that the right model for the activity of the brain is similarly time-dependent. If that's the case, we cannot assume that the same "program" (if that is the correct term) will work at a different speed.

I've written programs that will give the same correct output regardless of how fast or slow they run, and other programs where failure to carry out operations within a certain time interval results in the program not achieving what it's supposed to do. Programs that are time dependent are different from programs that aren't.
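That last distinction can also be sketched in code (the function and the particular computation are invented for the example): a program whose result genuinely depends on wall-clock time, unlike the compression example, returns a different answer when given different time, or run at a different speed.

```python
import time

# A deadline-bound task: the quality of its answer depends on elapsed
# wall-clock time, not just on its input.
def refine_until_deadline(budget_s):
    deadline = time.monotonic() + budget_s
    estimate, n = 0.0, 0
    while time.monotonic() < deadline:
        n += 1
        estimate += ((-1) ** (n + 1)) / (2 * n - 1)   # Leibniz series for pi/4
    return 4 * estimate, n
```

Run the same code with a bigger budget, or on a faster machine, and you get a different (better) approximation - which is the sense of "time dependent" being claimed.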
 
The point is, I wouldn't be able to tell if the robot was doing something different than people do. All I could do is observe the robot's behavior. I wouldn't know if it had been programmed to go through its memory and "act" conscious or if the response had arisen spontaneously. For lack of a better word, "spontaneous" will do; consciousness is the spontaneous process of knitting our internal narrative as we go along.
Sure. And if you honestly can't tell whether the robot is doing one or the other?

OK. I may be conflating "material" and "computational." Be kind.
No worries.

When we have an emotion there is a neurochemical correlate. What comes first - the emotion or the neurochemical correlate?
Neither - the neurochemical correlate is part of the physical process of the emotion.

Hormones squirting out beyond my conscious control can make me feel happy. I smile. But, darn it, what if I'm feeling yucky and frowning? Per my pop-psych understanding, if I start smiling, I'll feel better.
Yes, that's my understanding too. The brain isn't always good at distinguishing correlation from causation.

So there are three things going on - a hormone is squirting, I'm contorting my facial muscles and I'm having a subjective experience of how I'm feeling. The volitional element can change what's happening "inside." That seems non-computational.
No - the fact that something is a computational process doesn't mean that there are no inputs or outputs or feedback loops - indeed, those are essential.

Swinging on a swing or riding in an elevator I can experience something like weightlessness - a distinct physical sensation. Falling in love has produced the same sensation (it's been a while). This could be unique to me, I'm not sure. Anyway, it feels identical. Yet I'm not actually in free fall. I can't point to my sensation and say look, here's evidence that emotions can produce a sense of weightlessness; but, if this turns out to be a sensation others have had, I'd be curious as to whether there's an explanation.
Well, the broad explanation is that it's your brain telling you this. ;)

Possibly someone has something more specific to offer.
 
Because then it wouldn't be a Turing-type system. It would use a different model.
Different how?

Please show the mathematics. Don't just assert things.

Well, if consciousness is based on a Turing-type computation, then it would not be time dependent, and hence the same computation run at one thousandth or a thousand times as fast would give the exact same experience.
Certainly.

That of course implies that all the interactions with the world be simulated - or else the world would appear to pass by at the wrong speed.
Of course, because otherwise the computation would be different.

However, I'm claiming that it's at least possible that the way the brain works is dependent on time, and its interaction with the environment - the environment including the bodily functions under nervous control, voluntary and involuntary.
Sure.

I don't think that the closed, computational Turing model is necessarily the best to apply to the brain.
Why not?

If the Turing model is not used, and instead a time dependent model is applied
How is this "time dependent model" actually different?

Show the mathematics.

This appears plausible to me because obviously, if the brain didn't concern itself with interaction with the environment, then it wouldn't work at all.
And if there were no data on the tape of a Turing machine, it wouldn't do anything.

This is not an argument.

All the actions of the brain are highly time dependent.
Sure. How does that conflict with the computational model?

It seems that we should at least consider the possibility that the right model for the activity of the brain is similarly time-dependent.
How does that conflict with the computational model?

If that's the case, we cannot assume that the same "program" (if that is the correct term) will work at a different speed.
Sure we can. Speed everything up equally, you'll get the same result.

If you change things, well, you've changed things.

This is not an argument, Westprog. This is just you being confused.

I've written programs that will give the same correct output regardless of how fast or slow they run, and other programs where failure to carry out operations within a certain time interval results in the program not achieving what it's supposed to do. Programs that are time dependent are different from programs that aren't.
Nope.
 