
Has consciousness been fully explained?

Ok, why does this discussion get so personal? I get it with politics, but we're talking about different conceptions of consciousness.
 
Easy question. Easy answer. Just pose the question and wait for 113 pages of word salad to be posted. Then scan page 114. If it's still word salad, then the answer is a big fat NO.

On a related note: the causality paradox. Every effect seems to have a cause, which in turn has another cause. However, it seems that we must arrive either at an uncaused cause or at an infinite regression of causes stretching backward through time. Which is more likely? Discuss.
 
If you have a formal education in math, you should sue your college:

Why Malerin is Wrong About Bayes Theorem

His physics isn't too hot either. I had to explain relativity in detail (with references, of course) in the "Are You Conscious" and "Star Trek Transporter" threads. He wouldn't have it.

This wouldn't be worth reiterating except that constantly questioning the beliefs, education and qualifications of the people who have the temerity to disagree with RD is part of his modus operandi. It would be irrelevant in any case, since someone's arguments should stand on their own merits unless they choose to support them with an assertion of professional competence - but given that RD has failed to demonstrate any outstanding competence in any of the fields in which he demands expertise from others, there's particular irony.
 
Ok, why does this discussion get so personal? I get it with politics, but we're talking about different conceptions of consciousness.

I've found it necessary to drop some posters. However, dlorde and the Wasp seem able to discuss these matters without accusations of dishonesty or personal abuse, so it is possible.
 
On a related note: the causality paradox. Every effect seems to have a cause, which in turn has another cause. However, it seems that we must arrive either at an uncaused cause or at an infinite regression of causes stretching backward through time. Which is more likely? Discuss.

Uncaused events are part of standard quantum theory, and have been for many years.
 
I was going to forego the foregone conclusion for a little longer, and I must say that I'm not quite sure it's fair to call it word salad - at least not all of it. Some of these guys know more about epistemology and biology and psychology and cybernetics than I ever did or will, and argue their points with vigor, authority and precision. That is what makes the utter hopelessness of agreement so entertaining.

It's possible to learn a lot from discussions like this. In particular, it's possible to learn precisely what one thinks to be true oneself. The only way to find out for sure is to argue for it.
 
It really doesn't make any difference, in principle. Whether or not animals are conscious - and if they are, which animals - doesn't change anything essential about the discussion. Clearly the mechanics of any animal consciousness would be similar to those of human consciousness anyway.

But that's the whole point.

If animals are conscious but at a lower "level" than humans, then it's entirely possible that other things (computers) could be considered conscious as well. This is why I said what I said: it's important to know what we mean by "conscious" in discussions like this, and stick to it.
 
Alright, now it's clear that you're just being contrary for contrary's sake.

And only because I disagree with you. :rolleyes:

Tell me, in your experience, is the sensation of "cold" identical to, say, the flavor of "bitter"?

Of course not. Please read what I wrote again.

Last I checked, I couldn't 'easily see' the sensation of nausea either, but it's still a quale.

I shouldn't have expected you to understand. You can FEEL nausea easily enough, however. But what about "square root"?

At least in my experience, abstract concepts are usually manifested in my mind as some learned symbol(s) which encapsulates some sense(s) of the concept(s) I've been trained to associate them with.

That's not a half-bad answer at all. But although we see the square root symbol, that doesn't really tell us how we can understand a concept as abstract as mathematics with qualia.

It's like organizing subjective tally marks, or beads. It doesn't matter which quale or combination of qualia one employs; so long as they are ordered and manipulated properly in one's awareness, they will suffice.

How about trillions? We can certainly understand the concept, but there is no way in hell we can relate it to real-life experiences. Nobody can imagine trillions of cookies, for instance. So how do you figure that relates to qualia?

AMM: "'Numbers' aren't a postulated hypothetical but a categorical label for an indisputable given--"

Belz...:"Hmph! Your say-so doesn't make it true x-P "

:rolleyes:

Wow. I see what you did there! You replaced one word with another and thought the same reply applies. Let's see if I can point out why that's flawed:

Hypothetical conversation:

Some theist: God exists.
Me: Your say-so doesn't make it true.

Now compare:

Me: Oranges exist.
You: Your say-so doesn't make it true.

Add roll-eye smilies and other forms of sarcasm. Doesn't change my original point about god though.
 
But that's the whole point.

If animals are conscious but at a lower "level" than humans, then it's entirely possible that other things (computers) could be considered conscious as well. This is why I said what I said: it's important to know what we mean by "conscious" in discussions like this, and stick to it.

Well, we can come up with all kinds of definitions of consciousness - SRIP, when someone behaves like this, something that happens when neurons fire - but there isn't going to be any consensus on a definition of consciousness, and there isn't going to be any agreement on whether animals are conscious.
 
I have no idea wtf a BF quine self-interpreter is.
I find it very surprising that you say you don't know what a BF quine or a BF self-interpreter is, especially as I understand you are a programmer. In case it really is news to you, brainf*** (masked here but quite often abbreviated to BF) is a simple yet Turing-complete esoteric programming language (including some variations). I've mentioned it, and the quine and self-interpreter, in a number of other posts both in this and in the washing machine thread, so I thought you would have picked up on at least one of those earlier posts.

Google brainf***.
Google quine. (A term apparently coined by Hofstadter in GEB according to Wikipedia.)
Google self-interpreter.
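
If it helps to make "simple yet Turing complete" concrete, here is a toy interpreter for the core of the language. This is just my own sketch (in Java, since that's the language that comes up later in this thread), not anyone's reference implementation: the ',' input command is omitted for brevity, and note that it is an interpreter for BF written in Java, not the BF self-interpreter itself. The hello-world program in main is the classic one from the Wikipedia article.

class BfInterpreter {
    static void run(String program) {
        byte[] tape = new byte[30000]; // the conventional tape size
        int ptr = 0;                   // data pointer into the tape

        for (int pc = 0; pc < program.length(); pc++) {
            switch (program.charAt(pc)) {
                case '>': ptr++; break;       // move pointer right
                case '<': ptr--; break;       // move pointer left
                case '+': tape[ptr]++; break; // increment current cell
                case '-': tape[ptr]--; break; // decrement current cell
                case '.': System.out.print((char) tape[ptr]); break; // output cell
                case '[': // if cell is zero, jump forward past the matching ']'
                    if (tape[ptr] == 0) {
                        int depth = 1;
                        while (depth > 0) {
                            pc++;
                            if (program.charAt(pc) == '[') depth++;
                            if (program.charAt(pc) == ']') depth--;
                        }
                    }
                    break;
                case ']': // if cell is nonzero, jump back to the matching '['
                    if (tape[ptr] != 0) {
                        int depth = 1;
                        while (depth > 0) {
                            pc--;
                            if (program.charAt(pc) == ']') depth++;
                            if (program.charAt(pc) == '[') depth--;
                        }
                    }
                    break;
            }
        }
    }

    public static void main(String[] args) {
        // Classic hello-world program, taken from the Wikipedia article on BF
        run("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
            + ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.");
    }
}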

I think in terms of systems of particles.

Suppose you have three systems of particles A, B, and C. Further suppose that C can be anything in the rest of the universe if need be; it doesn't affect the argument.

Suppose the behavior of A is dependent upon the behavior of B such that when B is in a certain subset of states, the state of A converges to state a1, and when B is in another subset of states, the state of A converges to state a2. In other words:

State of B is in { 1, 2, 3, ..., n } == A converges to state a1
State of B is in { n + 1, n + 2, ..., m } == A converges to state a2
(for the sake of convenience the states are merely labeled as integers above)

Further suppose that B is in a certain configuration that allows it to "interface" with other systems -- maybe like an enzyme interfaces with molecules that may or may not be one of its substrates, whatever. And suppose that when B interfaces with a certain set of systems -- any one of which can be called C, above -- B is put in one of the states belonging to the first set above, { 1, 2, 3, ..., n }.

Finally, suppose that B is put in a state belonging to the second set above, { n + 1, n + 2, ..., m }, IF AND ONLY IF B interfaces with A.

What does this mean? It means A will ONLY ever be in a state that converges to a2 if B is interfacing with A itself.

That is the fundamental idea of self reference. In this situation B is the "reference" and it can be referencing "self" or "non-self" from the point of view of system A. Of course A doesn't think "self", because it can be a very simple system that doesn't think at all -- it just behaves differently when B is interfacing with A itself vs. with anything else.

Note that there MUST be the second set of behaviors -- when B is NOT referencing self -- for self reference to apply. There is no such thing as a self when there is no non-self. Both must exist.

Please ask if you have any questions.
Thanks for trying but what were you on when you wrote this? Or perhaps it was meant to be a joke of some kind? (I must admit I laughed on my first reading.)

If meant seriously, I don't think you were having a good day, because it's garbled, incomplete and confusing. We have "systems of particles", states, things "interfacing", dependencies, "==", and then suddenly a self and a non-self pop up as well. Whatever, yep. Quite possibly it made sense to you at the time, but if so I think you kept too much in your own head for what you wrote to be of much use to anybody else. Sorry if this seems too harsh.

Were you trying to say that B (while in one of its second set of states) is "self-referencing" itself (but via A in state a2), or that A (in state a2) is "self-referencing" itself via B, or something else altogether - perhaps a1 means "non-self" and a2 means "self"? So far as I can make out, you seem to have done little more than (possibly) set up some kind of positive feedback loop between A and B (whilst they are in the appropriate states). How many other systems can a particular system "interface with" at any one time? What controls which of the possible state changes has precedence if the answer is more than one?

In any case, what I was actually hoping for was a simple, clean programming example to demonstrate SRIP in the form that you and Pixy claim is logically equivalent to consciousness (and therefore also has some kind of subjective experience while running even if that might be very limited and not necessarily at all like anything I experience).
 
In any case, what I was actually hoping for was a simple, clean programming example to demonstrate SRIP in the form that you and Pixy claim is logically equivalent to consciousness (and therefore also has some kind of subjective experience while running even if that might be very limited and not necessarily at all like anything I experience).

I've been waiting a long time for a precise definition of SRIP. I hope the next response won't be "this effect only occurs in examples too complicated to post here".
 
dlorde said:
Yes, for the definition of consciousness I used in my last post.

We can recognize consciousness through behavior which indicates the ability to differentiate abstractly between subject and object.
OK, that's a start.

Behaviourally an ant can differentiate between self and other, and can distinguish various kinds of 'other' and treat them accordingly; some level of abstraction is necessary for this behaviour. Is this differentiating abstractly between subject and object? If not, can you suggest a simple example?

The behavior you described does not indicate in any way that ants have any ability to form an abstract representation which describes a subject-object relationship. Since the only way we can know what abstraction someone uses to differentiate subject from object is through symbols, symbol use is the minimum behavioral requirement. An example would be written language.


dlorde said:
Memory seems to play an important role in this ability.
The ability to become aware of a continuity of my relationship with an object even when it's no longer experienced.
Children generally develop a sense of object persistence at around 8-12 months.
Just to be clear, what do you mean by "object persistence"?

dlorde said:
When children start claiming objects as theirs not just when they are experiencing them, but when someone else might.
Competition for favoured items is generally present by around 18 months...
I would be interested in your reference for this please.

dlorde said:
There are many stages of cognitive and social development in the first two years; at which stage would you suggest consciousness first appears?
It is a gradual process, reaching full realization when a child uses "I" for the first time. "I" is the only concept which we cannot learn to use as such by copying adults. We learn to use it when we become fully conscious of our own thoughts. The "I" is, in a way, synonymous with "consciousness".
 
I'll just tell you then:

Hilary Putnam

[embedded video]

Wait, there's more:

[embedded video]

Putnam's Brain in a Vat thought experiment is what the post you derided is largely based on.

Please try to have a little fun RD. I am.

Ah, I see.

Also, that argument is complete and utter nonsense.

Take, for example, any reconstruction of premise 1:

"If I am a brain in a vat, then it is not true that if my word for X refers to something, it refers to X."

What does that even mean? If you were a brain in a vat, your word for trees would refer to simulated trees. The existence of trees in the outside world is irrelevant.

In fact, Putnam's premise that the vat complex pre-existed, and "nobody" programmed in any relationship between real trees and simulated trees, invalidates his whole argument. How could information limited to the vat refer to anything outside the vat if nobody programmed such a link in?

Utter nonsense. Of course, so is all the rest of anti-computationalism.
 
If meant seriously, I don't think you were having a good day, because it's garbled, incomplete and confusing. We have "systems of particles", states, things "interfacing", dependencies, "==", and then suddenly a self and a non-self pop up as well.

It is meant seriously.

system of particles == any set of particles. Sorry, I guess I should have said "set" but I didn't want to use the word "set" more than I needed to.

state == specific arrangement. All sets of particles have an infinite set of states (notwithstanding the possibly discrete nature of the universe). I must add that if you don't know what the "state of a system" means, then you won't be able to understand any of this anyway.

interface == direct causal dependency, as opposed to indirect.

dependency == when X is observed iff Y is observed, then X and Y are dependent. If X is observed every time Y is observed, but not vice versa, then X is dependent on Y but not vice versa. We call this causation, which is just a term for causal dependency.

self and not-self are then defined according to what I wrote.

What don't you understand? I think it was pretty clear. If A behaves a certain way when B interfaces with A, vs. when B interfaces with something other than A, then there is self reference and not-self reference, respectively.
 
In any case, what I was actually hoping for was a simple, clean programming example to demonstrate SRIP in the form that you and Pixy claim is logically equivalent to consciousness (and therefore also has some kind of subjective experience while running even if that might be very limited and not necessarily at all like anything I experience).

Programming is a little misleading because all sorts of stuff is done by the runtime environment, but whatever.


class A {
    A someReference;

    void think() {
        // Java compares object references directly; no address-of operator needed
        if (someReference == this) {
            // self-referential behavior: the reference points back at this object
            System.out.println("do something (self-referential)");
        } else {
            System.out.println("do something else (non-self)");
        }
    }
}
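
A quick way to exercise it (my addition, just to make the two branches visible):

class Demo {
    public static void main(String[] args) {
        A a = new A();
        a.someReference = a;        // the reference points back at a itself
        a.think();                  // takes the self-referential branch
        a.someReference = new A();  // now it points at some other object
        a.think();                  // takes the non-self branch
    }
}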
 
rocketdodger said:
I'll just tell you then:

Hilary Putnam

[embedded video]

Wait, there's more:

[embedded video]

Putnam's Brain in a Vat thought experiment is what the post you derided is largely based on.

Please try to have a little fun RD. I am.

Ah, I see.

Also, that argument is complete and utter nonsense.

Take, for example, any reconstruction of premise 1:

"If I am a brain in a vat, then it is not true that if my word for X refers to something, it refers to X."

What does that even mean? If you were a brain in a vat, your word for trees would refer to simulated trees. The existence of trees in the outside world is irrelevant.

In fact, Putnam's premise that the vat complex pre-existed, and "nobody" programmed in any relationship between real trees and simulated trees, invalidates his whole argument. How could information limited to the vat refer to anything outside the vat if nobody programmed such a link in?

Utter nonsense. Of course, so is all the rest of anti-computationalism.


What I have bolded in your quote, RD - your claim that "if you were a brain in a vat, your word for trees would refer to simulated trees" - is part of the point Putnam was making here.

Let's check it again:

A*. If I am a BIV, then it is not the case that if my word ‘tree’ refers, then it refers to trees.

B*. If my word ‘tree’ refers, then it refers to trees. So,

C. I am not a BIV.
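
To make the shape of the deduction explicit (my own gloss, not Putnam's notation; read V as "I am a BIV", R as "my word 'tree' refers", and T as "it refers to trees"):

A*: V -> not (R -> T)
B*: R -> T
C: therefore, not V

B* directly contradicts the consequent of A*, so by modus tollens V must be false. Whether we are entitled to assume B* in the first place is, of course, exactly what the begging-the-question objection below turns on.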


Again, you said that "if you were a brain in a vat, your word for trees would refer to simulated trees."

Premise B* of Putnam's deduction is that your word refers to non-simulated trees. So if your word for trees refers to non-simulated trees, then you are not in a simulation.

Also, Putnam is the individual credited with having first proposed computationalism in its modern form.

Do you think he may have an understanding of the field?
 
That was a joke.

But you have piqued my curiosity. Do you have a link where I could explore this topic further?

I don't see an obvious link, but any quantum theory text would explain how certain events - such as radioactive decay - don't appear to have a direct cause.
 
Premise B* of Putnam's deduction is that your word refers to non-simulated trees. So if your word for trees refers to non-simulated trees, then you are not in a simulation.

Yeah, that is begging the question.

In fact I think that article you linked specifically states that a criticism of Putnam's position is that that step merely begs the question.

And after all of the nonsense in that article -- the kind of nonsense that makes philosophy a not-so-respected profession, even among academic circles -- the criticism was not addressed.

Also, Putnam is the individual credited with having first proposed computationalism in its modern form.

Do you think he may have an understanding of the field?

No.

And I will tell you why -- any computationalist who actually understands the principles of computationalism knows that you can only build arguments from the ground up. You start with simple things that are well defined, and figure out how to get to vastly more complex behavior from those humble beginnings.

Here, Putnam is injecting some high level philosophical jargon that is ill-defined at the computational level -- truth, among other things -- and trying to refute computationalism with some philosophical hand waving.

What would Putnam say if we told him "stop talking philosophy, start talking physics. If you have a group of particles, how can they determine the original nature of an indirect cause? How does a molecule of chlorophyll know whether a photon came from the sun instead of a grow light providing the same wavelengths?" Eh? What would he say? What would you say?

The response "well, we know about philosophical truth" is just philosophical hand waving. It is saying "don't worry about the physics, the reality, just focus on the a priori assumptions philosophy lets us make." Don't you realize that the proposition that our neurons can somehow discern the nature of what is activating them is tantamount to dualism? It proposes that something else besides the observed physical world is taking place.

Sorry, that isn't how computationalism works. And that means you can't argue against computationalism using philosophy. That is like a tribal shaman trying to shoot down a helicopter with spirit prayers or whatever nonsense he can come up with. His best bet is to just pick up a rock and throw it.

So for Putnam to all of a sudden think he can shoot down computationalism with something that doesn't even make sense scientifically illustrates that he doesn't know what computationalism entails any more.

This is what computationalism is all about -- illustrated by a quote from Sherlock Holmes:
If you eliminate the impossible, whatever remains -- however improbable -- must be the truth
You don't argue against such a thing by saying "well, it is really improbable."
 
