Easy question. Easy answer. Just pose the question and wait for 113 pages of word salad to be posted. Then scan page 114. If it's still word salad, then the answer is a big fat NO.
If you have a formal education in math, you should sue your college:
Why Malerin is Wrong About Bayes Theorem
Ok, why does this discussion get so personal? I get it with politics, but we're talking about different conceptions of consciousness.
On a related note: the causality paradox. Every effect seems to have a cause, which in turn has another cause. However, it seems that we must arrive either at an uncaused cause or at an infinite regression of causes stretching backward through time. Which is more likely? Discuss.
I was going to forego the foregone conclusion for a little longer, and I must say that I'm not quite sure it's fair to call it word salad - at least not all of it. Some of these guys know more about epistemology and biology and psychology and cybernetics than I ever did or will, and argue their points with vigor, authority and precision. That is what makes the utter hopelessness of agreement so entertaining.
It really doesn't make any difference, in principle. Whether or not animals are conscious - and if they are, which animals - doesn't change anything essential about the discussion. Clearly the mechanics of possible animal consciousness would be similar to that of human consciousness anyway.
Alright, now it's clear that you're just being contrary for contrary's sake.
Tell me, in your experience, is the sensation of "cold" identical to, say, the flavor of "bitter"?
Last I checked, I couldn't 'easily see' the sensation of nausea either, but it's still a quale.
At least in my experience, abstract concepts are usually manifested in my mind as some learned symbol(s) which encapsulates some sense(s) of the concept(s) I've been trained to associate them with.
It's like organizing subjective tally marks, or beads. It doesn't matter which quale or combination of qualia one employs; so long as they are ordered and manipulated properly in one's awareness, they will suffice.
AMM: "'Numbers' aren't a postulated hypothetical but a categorical label for an indisputable given--"
Belz...: "Hmph! Your say-so doesn't make it true x-P"
Some theist: God exists
Me: Your say-so doesn't make it true.
Me: Oranges exist
You: Your say-so doesn't make it true.
But that's the whole point.
If animals are conscious but at a lower "level" than humans, then it's entirely possible that other things (computers) could be considered conscious as well. This is why I said what I said: it's important to know what we mean by "conscious" in discussions like this, and stick to it.
I have no idea wtf a BF quine self-interpreter is.

I find it very surprising that you say you don't know what a BF quine or BF self-interpreter are, especially as I understand you are a programmer. In case it really is news to you, brainf*** (masked here, but quite often abbreviated to BF) is a simple yet Turing-complete esoteric programming language (including some variations). I've mentioned it, the quine, and the self-interpreter in a number of other posts, both in this and in the washing machine thread, so I thought you would have picked up on at least one of those earlier posts.
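Since quines keep coming up, here is what one looks like in a more familiar language. This is a minimal quine in Python rather than BF, but the principle is identical: a program whose output is exactly its own source code.

```python
# A minimal quine: running this program prints its own source.
# %r embeds the string with quotes and escapes intact,
# and %% becomes a literal % in the output.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

A BF quine does the same thing with far more ceremony, and a BF self-interpreter is simply a BF program that reads any BF program as input and executes it.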
Thanks for trying, but what were you on when you wrote this? Or perhaps it was meant to be a joke of some kind? (I must admit I laughed on my first reading.)

I think in terms of systems of particles.
Suppose you have three systems of particles A, B, and C. Further suppose that C can be anything in the rest of the universe if need be; it doesn't affect the argument.
Suppose the behavior of A is dependent upon the behavior of B such that when B is in a certain subset of states the state of A converges to state a1 and when B is in another subset of states the state of A converges to state a2. In other words
State of B is in { 1, 2, 3, 4, ..., n } == A converges to state a1
State of B is in { n + 1, n + 2, ..., m } == A converges to state a2
(For the sake of convenience, the states are merely labeled as integers above.)
Further suppose that B is in a certain configuration that allows it to "interface" with other systems -- maybe like an enzyme interfaces with molecules that may or may not be one of the substrates it catalyzes, whatever. And suppose that when B interfaces with a certain set of systems -- any one of which can be called C, above -- B is put in one of the states belonging to the first set above, { 1, 2, 3, 4, ..., n }.
Finally, suppose that B is put in a state belonging to the second set above, { n + 1, n + 2, ..., m }, IF AND ONLY IF B interfaces with A.
What does this mean? It means A will ONLY ever be in a state that converges to a2 if B is interfacing with A itself.
That is the fundamental idea of self reference. In this situation B is the "reference" and it can be referencing "self" or "non-self" from the point of view of system A. Of course A doesn't think "self" because it can be a very simple system that doesn't think at all -- it just behaves differently when B is interfacing with itself vs. anything else.
Note that there MUST be the second set of behaviors -- when B is NOT referencing self -- for self-reference to apply. There is no such thing as a self when there is no non-self. Both must exist.
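To make the setup concrete, here is a minimal sketch in Python (all names, state labels, and thresholds here are illustrative toys, not claims about any particular physical system):

```python
# Toy model of the A/B/C setup above. B's states are labeled by
# integers: {1, ..., n} is the first set, {n+1, ..., m} the second.
n, m = 5, 10
assert n < m  # both sets are non-empty and disjoint

def b_state_after_interfacing(partner):
    """The state B settles into after interfacing with some system.
    B lands in the second set {n+1, ..., m} if and only if its
    partner is A itself; any other system C leaves it in {1, ..., n}."""
    return n + 1 if partner == "A" else 1

def a_converged_state(b_state):
    """A converges to a1 or a2 depending only on which set B is in."""
    return "a1" if b_state <= n else "a2"

# A ends up in a2 exactly when B is referencing A -- self vs. non-self:
print(a_converged_state(b_state_after_interfacing("A")))  # a2 ("self")
print(a_converged_state(b_state_after_interfacing("C")))  # a1 ("non-self")
```

The whole point of the sketch is the if-and-only-if coupling: state a2 is reachable exactly when B interfaces with A itself.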
Please ask if you have any questions.
In any case, what I was actually hoping for was a simple, clean programming example to demonstrate SRIP in the form that you and Pixy claim is logically equivalent to consciousness (and therefore also has some kind of subjective experience while running even if that might be very limited and not necessarily at all like anything I experience).
dlorde said: OK, that's a start.

Yes, for the definition of consciousness I used in my last post.
We can recognize consciousness through behavior which indicates the ability to differentiate abstractly between subject and object.
Behaviourally an ant can differentiate between self and other, and can distinguish various kinds of 'other' and treat them accordingly; some level of abstraction is necessary for this behaviour. Is this differentiating abstractly between subject and object? If not, can you suggest a simple example?
dlorde said: Children generally develop a sense of object persistence at around 8-12 months. Memory seems to play an important role in this ability.

Just to be clear, what do you mean by "object persistence"?
The ability to become aware of a continuity of my relationship with an object even when it's no longer experienced.
When children start claiming objects as theirs, not just when they are experiencing it, but when someone else might.

dlorde said: Competition for favoured items is generally present by around 18 months...

I would be interested in your reference for this, please.
dlorde said: There are many stages of cognitive and social development in the first two years; at which stage would you suggest consciousness first appears?

It is a gradual process to a full realization when a child uses "I" for the first time. "I" is the only concept which we cannot learn to use as such through copying adults. We learn to use it when we become fully conscious of our own thoughts. The "I" in a way is synonymous with "consciousness".
I'll just tell you then:
Hilary Putnam
Wait, there's more:
Putnam's Brain in a Vat thought experiment is what the post you derided is largely based on.
Please try to have a little fun RD. I am.
If meant seriously, I don't think you were having a good day, because it's garbled, incomplete and confusing. We have "systems of particles", states, things "interfacing", dependencies, "==", and then suddenly a self and a non-self pop up as well.
Uncaused events are part of the Standard Model, and have been for many years.
rocketdodger said: I'll just tell you then:
Hilary Putnam
Wait, there's more:
Putnam's Brain in a Vat thought experiment is what the post you derided is largely based on.
Please try to have a little fun RD. I am.
Ah, I see.
Also, that argument is complete and utter nonsense.
Take, for example, any reconstruction of premise 1:
"If I am a brain in a vat, then it is not true that if my word for X refers to something, it refers to X."
What does that even mean? If you were a brain in a vat, your word for trees would refer to simulated trees. The existence of trees in the outside world is irrelevant.
In fact, Putnam's premise that the vat complex pre-existed, and "nobody" programmed in any relationship between real trees and simulated trees, invalidates his whole argument. How could information limited to the vat refer to anything outside the vat if nobody programmed such a link in?
Utter nonsense. Of course, so is all the rest of anti-computationalism.
A*. If I am a BIV, then it is not the case that if my word ‘tree’ refers, then it refers to trees.
B*. If my word ‘tree’ refers, then it refers to trees. So,
C. I am not a BIV.
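For reference, stripped to its propositional skeleton the deduction is just modus tollens. Writing $V$ for "I am a BIV" and $P$ for "if my word 'tree' refers, then it refers to trees":

$$\text{A*}: V \rightarrow \neg P, \qquad \text{B*}: P, \qquad \text{C}: \therefore \neg V.$$

From B* we have $P$; contraposing A* gives $P \rightarrow \neg V$; hence $\neg V$.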
That was a joke.
But you have piqued my curiosity. Do you have a link where I could explore this topic further?
Premise B* of Putnam's deduction is that your word 'tree' refers to non-simulated trees. So if your word for trees refers to non-simulated trees, then you are not in a simulation.
Also, Putnam is the individual credited with having first proposed computationalism in its modern form.
Do you think he may have an understanding of the field?
If you eliminate the impossible, whatever remains–however improbable–must be the truth

You don't argue against such a thing by saying "well, it is really improbable."