PixyMisa
Persnickety Insect
...Sure there's a way.
Can I buy a vowel?
I did mean "computable to arbitrary precision" rather than "strictly computable"; certainly there are processes that are not strictly computable.
Quote: "Can I buy a vowel?"
I'm thinking along the lines of RNGs.
Quote: "I did mean 'computable to arbitrary precision' rather than 'strictly computable'; certainly there are processes that are not strictly computable."
Not sure precision is always meaningful in this context, but I was just being annoyingly pedantic.
Quote: "I'm thinking along the lines of RNGs."
Yep. We can simulate RNGs (and simulate the distribution of a naturally random process to arbitrary precision), but they aren't computable.
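A minimal sketch of the simulation point above, not anything from the thread itself: a deterministic PRNG can approximate the statistics of a fair coin as closely as we like, while remaining entirely computable and repeatable. The function name and seed are illustrative only.

[code]
import random

def estimate_heads_probability(n_flips, seed=0):
    """Estimate P(heads) for a simulated fair coin using a deterministic PRNG."""
    rng = random.Random(seed)          # Mersenne Twister: deterministic, not truly random
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The empirical frequency approaches 0.5 as n grows (arbitrary precision in distribution),
# but re-running with the same seed reproduces exactly the same "random" sequence.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_heads_probability(n))
[/code]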
Quote: "Not sure precision is always meaningful in this context, but I was just being annoyingly pedantic."
Isn't that why we're here?
Quote: "I ask you, can you understand something to exist if you are not conscious?"
Colloquially speaking? Yes.
I'm not sure your experience could result from paper consciousness. I think the paper-entity would know it was a paper-entity.
I suppose the emulators could also emulate a body and the external world for the paper-entity.
~~ Paul
Quote: "Searle's point was to rebut AI. Your entire complaint is hyper silly and absurd. To think Searle's thought experiment had nothing to do with consciousness when the entire point was to show why the Turing test could never demonstrate consciousness. That Searle didn't mean consciousness when he said 'understand' though how one could understand without being conscious you won't tell us."
I can only deal with what he says, not try to guess what he means.
Quote: "Clearly Searle is making a case against strong AI. Right?"
And he was making it in terms of the concept of "understanding", which he later clarified to relate to intentionality.
But don't you think that a Turing machine can simulate a coin toss to an arbitrary degree of randomness?
~~ Paul
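One way to illustrate the coin-toss question, offered as a hedged sketch rather than anything the posters wrote: a tiny linear congruential generator is a fixed arithmetic rule a Turing machine could carry out step by step, yet its output looks statistically like repeated coin flips. The constants below are conventional LCG choices, not anything specified in the thread.

[code]
def lcg_bits(seed, n):
    """Generate n pseudo-random coin flips from a simple linear congruential generator.

    Every step is a fixed arithmetic rule a Turing machine could execute,
    yet the output passes casual statistical tests for a fair coin.
    """
    a, c, m = 1664525, 1013904223, 2 ** 32   # common LCG constants
    x = seed
    flips = []
    for _ in range(n):
        x = (a * x + c) % m
        flips.append(x >> 31)                # take the top bit as heads (1) / tails (0)
    return flips

flips = lcg_bits(seed=42, n=100_000)
print(sum(flips) / len(flips))               # close to 0.5, but fully determined by the seed
[/code]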
lol.
Alright, let's play this game a different way.
Can you give me any examples of a decision that a human makes that does not satisfy this mathematical definition of decision?
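The "mathematical definition of decision" being referred to is not quoted in this excerpt; the sketch below simply assumes the common functional reading, in which a decision procedure maps an observed state to one option from a fixed set. The scenario, names, and rules are hypothetical.

[code]
from typing import Callable

# Hypothetical illustration: a "decision" as a function from an observed state
# to one option drawn from a fixed set of alternatives.
Decision = Callable[[dict], str]

def cross_the_road(state: dict) -> str:
    """Toy decision procedure: choose an action given sensory input."""
    if state["light"] == "green" and not state["car_approaching"]:
        return "cross"
    return "wait"

policy: Decision = cross_the_road
print(policy({"light": "green", "car_approaching": False}))  # cross
print(policy({"light": "red", "car_approaching": True}))     # wait
[/code]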
Quote: "The Algorithmic theory of consciousness asserts that the entire experience is due to the execution of the algorithm. Interaction with the environment, time, the physical means of executing the algorithm - all are totally irrelevant."
Wrong.
Human prediction is fallible. BTW: the Loebner Prize is pushing us toward that very thing. The human brain is in a number of ways the most complex puzzle science has studied. It's going to be a while. That it will take some time isn't proof of anything.
That's cool, but bear in mind you've provided no justification for your prediction, and the experts in the field aren't scratching their heads and throwing in the towel. IBM has committed significant resources. Perhaps you know something they don't.
So you keep saying, but I have no idea what you mean by "self-aware".
OK, let's go through this again.
Is the computer-programmed car aware of its environment?
Can it be aware of another car in its environment?
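Purely as an illustration of the functional sense of "aware" at issue in these questions, and not anyone's actual system: a toy controller that keeps an internal model of nearby cars and acts on it. All names and thresholds are invented.

[code]
from dataclasses import dataclass

@dataclass
class TrackedCar:
    distance_m: float
    closing_speed_mps: float

class AutonomousCar:
    """Toy sketch of 'awareness' in the purely functional sense:
    the controller maintains an internal model of other cars and acts on it."""

    def __init__(self):
        self.nearby = []                     # internal representation of the environment

    def sense(self, detections):
        self.nearby = [TrackedCar(*d) for d in detections]

    def act(self):
        # Brake if any tracked car would be reached in under two seconds.
        if any(c.distance_m / max(c.closing_speed_mps, 0.1) < 2.0 for c in self.nearby):
            return "brake"
        return "cruise"

car = AutonomousCar()
car.sense([(15.0, 10.0)])                    # another car 15 m ahead, closing at 10 m/s
print(car.act())                             # brake
[/code]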
By the way, when I asked if people thought it was possible that the moment they are experiencing right now could in reality be a billion years of writing numbers on paper, I only received one confident "yes".
What about the others?
I was Robin's 'one confident "yes"'. Pixy seems to disagree. Though maybe it's the Pixy-bot writing "wrong" after my posts.
I have seen some "easier to understand" ones, but I don't have links. If I remember correctly, many reviews of the book go over simpler refutations as well.
That one you posted is incidentally the one I have bookmarked, lol.
Despite my doubts about "replacing neurons with non-biological switches", I don't doubt that what you describe could well be possible. However, is it likely? Well, I simply have no way of making such a call, so it falls into the category of "good premise for a science fiction story" (and it has been used as such in at least one novel, Permutation City by Greg Egan), but otherwise I see no utility in the idea.
Quote: "The utility of the idea is not in the likelihood that it's actually happening - it's in the concept that if an algorithm is equivalent to the algorithm that is executed in a human brain, then any execution of that algorithm is entirely equivalent to the human experience."
The human brain doesn't execute an algorithm. The function of the human brain can, however, be approximated arbitrarily closely by an algorithm.
Quote: "Incidentally, Egan makes the point that not only can the algorithm be executed in any physical way, it can be executed in any order, and it will still be experienced normally."
It would be experienced out of order as measured against an external clock, but any such clock is by definition not accessible to the experiencer, who therefore has no way of knowing this.
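A small sketch of the order-independence point, under the assumption that "the algorithm" is any deterministic update rule: each state can be computed independently and in any order, and nothing inside a state records when it was computed. The update rule and constants below are arbitrary stand-ins.

[code]
def step(state):
    """One deterministic update rule; stands in for 'the algorithm'."""
    return (state * 6364136223846793005 + 1442695040888963407) % 2 ** 64

def state_at(t, initial):
    """Compute the t-th state from scratch, independently of any other computation."""
    s = initial
    for _ in range(t):
        s = step(s)
    return s

in_order  = [state_at(t, initial=1) for t in range(8)]
scrambled = [state_at(t, initial=1) for t in (5, 0, 7, 2, 6, 1, 4, 3)]

# The states themselves are identical whatever order we compute them in;
# nothing inside a state records the external order of evaluation.
print(sorted(in_order) == sorted(scrambled))   # True
[/code]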
Quote: "Personally I find the argument mainly useful as a reductio ad absurdum."
So, where exactly is the logical contradiction?
PixyMisa said: "The paper entity (in this hypothetical) is a replay of my subjective experience, so it would think what I thought, feel what I felt."
Ah, got it.