I don't have the inclination to teach you about electricity and biology right now, as it would result in a derail.
Translation: "I don't know anything about electricity or biology."
A camera can do that now. We can get an inexact impression of someone else's subjective experience right now.
"If one considers that consciousness is not emergent from the action specifically resulting in the computation in the brain, but some other biological activity in the brain..."

You're begging the question.
Thank you!
I was starting to feel hurt that no one was responding with "nonsense" or "you don't know what you're talking about."
"Our ancestors going back a long way developed awareness of their environment right at the beginning. For billions of years they developed and finely tuned bodies subtly adapted to and aware of their environment..."

So our ancestors, the eukaryotes, were aware of their environment billions of years ago? Genus homo has only been around about 2+ million years.
"I know enough to see the differences between a computer and a brain."

So you should be able to answer the following questions:

"If one considers that consciousness is not emergent from the action specifically resulting in the computation in the brain, but some other biological activity in the brain..."

Such as? What significant non-computational biological activity in the brain are you aware of that might be relevant?

"...Then it would not follow that a computer simulation of that same computation would include consciousness of the same kind. Because the other biological activity would not be replicated."

True enough. So all you need now is some evidence for such activity. What biological activity goes on in the brain that isn't computational in nature and might plausibly be relevant to consciousness?
The god of the inherent imprecision.
Huh?

Who cares about language? What part of "the machine reads your mind and translates it into data one can read" don't you understand?
You're begging the question.
I want to add on this one, because I have also long had a problem with the brain being compared to a computer. I find it highly misleading. One example is that we have not yet established that neuronal firing corresponds to an on-and-off state of activity, as in electronics. A transmission might have much more to it than just 0 or 1.

It might have been addressed already in this thread, as I am new here and do not have time to go through the whole thread, but how do you define computational activity?
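To make the "0 or 1" point concrete, here is a minimal sketch (not from the thread; the model is the standard textbook leaky integrate-and-fire neuron, and all parameter values are illustrative assumptions, not measurements) contrasting a binary gate with a neuron whose membrane potential evolves continuously. Input strength shows up as a graded firing rate rather than a single on/off state:

```python
# Minimal sketch: a binary gate vs. a leaky integrate-and-fire neuron.
# All parameters (tau, thresholds, currents) are illustrative assumptions.

def and_gate(a: int, b: int) -> int:
    """A logic element is strictly binary: 0 or 1 in, 0 or 1 out."""
    return int(bool(a) and bool(b))

def lif_neuron(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
               v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Membrane potential evolves continuously; spikes are discrete events,
    but their timing and rate carry graded information."""
    v, spike_times = v_rest, []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r * i_in) / tau  # continuous dynamics
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset            # reset after the spike
    return spike_times

# A weak and a strong stimulus differ in *rate*, not just on/off:
weak = lif_neuron([1.6] * 1000)    # 100 ms of weak input
strong = lif_neuron([3.0] * 1000)  # 100 ms of strong input
print(len(weak), len(strong))      # stronger current -> more spikes
```

Even in this toy model, the interesting quantity is a rate and a timing pattern, not a single bit--which is the distinction being pointed at above.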
"Subjective experiences are not real" is a kind of evaluation of it.
But subjective experiences should have a mechanism, and that mechanism should have a causal relation to your describing them with the term "subjective experience". Our robot can use this as a basis for applying the term "subjective experience" to a meaning. At an absolute worst case, it would include too many things, and this ill-defined extra that you think is critical to subjective experience would remain unknown.
But there should also be a reason that the robot can figure out, in the causal connections--at least in theory--for why you say that there is this ill-defined extra. If that reason is, in fact, based on the actual ill-defined extra, then the robot is actually in a better position to say what subjective experience really is than you are, even with your "cheat" of actually experiencing it. And if it is not, then there is no such ill-defined extra.
If you see a third possibility, then please, point it out.
But the tools we use to make judgments are, in practice, applied automatically.
You're being a very poor devil's advocate. I think I can guess the reason you're going along these lines--it's because you want to remove the analogy of "looks like a penny", because you think I'm defining subjective experience. That is, in fact, not what I'm doing--I'm playing the devil's advocate, and granting that the robot has no subjective experience, but trying to show you that even in that scenario, the robot winds up having a perfectly good theory of what subjective experience is.
What you're trying to do, I'm guessing, is to make the robot's evaluation seem to me to be that much further removed from experiencing, by associating it with things like objective measures. However, in doing so, you are ironically weakening your argument. Here's why.
Suppose your robot does analyze pennies and photographs this way. Well, we do not. We come to the conclusion fairly quickly that the penny behind the curtain is a real penny, but the penny in the photograph is just a photograph of a real penny. Now let's say that the robot hears us make this "magical" claim that we can tell real pennies from photos of pennies without weighing them--just by looking at them. Let's further suppose that the robot doubts us, and performs a simple test of our alleged capabilities using something very similar to the MDC.
Now here's the problem. We pass the test.
Now what is the robot to do with our claims of "magical" capabilities of divining real pennies from photos of pennies? At this point it must conclude that there should be some underlying mechanism that we use that allows us to make this determination, even though it does not know exactly what this mechanism is. Remember the plotting machine making a star? Just like that.
And also similar to this machine, we make a claim that we use Subjective Experience technology™ to perform this analysis. Well, the robot doesn't know exactly what Subjective Experience technology is, so it goes about opening our heads and figuring out how we really do it.
You can guess the rest of the plot... I keep repeating this story.
I'll grant that it doesn't equate if you note that I never said that it equates. I still reach my conclusion that the robot doesn't arrive at a claim that subjective experience doesn't exist (with the caveat that it might, but depends on the robot and the line of inquiry; nevertheless, I still maintain you should check that robot's warranty).
"And how would the objective robot map the meaning of "subjective experience"? How would he define it?"

You're missing the point of what I'm trying to do... when I say "at least analogous to" in the latter post, I'm not trying to claim that they "are" subjective experiences in themselves. My point is about how an entity that is minimally capable of doing so associates a word to a meaning--it has to map this word to some set of invariants that applies to the way the word is used. The analogy here is so tight that there's no reason for the robot to doubt that subjective experiences actually exist--unless you can make some specific claim that differentiates what you surmise the robot would get "subjective experience" confused with from what subjective experiences actually are.

And that's what I was looking for when I asked you to make a claim about subjective experiences that you think the robot would disagree with. In your latest reply, you suggested that this was probably impossible. Well, the implication of it being impossible is that the robot would think subjective experience does exist, and that it's no different from the kinds of things it thinks it might be.
I believe that it's possible for a human being to tell another human being what he means by subjective experience. I question whether it's possible to provide an association for subjective experience that would be meaningful for an entity that didn't possess it. It is possible that this makes the conjectured objective robot a practical impossibility. I don't insist on this, but I think it at least possible.
And furthermore, who is to say the robot's guess is wrong? Shouldn't subjective experiences actually correlate to something real that is part of the causal chain, assuming they do exist?
Doesn't matter. The whole point is, "subjective experiences are not real" is a type of claim. As I said before, you're severely underestimating what it takes to claim this--even for "objective robots".
As are experiences.
Then you haven't been keeping track of what punshhh is arguing.
ETA: This starts around, roughly, post #625?
"I know enough to see the differences between a computer and a brain."

"If one considers that consciousness is not emergent from the action specifically resulting in the computation in the brain, but some other biological activity in the brain, then it would not follow that a computer simulation of that same computation would include consciousness of the same kind, because the other biological activity would not be replicated."
I do understand it. It's not that complicated. We get someone to look at a red wall and then measure his brain, and notice that the brain is different when he's looking at a red wall compared to a blue wall. What I don't understand is why I'm supposed to be astonished at this. I could already find out what kind of wall he's looking at by asking him.
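Mechanically, the "machine reads your mind" step being described is just pattern classification. Here is a toy sketch with entirely invented numbers (fake_recording, its channel values, and the noise level are assumptions for illustration, not real measurements): train on recordings taken while the subject views red or blue, then predict the color for a new recording.

```python
# Toy sketch of stimulus decoding: invented "brain recordings", then a
# nearest-centroid classifier that guesses which color produced a pattern.
import random

random.seed(0)

def fake_recording(stimulus):
    """Pretend 5-channel measurement; channels respond differently to color."""
    base = ([1.0, 0.2, 0.5, 0.8, 0.1] if stimulus == "red"
            else [0.2, 1.0, 0.4, 0.1, 0.9])
    return [x + random.gauss(0, 0.2) for x in base]

# "Training": average the recordings for each known stimulus (a centroid).
train = {c: [fake_recording(c) for _ in range(20)] for c in ("red", "blue")}
centroids = {c: [sum(ch) / len(ch) for ch in zip(*recs)]
             for c, recs in train.items()}

def decode(recording):
    """Pick the stimulus whose average pattern is closest to the recording."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(recording, centroids[c]))

print(decode(fake_recording("red")))   # -> "red" (with high probability)
print(decode(fake_recording("blue")))  # -> "blue"
```

Nothing here requires access to anyone's experience--it only requires that looking at red and looking at blue leave statistically different traces, which is exactly the point being granted and shrugged at above.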
"If one considers that consciousness is not emergent from the action specifically resulting in the computation in the brain, but some other biological activity in the brain, then it would not follow that a computer simulation of that same computation would include consciousness of the same kind, because the other biological activity would not be replicated."

Even if you could identify such an activity, that activity could itself be simulated in a computer.
"I have also long had a problem with the brain being compared to a computer. I find it highly misleading. One example is that we have not yet established that neuronal firing corresponds to an on-and-off state of activity, as in electronics. A transmission might have much more to it than just 0 or 1."

Comparing the brain to a computer, like all analogies, is not going to be perfect. It doesn't have to be. As long as the general principles of the argument can be communicated, we can still use the comparison to that limited degree.