The Hard Problem of Gravity

You're killing your own argument. What you are calling a computer isn't all that a computer is.

And my point remains. We are computers. Everything you say about computers, if you just say "people" instead, is equally applicable. You're trying to show a difference by appealing to your own prejudices that we are in no way obliged to have ourselves.

Yes, we can act as computers - but if we act purely as computers we don't thereby get understanding.

Nothing in real life is a pure computer, of course. It's an abstract concept.
 
Yes, we can act as computers - but if we act purely as computers we don't thereby get understanding.
Let's go through this using a different approach.

You're trying to claim that your opposition is unjustified in making metaphysical claims about computers.

What I have highlighted is an implicit metaphysical claim you are making about computers that I am claiming you are unjustified in making.

Questions?

Edit: Oh, and it's not just that we "can act as" computers. We are computers.
 
If we observe a property in exactly one place, then that's what we should assume possesses the property.
I'm not sure what this has to do with the point at hand.

Why do you assume that just because consciousness has not yet appeared except in humans that it can't be produced?
??? That's what I said.

The other is to assume that one can produce consciousness in some way without knowing exactly what it is or how it works.
Based on what theory? That's nonsense, and not a premise held by scientists and experts in the field. You are entirely without foundation.

For the record. During WWII the Germans didn't know how to reproduce shortwave radar like the British. They reverse engineered the machine from a downed plane and produced it without knowing how or why it worked.
 
Depends, does the die rely solely on a mathematical algorithm? If so then no.
The algorithm isn't in the internal workings of the die. The algorithm is to toss the die, and map its results onto the desired range.

It happens to be trivial because the range of the die matches the range of our outcome. Suppose we're using a 30^4000-sided die instead. Then we need two tosses. If we had a 2-sided die, we'd need to do something even more complex (assuming we cared about equal distributions).
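The 2-sided-die case can be sketched in code. This is a minimal illustration of the idea in the paragraph above, not the poster's own procedure: to emulate a fair 6-sided die with only a 2-sided die, toss enough times to cover the range, and reject-and-retry any result that falls outside it so every face stays equally likely (the function names and the rejection-sampling choice are mine).

```python
import random

def coin():
    """One toss of a 2-sided die: returns 0 or 1."""
    return random.randrange(2)

def roll(n_sides):
    """Emulate a fair n-sided die using only coin tosses.

    Toss k coins to build a number in [0, 2^k), where 2^k >= n_sides.
    If the number lands outside [0, n_sides), discard it and retry;
    this keeps the distribution over the n_sides outcomes uniform.
    """
    k = max(1, (n_sides - 1).bit_length())
    while True:
        value = 0
        for _ in range(k):
            value = (value << 1) | coin()
        if value < n_sides:
            return value + 1  # map 0..n_sides-1 onto faces 1..n_sides

rolls = [roll(6) for _ in range(10_000)]
```

The retry loop is the "something even more complex" part: without it, 8 coin-toss outcomes would be squeezed onto 6 faces and the distribution would no longer be equal.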

Tossing here is a special form of output--it's triggering an external event. But it still needs to be done to accomplish the goal, and the goal is still achievable given a series of well defined steps.

Or I could run down the other side. You're claiming that in order to produce my desired output, I need to rely on a physical event. Fine. But I raise you. In order to produce my desired output, I also need (in most cases) an algorithm :).
 
We can be computers - but if that's all we are, we don't understand anything either.

You're killing your own argument. What you are calling a computer isn't all that a computer is.

And my point remains. We are computers. Everything you say about computers, if you just say "people" instead, is equally applicable. You're trying to show a difference by appealing to your own prejudices that we are in no way obliged to have ourselves.


I've just been poking around in this very interesting thread for an hour or so, so this may have been addressed upthread.

yy2bggggs, you're not arguing that:

description(computers) => description(people)

implies

person => computer

are you?

Unless you mean

description(computers) <=> description(people)

but didn't finish the thought, or already have previously.

So when you claim, "we are computers", you mean there is a set of descriptions for "computer" which completely describes "person", right?

:shy: {sorry, it's a minor point}
 
yy2bggggs, you're not arguing that:
...
No. The human brain is a computer. That's just what it is--it is a device that calculates. As a consequence of this, when westprog says that computers can't x, it's trivially false if x is something that our brain does. That's my first argument.

Furthermore, westprog is accusing his opposition of overreaching in their metaphysics, and maybe they are. But he's not being very convincing at showing how not to overreach in your metaphysics when he asserts that computers can't do x. As such, he's killing his own argument.

His particular argument is that people are more than computers. My particular counter is that computers are more than what he calls computers. In order to show that my computer--which is also a transmitter, receiver, lamp, desk, electronic device, heater, etc--does not do what a brain--which is also a living organ, an electronic device, a heater, etc--does, then westprog needs to show that the living brain has something that my computer does not. Since westprog doesn't even know what the human brain has that allows it to do x, how can westprog claim that my computer can't do x?

Those are the two points I'm arguing. Note that I'm not specifically arguing that a human brain is a computer (except for a brief touch here)... I'd like for westprog to outright deny this before I make this point.
 
No. The human brain is a computer. That's just what it is--it is a device that calculates. As a consequence of this, when westprog says that computers can't x, it's trivially false if x is something that our brain does. That's my first argument.

Ok, thanks; I had it twice backwards. Let person = [human] brain, thus:

brain => computer implies ability(brain) => ability(computer).

And I take it you're arguing in principle: brain => computer. (Right now, brains do many things computers don't.)

Furthermore, westprog is accusing his opposition of overreaching in their metaphysics, and maybe they are. But he's not being very convincing at showing how not to overreach in your metaphysics when he asserts that computers can't do x. As such, he's killing his own argument.

If he's arguing computers can't in principle ever do x, that would be a very difficult argument to make. There are, however, many opponents of AI who seem to extrapolate from current limitations to limitations in principle, so he's not alone.

His particular argument is that people are more than computers. My particular counter is that computers are more than what he calls computers. In order to show that my computer--which is also a transmitter, receiver, lamp, desk, electronic device, heater, etc--does not do what a brain--which is also a living organ, an electronic device, a heater, etc--does, then westprog needs to show that the living brain has something that my computer does not. Since westprog doesn't even know what the human brain has that allows it to do x, how can westprog claim that my computer can't do x?

And since the brain is a massively parallel architecture of neurons, and neurons work in principle like logic switches in a computer, anything a brain can do a computer can too, in principle. (Though we'd have to establish that logic switches are precise analogs for neurons, I think, if that is the model.)
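The logic-switch analogy in the post above can be made concrete with a McCulloch-Pitts threshold unit, the classic simplified neuron model (this sketch is my illustration of the analogy, not a claim about real neurons): a single unit with the right weights computes NAND, and NAND gates are functionally complete, so networks of such units can in principle realize any Boolean function.

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (1) iff the weighted sum
    of its inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def nand(a, b):
    # Inhibitory weights of -1 with threshold -1:
    # the unit fires unless both inputs are active.
    return neuron([a, b], weights=[-1, -1], threshold=-1)

# NAND is functionally complete, so any logic circuit can be
# rebuilt from units like this one.
truth_table = {(a, b): nand(a, b) for a in (0, 1) for b in (0, 1)}
```

Whether logic switches are a *precise* analog for biological neurons is, as the post notes, exactly the part that would still have to be established.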

Those are the two points I'm arguing. Note that I'm not specifically arguing that a human brain is a computer (except for a brief touch here)... I'd like for westprog to outright deny this before I make this point.

I may have tipped your hand then. ;) (If I follow: the claim "the human brain is a computer" is to exploit and underscore our ignorance of both.)
 
Yes, we can act as computers - but if we act purely as computers we don't thereby get understanding.
Sure, fine. So computers don't understand anything, and neither do we.

Nothing in real life is a pure computer, of course. It's an abstract concept.
Physical instantiations are not ideal computers (the word "pure" is meaningless here). They have finite limits and are subject to error.

So which is it that distinguishes our computers from our brains? Are they insufficiently complex, or merely insufficiently error-prone?
 
So "qualia" are "qualitative experience" are "subjective experience" are "qualia".

How do we know they exist, again ?

The same way that we know anything else exists; we experience it in some capacity.

But don't you see that's another turtle? "The same way that we know anything else exists" means that EVERYTHING is experienced, INCLUDING qualia. So now the question becomes: do we experience experience? Where do we stop? No. Qualia is just a label, but it doesn't represent anything real. The "experience" of seeing a cat is simply a physical extension of the cat itself. There is no "image".

Again, the terms are just labels we put on the actual experiences. Scientifically understanding the experience is what we should be aiming for.

No, no. You are assuming that qualia exist, and then trying to understand them.
 
Strawman. :rolleyes:

Excuse me.... WHO said ANYTHING about "things we cannot detect"? This is a big problem in this forum, as I have always stated.

Zen, it seems you are unaware of what "material" means. "Material" is stuff we can, indeed, detect -- at least in principle. Nobody's trying to figure out what matter is "made of" because it's irrelevant. How it BEHAVES is what counts, which of course comes right back to what we're saying about human consciousness.

Dualists imply that there is some stuff that exists that is NOT material, i.e. that we can't detect using material things. Fine, maybe it does, but immaterial things cannot interact with material things, so their existence is irrelevant as well.

So... what did YOU think we meant by material ?

What I find funny about you guys is that you all claim to expect consciousness to be explainable in physical terms but your words betray a latent dualism.
 
So which is it that distinguishes our computers from our brains? Are they insufficiently complex, or merely insufficiently error-prone?

This is a very good point here. When we speak of a non-living thing as "having a mind of its own", we do so when it acts unpredictably. If a car starts every time, fine. If it doesn't start on rainy days, it "doesn't like rain". If it starts some times and not others, it "has a mind of its own". Of course there are one or more actual explanations for the behavior, but as we are ignorant of them, it may as well be random for us. Unpredictable. Error, as the statisticians say.

Something that has sufficient unpredictability (especially if its actions affect us, like the car, the computer, or the tornado) will be spoken of as if it were a causal agent--as if it had a mind of its own, and chose its actions. Same with non-human animals (Pavlov's dogs' reflexive responses were too well understood to be chosen; they were elicited); a sheep tends to be more predictable than a goat, so goats have minds of their own, whereas sheep are... well, sheep. Same with humans. When someone is too controlled, they are brainwashed, not possessed of their own will; only when we are unaware of the myriad influences causing and selecting a person's actions do we say they were freely chosen. (Behaviorists have an interesting definition of "intrinsic" motivation as opposed to extrinsic; intrinsically motivated behaviors are simply behaviors whose reinforcers we are unaware of.)
 
My model of the mind is that certain cognitive processes cause certain behaviors, but I say nothing about experience itself because assertions of that nature would contribute nothing to what we can observe experimentally, so I consider the nature of experience to be unknowable.

Thank you for illustrating my previous point.
 
Yes, we can act as computers - but if we act purely as computers we don't thereby get understanding.

Understanding. Interpretation. Experience.

You guys aren't short on dualistic words to try and make consciousness "special". When asked to define those words, however, somehow they always apply to computers, too.
 
That's the point we're making. An "experience" IS a private behaviour.
Not really. An experience refers to the notion of a sensation, such as the "redness" of red or the "blueness" of blue. A private behavior would merely be a cognitive process with the potential to trigger a public behavior, like someone saying, "I see the color red." The presence of an experience isn't required to explain the latter, and since we can only know of the latter phenomenon scientifically, we run into a quagmire where there is no accounting for the phenomenon of experience. Even the idea that sensations exist in the first place can be understood in terms of cognitive processes. That could open the door to the possibility that sensations are a nonsense concept invented by the brain to help it organize itself; if that were true, it would render consciousness an illusion altogether. So you can't just decide that experience and cognition are one and the same because you feel like it and it makes the epistemological difficulties of the situation easier to think about. It wouldn't make sense to equate something real with something imaginary.
 
And I take it you're arguing in principle: brain => computer. (Right now, brains do many things computers don't.)
No. Brains are computers, so if brains do things, computers do them (e.g., brains do them). There are some things that brains do that silicon-based IBM PC compatibles do not do--for example, metabolize glucose. But there's nothing that brains can do that computers can't do, because brains are computers.
And since the brain is a massively parallel architecture of neurons, and neurons work in principle like logic switches in a computer,
Not just like logic switches in a computer. Neurons are logical switches.
anything a brain can do a computer can too, in principle.
...not exactly. Anything a brain can do a computer can too, in practice, because brains are computers.
 