
Has consciousness been fully explained?

Status
Not open for further replies.
Clinging to current reality is "wishy-washy"?

Um...What? No.

Our current reality is a state of profound ignorance about consciousness. If we are honest about that ignorance, it's not wishy-washy. (Can't tell if you had a failure in parsing or a failure in expressing.)

All that's needed is someone to make the argument that computers could in principle have 'subjective' experience. I haven't seen any such argument; nor do I foresee one forthcoming.

You already said this. I already responded in kind. Let's not repeat ad nauseam.

The computational approaches start with denying one's own subjective experience as irrelevant, and stating that observable behavior and some self-reported private behavior are all that determines consciousness.

First, you don't have to deny something to say it's irrelevant.

Second, if you have a quibble with epiphenomenalists, go argue with them.

Third, behavior is the only way we could possibly tell whether something else has subjective experience. Unless you know of some other way, in which case I'd love to hear about it.

I'm just curious: do you think it's possible (in principle, not necessarily NOW) for a non-human computer to have subjective experience?
 
As I've mentioned before, "information processing" is an abstraction. It doesn't happen at an objective physical level.

Piggy, to use the term "abstraction" correctly, you must be able to answer the question "Of what?" Otherwise, you're just being abstruse.

But to demonstrate this point, let's consider a teacher in a classroom who writes on a chalkboard:

2 + 2 =

Then he has a student come up and write to the right of this: 4.

He then draws this beneath the 4:

-3 and puts a line under it.

He calls another student up, who writes below that: 1.

Now, on an abstract level, we can say that some type of "information processing" has gone on here.

But on an objective physical level? No.

All that has happened on the physical plane

I'm well aware this is like calling someone a Nazi here, but you're coming off as very dualist. The more you maintain that IP is entirely removed from physical reality, the more you're forcing it into the realm of res cogitans.

is that neurons have fired, muscles have moved, some chalk has come off onto the blackboard and into the air, that sort of thing.

The IP is entirely abstract and dependent on our perception of it.

This bears repeating: The IP is entirely abstract and dependent on our perception of it.

To deny this is to talk nonsense.

Similarly, consider a woman adding on an abacus. After an extended process of flipping beads, she gets her results.

But once again, The IP is entirely abstract and dependent on her perception. Objectively, all that's happened is that neurons have fired, muscles have moved, beads have changed position.

To make the example even more clear, let's consider a computer crunching numbers. Say the process takes half an hour.

Meanwhile, it turns out that all life on earth happens to have been infected with a strange, fatal virus that will -- for some reason -- be triggered to unleash itself simultaneously, killing everything within the space of a minute, and that this virus is triggered 15 minutes into the computer's number-crunching.

All life is dead. But for the next 14 minutes, the computer happily hums along, then a pattern of pixels appears on the screen.

In this case, has there been any information processing?

Well, what's the alternative?

Say I start my computer rendering a complex 3D scene, then go to work for eight hours, then come home and look at the rendered scene.

Are you saying that while I'm at work, my computer is *not* processing information because there's no one there to watch it? Are you claiming maybe that all the IP occurs the instant I look at the rendered scene? This is patently bizarre.

No. All that's happened is that the state of the computer's components has fluctuated. No one to interpret it, no IP.

It's worse than that! While I'm sitting in front of my computer, I'm not watching its components change state. I don't see bits flipping. In essence, the computer *never* processes information!

Huh. So my computer doesn't process information. It's not even a "computer" at all, since it's not really computing anything.

IP is an abstraction we overlay onto objective reality, not an objective physical reality itself.

You were merely circling the drain of dualism before, now you've totally fallen in.

So it's an error to label the brain an "information processing engine". It's a chunk of matter that does what matter can do. Chain reactions and such, like you said. We can think of it abstractly as an info-processor, but if we make the error of thinking that IP is what it is literally doing physically, we're going to come to wrong conclusions.

Again, what is information processing an abstraction OF?
 
So? He's wrong. Church-Turing thesis. It proves mathematically that he's wrong. It is a mathematical fact that anything the brain can do, an artificial neural network can do, and anything an artificial neural network can do, a stored-program computer can do. Or a Turing machine, or lambda calculus, or recursion, or a whole list of other computational methods. All mathematically identical.
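To make the equivalence claim concrete: here is a minimal sketch (my own toy construction, not anything from the thread) of the basic move behind Church-Turing arguments -- an ordinary stored-program computer stepping through a Turing machine's transition table. The particular machine, which appends a '1' to a unary number, is invented purely for illustration.

```python
# Minimal Turing-machine simulator: a stored-program computer stepping
# through a transition table. The machine below appends one '1' to a
# unary number, then halts.

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Transition table: scan right over '1's, write a '1' on the first blank.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_tm("111", increment))  # prints 1111
```

This illustrates (it does not prove) the point at issue: one computational mechanism simulating another, step for step.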

Two things being mathematically equivalent is an important discovery, but, when talking about physical objects rather than mathematical abstractions, it doesn't mean they are the same. A donut and a coffee cup are topologically (a branch of higher mathematics) identical. That doesn't mean that a donut can do anything a coffee cup can. Methinks you have some IP still to do on this subject.

You're sounding a lot like Interesting Ian.
I was just thinking the same thing.

I've already explained it in layman's terms, but you have counterfactual beliefs stuck in your brain that you need to dislodge, and it appears the only way to do that is for you to work through the details.
This, especially, sounds a lot like II. The problem is not PixyMisa's explanation. Anyone who has read his posts and still doesn't understand has problems like wrong beliefs stuck in their brain.

By the way, I've read GEB from cover to cover, though it was some time ago. While a fascinating book in its own right, I don't recall that it supported what you are claiming here. It's a big book though. Could you be more specific about how it relates to this discussion and what parts provide support for your claims?
 
All that's needed is someone to make the argument that computers could in principle have 'subjective' experience. I haven't seen any such argument; nor do I foresee one forthcoming.

The computational approaches start with denying one's own subjective experience as irrelevant, and stating that observable behavior and some self-reported private behavior are all that determines consciousness.

This is incorrect.

I have taken the position from day one that everything has subjective experience, including individual particles.

But people still reject that because they can't wrap their head around the idea that the subjective experience of a particle is vastly different from that of a human, and so in fact the label "subjective experience" is meaningless.

That is the problem -- people think all subjective experience must be like a human's, and that just isn't true. So what good does it do to throw the term around in the first place?

For instance, do you think the subjective experience of a dog is even remotely similar to ours? Or a bird? Where do you draw the line in "sameness?" How would you even know where to draw that line?
 
[Bolding mine.]

...Or to allow that non-human computers have subjective experience.

In a situation where we don't (and possibly can't) have knowledge one way or the other, it's curious that people are so quick to claim a confident position. Is it so that they don't appear wishy-washy?

I think people seem confident because it seems intuitive that a machine cannot have subjective experience without a known mechanism for producing it. Maybe you don't think it is intuitive, but if you do and there is no evidence either way, the burden of proof falls on you.

Intuition is certainly not always correct and I could easily be persuaded by an argument that my intuition is wrong, but I can't accept that it is "just so" that all machines have experiences.
 
Two things being mathematically equivalent is an important discovery, but, when talking about physical objects rather than mathematical abstractions, it doesn't mean they are the same. A donut and a coffee cup are topologically (a branch of higher mathematics) identical. That doesn't mean that a donut can do anything a coffee cup can. Methinks you have some IP still to do on this subject.
Yes, a donut and a coffee cup are topologically identical. But that is a completely different class of identity to that established by the Church-Turing thesis, which shows that all ideal computers have the same computational power.

What this means for physical computers - like a desktop PC or a human brain - is that substrate is irrelevant, only the size counts.
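The substrate-independence claim can at least be illustrated (not proven) with a toy example of my own: the same abstract computation -- addition of small integers -- realized by two entirely different mechanisms, bitwise logic like a hardware adder and a brute-force lookup table. Both functions are invented for the illustration.

```python
# Two different "substrates" for the same abstract computation:
# addition of small non-negative integers.

def add_bitwise(a, b):
    """Addition via bitwise logic, the way an adder circuit does it."""
    while b:
        carry = a & b   # positions where both inputs have a 1
        a = a ^ b       # sum without carries
        b = carry << 1  # carries shifted into the next column
    return a

# Addition via a precomputed lookup table -- a different mechanism entirely.
TABLE = {(a, b): a + b for a in range(16) for b in range(16)}

def add_lookup(a, b):
    return TABLE[(a, b)]

print(add_bitwise(9, 6), add_lookup(9, 6))  # prints 15 15
```

At the level of the abstract function computed, the two are interchangeable; they differ only in mechanism, which is the sense in which "substrate is irrelevant" is being claimed here.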

I was just thinking the same thing.
That's nice.

This, especially, sounds a lot like II. The problem is not PixyMisa's explanation. Anyone who has read his posts and still doesn't understand has problems like wrong beliefs stuck in their brain.
Not necessarily. If Piggy were asking different questions or raising new objections, then the conversation could progress.

He's not. He's raising the same objection, over and over, long after it has been demonstrated to be wrong.

So rather than continuing I directed him to some convenient reference material on the subject. Which he has similarly rejected out of hand.

Whatever.

By the way, I've read GEB from cover to cover, though it was some time ago. While a fascinating book in its own right, I don't recall that it supported what you are claiming here. It's a big book though. Could you be more specific about how it relates to this discussion and what parts provide support for your claims?
...

Okay, I'm glad you enjoyed it, but you really, really need to go back and read it again with this in mind, because the idea that consciousness is self-referential information processing is what the entire book is about!

Hofstadter wrote a more recent book, I Am a Strange Loop, as a follow-up to GEB because people somehow missed that point.

From the Wikipedia page for the new book:

Wikipedia said:
“In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.”

— Douglas Hofstadter, I Am a Strange Loop p.363

Hofstadter had previously expressed disappointment with how Gödel, Escher, Bach, which won the Pulitzer Prize in 1979 for general nonfiction, was received. In the preface to the twentieth-anniversary edition, Hofstadter laments that his book has been misperceived as a hodge-podge of neat things with no central theme. He states: "GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter. What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?"

He sought to remedy this problem in I Am a Strange Loop, by focusing on and expounding upon the central message of Gödel, Escher, Bach. He seeks to demonstrate how the properties of self-referential systems, demonstrated most famously in Gödel's Incompleteness Theorem, can be used to describe the unique properties of minds.
 
I think people seem confident because it seems intuitive that a machine cannot have subjective experience without a known mechanism for producing it. Maybe you don't think it is intuitive, but if you do and there is no evidence either way, the burden of proof falls on you.
Well, except there is a known mechanism, and we know that machines have subjective experiences, theoretically, behaviourally, and because we can look inside them and watch it happening.

I'm really not sure what more you want.
 
Two things being mathematically equivalent is an important discovery, but, when talking about physical objects rather than mathematical abstractions, it doesn't mean they are the same. A donut and a coffee cup are topologically (a branch of higher mathematics) identical. That doesn't mean that a donut can do anything a coffee cup can. Methinks you have some IP still to do on this subject.

It's so blatantly obvious that a Turing machine (which is a mathematical abstraction anyway) can't do everything that a brain does, that we have to just guess at what is actually meant. I figure that it means that any Turing computation that a brain can do, a (universal) Turing machine can do.

I was just thinking the same thing.


This, especially, sounds a lot like II. The problem is not PixyMisa's explanation. Anyone who has read his posts and still doesn't understand has problems like wrong beliefs stuck in their brain.

By the way, I've read GEB from cover to cover, though it was some time ago. While a fascinating book in its own right, I don't recall that it supported what you are claiming here. It's a big book though. Could you be more specific about how it relates to this discussion and what parts provide support for your claims?

Pixy, in a previous round of this discussion, gave a reference to Church-Turing that quite explicitly contradicted what he was claiming. It didn't slow him down one bit. He carried on insisting that it supported his position - and it was just my bad brain problem.
 
I think people seem confident because it seems intuitive that a machine cannot have subjective experience without a known mechanism for producing it. Maybe you don't think it is intuitive, but if you do and there is no evidence either way, the burden of proof falls on you.

Intuition is certainly not always correct and I could easily be persuaded by an argument that my intuition is wrong, but I can't accept that it is "just so" that all machines have experiences.

See above. The Rocketdodger line is that everything has subjective experiences.
 
Hey, PixyMisa, sorry I went all off into tangent-land again.

Tell you what, I'm going to spend some more time with your references (thanks for providing) and zap you a PM when I come back and we can pick up.

Believe it or not, I actually am trying to understand what you're saying -- and let me reiterate, I know I could be wrong about all this -- but once the issues start compounding, I get all outta shape.

Gotta take me a Piggy tranq.

So I'm still holding at the issues of "What is info processing?" and "What is computation?" in order to flesh out the claim "Your brain is (in everything it does) a computer".

In the meantime, tho, I'd be interested in hearing your response to this question:

Since the product of information processing is information, how can consciousness be the product of IP, given that it's a bodily function, not information?

Cheers -Piggy
 
Well, except there is a known mechanism, and we know that machines have subjective experiences, theoretically, behaviourally, and because we can look inside them and watch it happening.
'Watching things happen' and 'assigning subjective experience to that process' is a bridge too far.

The only subjective experience you'll ever know exists is yours, and it has no theory or behavior we, or you, can directly assign to it.
 
I'm well aware this is like calling someone a Nazi here, but you're coming off as very dualist. The more you maintain that IP is entirely removed from physical reality, the more you're forcing it into the realm of res cogitans.

Oh, I don't believe for a moment that it's "entirely removed from physical reality".

Look at it this way.... Consider these very real events:

  • My cousin marries his girlfriend
  • The Atlanta Braves win a baseball game
  • Eisenhower is sworn in as President

Are these abstractions, or are they events in objective physical reality (OPR)?

The former, obviously, and not the latter.

Walking down the aisle, catching a ball, putting a hand on a Bible... these are events in OPR.

But you can study the physics all you like and you'll never detect any of the events in the list up there, because they only happen because we all agree that they happen.

From where I sit, IP is in the same category.

A computer changes states, a pixel pattern appears on a screen. That happens in OPR.

But "adding numbers" only happens because we agree that it did. It's an abstraction. Entirely tangled up with the OPR events, but nevertheless an abstraction.
 
PixyMisa said:
we know that machines have subjective experiences, theoretically, behaviourally, and because we can look inside them and watch it happening
'Watching things happen' and 'assigning subjective experience to that process' is a bridge too far.

And this, of course, is PixyMisa's most blatant, and fatal, error.

It goes back to the circular argument I mentioned before.

This is what you get when you arbitrarily (and incorrectly) define consciousness as self-referential IP.

Machines have no subjective experiences because they don't have the mechanism to produce them. So far, the brain is the only machine we know of that does this, and synthetic machines are not set up to do it. (We are unable to set them up that way b/c we haven't yet figured out what the brain is doing in that department.)

Equally false and fatal is the denial that we all have direct evidence of Sofia (a sense of felt individual awareness).

If we imagine PixyMisa asleep and dreaming at 5:45, and the dream ends 2 minutes later, then his alarm goes off at 6:00 and he gets up and puts on a pot of coffee, then according to him, there must be no difference in his experience at 5:45, 5:55, and 6:05.

But we all know that there is a difference. The brain is doing something -- something qualitatively different -- at 5:45 and 6:05 that it is not doing at 5:55.

This is so obvious, that denial of it can only be classified as "pining for the fjords".
 
Say I start my computer rendering a complex 3D scene, then go to work for eight hours, then come home and look at the rendered scene.

Are you saying that while I'm at work, my computer is *not* processing information because there's no one there to watch it? Are you claiming maybe that all the IP occurs the instant I look at the rendered scene? This is patently bizarre.



It's worse than that! While I'm sitting in front of my computer, I'm not watching its components change state. I don't see bits flipping. In essence, the computer *never* processes information!

Huh. So my computer doesn't process information. It's not even a "computer" at all, since it's not really computing anything.



You were merely circling the drain of dualism before, now you've totally fallen in.



Again, what is information processing an abstraction OF?

Say you don't switch your computer on. Is it still processing information? In the physical sense, yes. It's heating up, cooling down - molecules are exchanging enormous amounts of information all the time. The only difference when you run your 3D rendering program is that a tiny, tiny subset of all this information flying around becomes meaningful to you.
 
Computation is the manipulation of symbolic representations, and the switching of further actions based on the results of the manipulation.

You might ask "symbolic representations of what"? The answer is: Of anything.

This is what the brain does, Piggy. It's all symbols. Photons strike your retina and are represented as electrical signals in the optic nerve. These signals pass through to the primary visual cortex, which produces a one-to-one spatial map of the visual field in neurons. Indeed, this map is so direct and precise that we can examine it with an fMRI and read the text you are looking at.
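The definition above -- manipulation of symbolic representations, with further action switched on the result -- can be sketched in a few lines. This is my own toy illustration of the definition, not anyone's model of the brain; all the names and thresholds are invented. A raw reading is re-encoded as a discrete symbol, the symbols are manipulated, and the result selects the next action.

```python
# Toy illustration of "manipulation of symbolic representations, and
# the switching of further actions based on the results."

def encode(reading):
    """Re-represent a raw number as a discrete symbol."""
    return "HOT" if reading > 30 else "COLD"

def combine(symbols):
    """Manipulate the symbols: take a majority vote."""
    return max(set(symbols), key=symbols.count)

def act(symbol):
    """Switch further action on the manipulated result."""
    return "open_vent" if symbol == "HOT" else "close_vent"

readings = [31.2, 29.8, 33.5]
symbols = [encode(r) for r in readings]   # ['HOT', 'COLD', 'HOT']
print(act(combine(symbols)))              # prints open_vent
```

Nothing in the physics cares that "HOT" stands for a temperature range; the standing-for relation is exactly the point under dispute in this thread.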

I'm sorry, Pixy, but I don't see any "symbols" involved in what the brain actually does in OPR.

"Symbols" are shortcuts we use to describe the activity because it's so complex.

But I don't see any evidence of any actual "symbols" in the brain.
 