
Has consciousness been fully explained?

Status
Not open for further replies.
After studying birds.

What, did you think they just threw a bunch of parts together?

Where did I claim that?

Obviously, it's an iterative process. First you study, then you attempt to build one and see how far you get. Based on the observed differences, you find the holes in your understanding. You study some more, and then you build an improved model. And so on.

ETA: also note how the Wright Brothers didn't copy birds down to the last detail. They captured only the essential principles of flight and used them in a different way. You can do the same thing with neurons. The fact that neurons are alive may be as relevant to consciousness as flapping feathers are to flight.
 
The functional behavior of a single neuron -is- all of its biological functioning -- which includes whatever "magic" it does to produce conscious experiences.
A single neuron doesn't produce your conscious experiences, any more than a single transistor runs programs. It's the way millions or billions of them interact that does that.

If that were sufficient to produce the behaviors you seek to replicate, we wouldn't even be having this discussion. We'd already have automatons, based on the kind of modeling you're suggesting, that produce behaviors indistinguishable from those of living conscious entities. As I mentioned earlier, you're not the first to think of such a scheme; AI researchers have been going that route for decades. Nothing even remotely resembling the results you suggest has been forthcoming.
So far we haven't put enough artificial neurons together, and getting the large-scale architecture functional is difficult. The structure that emerged from evolution was not designed, and is consequently often difficult to follow and understand.

It's worth considering the contrast between the behavioural complexity exhibited by the interaction of small numbers of elements and that of very large numbers (millions or billions). Take Conway's Game of Life: most people find it interesting that simple rules can give rise to such surprises as the 'Glider Gun' pattern, and are amazed by the large 'Breeder' pattern that moves along generating Glider Guns as it goes. But few people are aware of the Universal Computer/Constructor, a Life pattern that can be programmed to perform arbitrary calculations and optionally to construct other Life patterns according to the results of those calculations. They'd probably also be amazed (as I was) by the Oblique Life spaceship, a truly self-replicating pattern of 846,278 elements that replicates itself in 34 million generations. These took a lot of time and care to design, and consist of around a million trivially simple pattern elements. Compare this to the emergent complexity possible with billions of complex, multiply-connected elements like neurons. Element complexity and big numbers can make a very big difference.
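The scaling point can be felt even at toy size. Here's a minimal Life stepper (my own illustration, nothing like the million-element constructions mentioned above) showing the famous glider recreating itself one cell diagonally every four generations:

```python
# Minimal Conway's Game of Life on a small torus, run with a glider.
# Illustrative only; the pattern names above refer to vastly larger designs.

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells on a 16x16 torus."""
    N = 16
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    c = ((x + dx) % N, (y + dy) % N)
                    counts[c] = counts.get(c, 0) + 1
    # A cell lives next generation iff it has 3 neighbours,
    # or 2 neighbours and is currently alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)

# After 4 generations the glider is its own shape, shifted by (1, 1).
shifted = {((x + 1) % 16, (y + 1) % 16) for (x, y) in glider}
print(gen == shifted)   # → True
```

Five cells and two rules already give motion; the constructions described above come from pushing exactly this substrate up by six orders of magnitude.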

Better yet, why can't we just use that understanding to create an artificial neural net that can even produce a p-zombie version of a dog?
We can and will, just as soon as we discover the right ways and means to connect a few hundred million artificial neurons and set their weightings appropriately (that last bit is the really hard bit).
 
Could you expand on "reflective activity"? When I read Hofstadter's book, I sort of got what he was talking about, i.e., the notion that complexity can arise from smaller constituents (like his analogy of fractals) and that this complexity has textures of meaning that are not apparent in the building-block components.
Right, that's the first essential concept, emergent properties.

I understand the creation of the sense of "I", and its utility as a mechanism to order information and meaning.

But the part I keep missing is the jump that doesn't seem to trouble you: that this emergent "I" is the same thing as the "inner light" of subjective experience. In the fractal example, while the fractal patterns are intrinsic to the equations, to make them visible you need some sort of filter that interprets the equations and displays results via some mechanism. Same thing with audio loops: you actually need the sound generation to achieve the loop. Or with video loops, you still need a screen. Where is the equivalent "theater" of conscious experience?
There is no theater. That's exactly the example Dennett uses, by the way - he calls it the Cartesian Theater - and points out that it is impossible that consciousness works that way, because it leads to infinite regress.

The way out of that is the loop.

Within a computer or brain there is no screen. There are millions or billions of tiny elements - logic gates or neurons - connected to one another. Some of these gates or neurons receive signals from the outside world; others forward signals to the outside world; but the vast majority only talk to other gates or neurons.

And if you construct the network so that the subsystem of gates or neurons can look at what it itself is doing, rather than just working on results forwarded from another subsystem, that's consciousness.
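To make the loop concrete, here's a deliberately trivial sketch (my own toy example, not anyone's model of a brain): one stage only forwards results onward, while the other folds a record of its own past activity back into its next input.

```python
# Toy contrast between a feed-forward stage and a self-referential one.
# Purely schematic; no claim is made that this toy is conscious.

def external_stage(signal):
    """A subsystem that only forwards results (no self-reference)."""
    return signal * 2

def self_referential_stage(signal, history):
    """A subsystem whose output depends on its own past outputs."""
    out = signal + sum(history[-3:])   # reads its own recent activity
    history.append(out)                # its activity becomes future input
    return out

history = []
for s in [1, 2, 3]:
    forwarded = external_stage(s)                    # feed-forward only
    self_referential_stage(forwarded, history)       # loop closes here

print(history)   # → [2, 6, 14]
```

The point of the sketch is only the wiring difference: the first function's output is fully determined by its input, while the second's depends on what it itself has been doing.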
 
Why? He is a computationalist.
Probably because he argued violently with me and got along with the anti-computationalist crowd. That's apparently more important than the fact that he agreed with my position, if not my approach, and completely disagreed with the anti-computationalists.
 
Evidence?

Cellular information processing
http://www.nature.com/nature/journal/v460/n7252/edsumm/e090709-13.html

"In this study, we showed that mechanical stress can damage the cytoskeleton but that cells have special machinery that rapidly recognizes the damage and repairs it."
http://esciencenews.com/articles/20...w.cells.rapidly.repair.and.maintain.structure

So if a cell processes information, and can recognize and repair damage to itself (which is much more advanced than what a washing machine can do, and we know what you think about those)

With me so far?

...Then a cell is conscious! SRIP fails again!
 
Because of the epic smackdown he gave to Pixy the last time I started a poll. (and Frank's conversations with SHADRU, or whatever the lame "AI" block sorting program is called).
His litany of failure, you mean.

So yes, I was right.

Oh, and it's SHRDLU. There's a significance to that.

Edit to add: Fedupwithfaith took issue with my definition, feeling that it was over-broad; he considered that a useful definition needed to be narrower. His failure was not his position, but his approach to the discussion; relatively little of his input was constructive, or indeed responsive. Some of it was, as I've noted, but even then it was only a disagreement on a point of semantic precision.
 
Cellular information processing
http://www.nature.com/nature/journal/v460/n7252/edsumm/e090709-13.html

"In this study, we showed that mechanical stress can damage the cytoskeleton but that cells have special machinery that rapidly recognizes the damage and repairs it."
http://esciencenews.com/articles/20...w.cells.rapidly.repair.and.maintain.structure

So if a cell processes information, and can recognize and repair damage to itself (which is much more advanced than what a washing machine can do, and we know what you think about those)

With me so far?
Sure. No self-referential information processing is involved in any of that.
 
Because of the epic smackdown he gave to Pixy the last time I started a poll. (and Frank's conversations with SHADRU, or whatever the lame "AI" block sorting program is called).

Your position is like the owner of a Volkswagen GTI championing the owner of a Mitsubishi Lancer Evolution because he gave an "epic smackdown" to the owner of a Subaru Impreza WRX STI.
 
Within a computer or brain there is no screen. There are millions or billions of tiny elements - logic gates or neurons - connected to one another. Some of these gates or neurons receive signals from the outside world; others forward signals to the outside world; but the vast majority only talk to other gates or neurons.

And if you construct the network so that the subsystem of gates or neurons can look at what it itself is doing, rather than just working on results forwarded from another subsystem, that's consciousness.
"Look"? What with? What will it "see"? Where is "itself"? Where should it find "what itself is doing"?

Consider a small network with (say) eight nodes ("neurons"), all connected to each other bidirectionally. Add another node called 'O' (for "output") such that each of the eight nodes in the main network can also send to or receive from 'O', plus one more called 'I' (for "input") such that 'I' can only send data (it is "read only" and has no activation rule as such) to each of the eight in the main network. Assume that the data sent or received in each step is a single bit, and that the whole network operates synchronously with a fixed step long enough to allow the appropriate processing at each node in each timeslice. Assign whatever activation rules, weights, etc., are necessary to make this network (ten "neurons" in total) as capable as it can possibly be in terms of producing a subjective experience.
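To pin the setup down, here is one possible rendering in code. The weights and the threshold rule are arbitrary placeholders of my own invention (picking ones that do anything interesting is exactly what the questions are probing), and 'O' is modelled as receive-only for simplicity:

```python
# Sketch of the 8-node network plus input 'I' and output 'O' described above.
# All weights and the activation rule are placeholders, not claimed to be "right".
import random

N_MAIN = 8                      # fully, bidirectionally connected main nodes
random.seed(0)                  # reproducible placeholder weights

w_in  = [random.choice([-1, 1]) for _ in range(N_MAIN)]   # I -> main
w_out = [random.choice([-1, 1]) for _ in range(N_MAIN)]   # main -> O
w = [[random.choice([-1, 1]) for _ in range(N_MAIN)] for _ in range(N_MAIN)]

def step(state, i_bit):
    """One synchronous timeslice: every node reads every other node plus I."""
    new = []
    for j in range(N_MAIN):
        total = w_in[j] * i_bit + sum(
            w[j][k] * state[k] for k in range(N_MAIN) if k != j)
        new.append(1 if total > 0 else 0)    # placeholder threshold rule
    o_bit = 1 if sum(w_out[j] * new[j] for j in range(N_MAIN)) > 0 else 0
    return new, o_bit

state = [0] * N_MAIN
for i_bit in [1, 0, 1, 1]:      # arbitrary single-bit input stream on I
    state, o = step(state, i_bit)
    print(i_bit, state, o)
```

Every constraint in the description is mechanically satisfiable this way; what the code cannot supply is any principled reason to pick one set of rules over another, which is the substance of the questions.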

In your opinion, when running:

1. Is this an example of "self-referential information processing"?
2. Could it possibly be having subjective experience?
3. Is it definitely having subjective experience(s)?

If your answer to any of the above questions is "no", then can you please explain why, and also indicate what might need to be added, changed, or removed in the network to change your answer to "yes".

If any of the answers are "yes", can you please describe in detail at least one example of the activation rules, weights, etc., that allow that to be true.

Thanks.
 
Sure. No self-referential information processing is involved in any of that.

Of course not, even though it detects damage and repairs itself :rolleyes:

But an advanced washing machine detects what? A full load? Hmm, I'm detecting a load right now!

You know what the ultimate sign of cowardice is? Putting someone on ignore cause they don't agree with you. But you wouldn't know anything about that, would ya ;)

ETA: You make so many unsupported assertions, I forgot to make the standard complaint against you: you actually need to explain what you posted.
 
Wow - thanks for posting that link.

Ichneumonwasp has made several recent comments suggesting that Pixy's SRIP is actually some kind of (re-)definition of consciousness and that made me start looking further afield trying to find the source. I realise that the linked thread is still probably far from the original source but I hadn't got to reading that one yet and it certainly adds some useful background.

I agree with many (if not all) of FUWR's (and others') criticisms of Pixy's posting/argument style. It's not realistic to expect newcomers to these forums (consciousness threads in particular) to have read everything that has come before, including all earlier threads where someone (Pixy for example) may have posted, and so there needs to be some repetition of key information or links back to the more important parts of "history".

If you read this Pixy, you may believe your meaning is clear when you repetitively and baldly post something like "Conscious is Self-Referential Information Processing" but in my opinion it is not. To me (at least for the first few readings) it sounded like some kind of strident conclusion without any real argument to back it up, but now I see that you are simply repeating the definition you have chosen to use (X years ago?). Do you have a pointer to where you first gave (and perhaps argued for?) this definition?
 
Wow - thanks for posting that link.

Ichneumonwasp has made several recent comments suggesting that Pixy's SRIP is actually some kind of (re-)definition of consciousness and that made me start looking further afield trying to find the source. I realise that the linked thread is still probably far from the original source but I hadn't got to reading that one yet and it certainly adds some useful background.

I agree with many (if not all) of FUWR's (and others') criticisms of Pixy's posting/argument style. It's not realistic to expect newcomers to these forums (consciousness threads in particular) to have read everything that has come before, including all earlier threads where someone (Pixy for example) may have posted, and so there needs to be some repetition of key information or links back to the more important parts of "history".

If you read this Pixy, you may believe your meaning is clear when you repetitively and baldly post something like "Conscious is Self-Referential Information Processing" but in my opinion it is not. To me (at least for the first few readings) it sounded like some kind of strident conclusion without any real argument to back it up, but now I see that you are simply repeating the definition you have chosen to use (X years ago?). Do you have a pointer to where you first gave (and perhaps argued for?) this definition?

No prob. Fedup certainly was knowledgeable about the subject. Too bad he left.
 
Of course not, even though it detects damage and repairs itself
Correct.

But an advanced washing machine detects what? A full load? Hmm, I'm detecting a load right now!
Actually, they do more than that, and cars even more. Monitoring the operation of the mechanism is not consciousness, just awareness. The cell's self-repair mechanisms don't even amount to that, because they are localised and specific.

Monitoring the monitoring process, something that complex embedded applications routinely do, is self-referential information processing.
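One everyday embedded pattern of this kind is the software watchdog. The sketch below is my own illustration (names and thresholds are invented): a first-order monitor watches an external quantity, and a second-order watchdog watches whether the monitor itself is still running.

```python
# First-order monitoring (watches the world) vs second-order monitoring
# (watches the monitor). Illustrative only; all names are made up.
import time

class Monitor:
    """First-order: checks an external quantity (e.g. a temperature sensor)."""
    def __init__(self):
        self.last_run = None
    def check_temperature(self, reading):
        self.last_run = time.monotonic()   # record own activity
        return reading < 100               # flag overheating

class Watchdog:
    """Second-order: checks that the monitoring process itself is alive."""
    def __init__(self, monitor, timeout):
        self.monitor, self.timeout = monitor, timeout
    def monitor_is_alive(self):
        if self.monitor.last_run is None:
            return False                   # monitor has never run
        return time.monotonic() - self.monitor.last_run < self.timeout

m = Monitor()
dog = Watchdog(m, timeout=1.0)
print(dog.monitor_is_alive())   # → False: the monitor hasn't run yet
m.check_temperature(42)
print(dog.monitor_is_alive())   # → True: the monitoring itself checks out
```

The watchdog never looks at the temperature at all; its subject matter is the monitoring process, which is the distinction being drawn.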

You know what the ultimate sign of cowardice is? Putting someone on ignore cause they don't agree with you. But you wouldn't know anything about that, would ya
I didn't put Fedupwithfaith on ignore because he didn't agree with me. In fact, he agreed with me almost entirely. I put him on ignore because he persisted in semantic nitpicking - invalid semantic nitpicking, since he was wrong the entire time - and ad hominem attacks, rather than addressing the topic. Eventually he started addressing the topic, and we made some progress. Then he went back to invalid semantic nitpicking. Then he left.

It's not cowardice of any sort. It was just rational conservation of finite resources.

ETA: You make so many unsupported assertions, I forgot to make the standard complaint against you: you actually need to explain what you posted.
You need to pay attention.
 
Wow - thanks for posting that link.

Ichneumonwasp has made several recent comments suggesting that Pixy's SRIP is actually some kind of (re-)definition of consciousness and that made me start looking further afield trying to find the source.
No, he didn't say that. It's no re-definition. It is what people mean when they talk of consciousness, or at least, at the core of what they mean. What it is, is an operational definition rather than a behavioural one.

I agree with many (if not all) of FUWR's (and others') criticisms of Pixy's posting/argument style. It's not realistic to expect newcomers to these forums (consciousness threads in particular) to have read everything that has come before, including all earlier threads where someone (Pixy for example) may have posted, and so there needs to be some repetition of key information or links back to the more important parts of "history".
My advice is to read Gödel, Escher, Bach. If you're a programmer (except, apparently, for Westprog), you'll already understand. If you're not a programmer, there is a huge amount of groundwork to establish first. Gödel, Escher, Bach establishes that groundwork in a way that is accessible to the layman.

If you read this Pixy, you may believe your meaning is clear when you repetitively and baldly post something like "Conscious is Self-Referential Information Processing" but in my opinion it is not. To me (at least for the first few readings) it sounded like some kind of strident conclusion without any real argument to back it up, but now I see that you are simply repeating the definition you have chosen to use (X years ago?). Do you have a pointer to where you first gave (and perhaps argued for?) this definition?
Probably around 2002 or 2003, and those posts have probably been purged by now (a lot of the old posts got purged a few years back).

Mind you, I have explained this literally hundreds of times, right here in the R&P section. Malerin, AkuManiMani, Westprog and the rest of the gang have all seen it dozens of times, though they still invariably fail to address what I've actually said. The people who accept computationalism accept my definition, at least as a starting point for further development; the people who don't accept computationalism never raise coherent or apposite objections, so I get tired of restating my position and explaining their errors.

Actually, Rocketdodger had a thread that covered it in more detail than I ever have, so it might be best to look for those.

I have to add (again, for the multiple-hundredth time) that this is not my definition, and I doubt even the wording originated with me. It is a distillation of what people mean when they say consciousness, via the fields of computer science and neuroscience, principally through the writings of Daniel Dennett and Douglas Hofstadter.
 
Of course not, even though it detects damage and repairs itself

PixyMisa said:

Of course.

But an advanced washing machine detects what? A full load? Hmm, I'm detecting a load right now!

PixyMisa said:
Actually, they do more than that, and cars even more.

Assertion.

Monitoring the operation of the mechanism is not consciousness, just awareness.

Question begging: you're assuming washing machines are aware, which is exactly what we're asking you to prove. This is what FedUp and the rest of us have been telling you NOT TO DO over and over.

PixyMisa said:
The cell's self-repair mechanisms don't even amount to that, because they are localised and specific.

Irrelevant. The code controlling the diagnostic mechanism on a car is localised and specific to a part of a microchip. The mechanism itself is localized and specific.

PixyMisa said:
Monitoring the monitoring process, something that complex embedded applications routinely do, is self-referential information processing.

Ad Hoc. Self-referential information processing IS (I)nformation (P)rocessing (R)eferring to a (S)elf. A cell that detects damage to itself IS Processing Information Referring to itSelf (SRIP). In other words, it's conscious, according to your definition. Reductio ad absurdum.

You know what the ultimate sign of cowardice is? Putting someone on ignore cause they don't agree with you. But you wouldn't know anything about that, would ya

PixyMisa said:
I didn't put Fedupwithfaith on ignore because he didn't agree with me. In fact, he agreed with me almost entirely. I put him on ignore because he persisted in semantic nitpicking - invalid semantic nitpicking, since he was wrong the entire time - and ad hominem attacks, rather than addressing the topic. Eventually he started addressing the topic, and we made some progress. Then he went back to invalid semantic nitpicking. Then he left.

Lol, I don't know what's sadder- that you're rewriting history or that you actually believe the rewrite.

It's not cowardice of any sort. It was just rational conservation of finite resources.

You're a waste of resources, but I've never put you on ignore (or RD or any of you SRIP fruitcakes). It was cowardice.
 
Of course.
Yep.

Assertion.
No, documented standard procedure.

Question begging: you're assuming washing machines are aware, which is exactly what we're asking you to prove.
Wrong yet again. If the washing machine can detect and report on conditions, then it is aware by definition.

This is what FedUp and the rest of us have been telling you NOT TO DO over and over.
Which is why you are wrong over and over, because I have never once done that.

Irrelevant. The code controlling the diagnostic mechanism on a car is localised and specific to a part of a microchip. The mechanism itself is localized and specific.
Wrong and irrelevant.

Ad Hoc. Self-referential information processing IS (I)nformation (P)rocessing (R)eferring to a (S)elf.
No it isn't.

Lol, I don't know what's sadder- that you're rewriting history or that you actually believe the rewrite.
You do understand that Fedupwithfaith was a computationalist, right? You do understand that this means his position and mine are fundamentally the same? You do understand that his disagreement with me was entirely to do with semantics? (And personality, but I'm ignoring that because it's irrelevant.)

You clearly either did not read or failed to understand the discussion. Since you clearly either have not read or failed to understand what I have written every single time you have responded to me, I guess that's to be expected.

You're a waste of resources, but I've never put you on ignore (or RD or any of you SRIP fruitcakes).
Naturally. It wastes few resources to make an irrelevant response to a well-considered and well-supported post. It wastes considerable resources to make a well-considered and well-supported response to an irrelevant post. That's why the "Gish Gallop" is successful (for a very limited definition of success) in a live debate.

Doesn't work so well in a forum, of course. What puzzles me is why you persist in that approach regardless.
It was cowardice.
You are confused.
 
