
Explain consciousness to the layman.

And your opinions about my conversations with westprog aren't of any particular interest to me.

Well that is too bad, because if those conversations are about me, and you are having them in front of me, and yet you aren't interested in how I feel about them, it suggests that you have no interest in maintaining even the merest cordiality that would be required to have a discussion with me.
 
My truck depends on phenomena such as pressure and spark to operate.

Vaporize the thing, and they're gone. I mean really gone.

After attempting to explain the premise to you no less than 4 times, the fact that you honestly think I am not aware of such a bleed'n obvious issue sort of suggests to me that there is a communication problem between us that just can't be overcome unless I was standing in front of you with a whiteboard.

Since that will never happen, I guess we are at an impasse? I will leave you to focus on responses to yy2bgggg's more clearly structured questions.
 
Well the highlighted part is ambiguous and unproven;


Why the reluctance to acknowledge that ‘consciousness cannot be achieved by programming alone’ (aka: the highlighted part)?

On the surface, the only activity thus far associated with consciousness, human activity, bears some resemblance to computer activity (probably in no small part because human beings program the things to behave in ways that are intelligible to us). Thus it is easy to anthropomorphize. IOW…computers can do human-like things and this is how these achievements are accomplished (programming)…therefore it must be reasonable to assume that human behavior is accomplished the same way. And not only that…from a simplistic POV…brain activity can be described in terms of information processing…and isn’t that exactly what computers are all about.

Consciousness is not a defined term (nor is its mechanism understood), thus it is difficult to definitively establish that the above connection is untenable (…’prove that it’s not possible!... you can’t, therefore it must be at least possibly possible…especially given all the obvious reasons to believe so!’…). This thread is a good example of those difficulties (what is information processing, what is computation, what is function, what is subjective, what is objective, etc. etc…..philosophy gets a good workout).

There is the religious belief that anything that exists can be explained by science (a somewhat limited epistemology, the limits of which are often completely ignored by those who practice it). Religious beliefs are often easier to subscribe to than facts. Thus Chomsky and his pessimism over a scientific understanding of human nature do not get the publicity that Dawkins and his bubblegum understanding of human nature achieve. It has been suggested that the noble ship of science may finally break upon the treacherous shoals of consciousness. ‘Impossible’ is not a word that can exist in the religion of science.

When it comes to ‘consciousness’…there is the inevitable need to believe that what we are is a consequence of something intelligible. Integration functions, information processing, neural networks, etc. All of these, of course, are intelligible to a degree (and, ultimately, as unintelligible as everything else)…but it is what all of these things combine to create that is the relevant issue. What is interesting is the search for an objective description of something that exists entirely as a subjective experience (Ontemology....or perhaps Epistology). There is the ‘what creates it’ that is the domain of the neuroscientists…and then there is the ‘what the hell is the ‘it’ that is created’….?

Everyone may have been justifiably annoyed when Pixy insisted that SRIP created consciousness so consciousness was SRIP…but does anyone, in fact, have any idea what it is that is created?

Consciousness is the answer to the question: what does it mean to be you? ;)

I doubt that it is a coincidence that neither science nor any (known) human being can answer that question (unconditionally true…but isn’t ignorance bliss?). Until one, or the other, does…we simply cannot claim to have a definition of consciousness.

...which suggests an interesting question: who would understand my definition (if I had one)? What is understanding anyway? (very happy SRIP :D)

A while back I pointed out that one of the defining features of consciousness is its ability to self-adjudicate…to evaluate the propriety and / or authenticity of its own existence and respond accordingly (or not, as the case may be). The objective, it would seem, of this thing we call ‘consciousness’ is to achieve an accurate rendition of itself. To ‘know’ itself.

Ever met a computer that felt the need to ‘know itself’? :p

…and now I have to go and drown in profundity. You did, if you recall Piggy, ask why this question constantly ends up at the R & P section rather than the science section? It’s cause something exists…us [consciousness…blorp…whatever]…and we don’t know what we’re talking about when we talk about it…which is fine, if we’re talking about mollusks…but not so fine if we’re talking about talking. :boggled:

…cue famous phrase: “ we use words so we can avoid having to face the fact that we don’t know what we’re talking about! “ :eye-poppi

(…three jelly-beans to anyone who can identify the author [those familiar with the author are excluded from the competition]) :confused:
 
My paraphrase seems to be what you are actually claiming. You said not worthy of study, which I interpreted as "boring". You said that people who do find consciousness worthy of study tend to gravitate to magic beans. I can't see any other interpretation. The implication seems to be that the only way to find out the truth about consciousness is not to study it.
Good; I wanted to make sure none of your rebuttal was hyperbole, but all heartfelt. In fact both sets of people I describe were neurobiologists. Neurobiologists who think consciousness is a largely illusionary emergent network effect are unlikely to describe their research as studying consciousness, the way they might say they're studying ion channels, or Alzheimer's. They'd probably say they're just studying how the brain works. Oh, they might say they're studying the effects of something on consciousness, but that's likely to be either a layman's terms simplification or grant-friendly verbiage (because where money talks, ******** walks).

Neurobiologists who think there is something concrete behind the subjective experience of consciousness, on the other hand, are much more likely to say outright that they're trying to find it. Like any scientist, generally they'll excitedly follow up on what they think it might be regardless of whether or not you care.

Contrary to the impression this thread might give a third party, there is more than enough room in science, even biology, even neurobiology, for both camps to exist comfortably.

There's not enough room in systems neurobiology, though, because by that point you're down to like one small conference or a single aisle of posters out in the GGGhetto at SfN where everyone has to face each other eventually or it'll be totally obvious you're just trying to avoid tough questions, and everyone gets picked to review everyone else's paper and you just KNOW that bastard's getting you back for that scathing burn about his experiment's interpretation at his last talk, etc.

I don't know of anyone publishing who subscribes to the theory that it's not a behavior of the brain, meaning the result of the brain's physical-energetic functioning.

In other words, although it's a very different phenomenon from any other sort of phenomenon we know of, it is a spacetime event, and it is therefore caused by the same interactions of matter and energy that cause all other events.

Where you might find room for a "magic bean" in that is beyond me.

Unlike the computational literalists, we do not add a redundant computational layer to the physical computations (much less imagine that it can create real worlds). We simply stop with the physical computations.

The comp.lits say brain -> logic -> consciousness.

We say brain -> consciousness.

And the idea that there are two valid camps here is a bit ridiculous, when only one camp is actually studying the phenomenon. (The other camps are working with non-conscious objects and/or abstractions, and while both endeavors are essential to brain research, they are only useful if they are based on observation of the brain and confirmed against observation of the brain.)

The neurobio definition of consciousness is the "right" one because it is useful and productive. You can use it to design experiments on live conscious brains, which get measurable results. You can use these results to design further experiments and make progress by testing and rejecting hypotheses.

PixyMisa's definition is "wrong" because it cannot be used to design any useful experiments that tell us anything... or even to understand existing experimental results.
There are a lot of problems with this post, the big one being that you're begging the question that consciousness even exists as an objectively quantifiable thing. Then there's the appeal to authority, strawmen, etc.

Seeing as my few days of actually having to get **** done saw the thread sprout an extra half-dozen pages, each with their own meta-discussion baggage that any earnest response would need to take into consideration, I feel pointing them all out would be shooting the horse after leaving the barn, or some other such witty colloquialism that concludes with me not having to really give a damn without being quite so rude as to say I don't give a damn.
 
He makes mistakes that no human being would make.

That's mainly for two reasons.

The first is that Watson doesn't understand categories.

Categories proved to be useless because it was impossible to tell from a category anything about the likelihood of a given answer being right or wrong.

For example, the answer to a question in "American Presidents" could be the name of a war, or a country, or an animal, or part of a quotation about almost anything, or a number, or whatever.

The second is that Watson also doesn't understand the questions.

Watson uses some basic grammar rules to guess parts of speech, along with some of its own Jeopardy!-specific rules such as the significance of the word "this", and then does a kind of Chinese Room act where he gets what most commonly goes along with the elements comprising the question.

Based on the sources and hits Watson calculates the probability of a given answer being right, along with the financial risks of wrong answers and benefits of right ones, and decides whether to ring in.
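
To make that last step concrete, here's a toy sketch of a ring-in decision along those lines (purely illustrative; the function name, numbers and threshold are made up for the example, not Watson's actual internals):

```python
# Toy sketch of a confidence-based ring-in decision.
# All names and numbers are illustrative, not Watson's real code.

def should_ring_in(confidence, clue_value, wrong_penalty=None, threshold=0.0):
    """Ring in only if the expected payoff of answering is positive.

    confidence    -- estimated probability the top candidate answer is right
    clue_value    -- dollars gained for a correct response
    wrong_penalty -- dollars lost for a wrong response (defaults to clue_value)
    """
    if wrong_penalty is None:
        wrong_penalty = clue_value
    expected_payoff = confidence * clue_value - (1 - confidence) * wrong_penalty
    return expected_payoff > threshold

# Example: 97% confident on a $1000 clue -> ring in; 30% confident -> stay quiet.
print(should_ring_in(0.97, 1000))  # True
print(should_ring_in(0.30, 1000))  # False
```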

Here's one of Watson's errors:

Clue: It was this anatomical oddity of US gymnast George Eyser.

Watson: What is leg?


Eyser was missing a leg, but since Watson didn't understand the category or question, he made a mistake that no human would have made, even if s/he had no idea what the actual answer was and simply made a stab in the dark.


Watson's Final Jeopardy! fumble is wonderfully non-human:

Category: U.S. Cities

Clue: Its largest airport is named for a World War II hero; its second largest, for a World War II battle.

Watson: What is Toronto?

Pattern matching was what electronic computers were first designed to do - even before mathematical calculation. When it works, it gives the impression that there's something there thinking and understanding. Then things like the above demonstrate that there's nobody home.

It's possible to believe that if you just have enough clever pattern matching, happening fast enough, it will somehow learn to understand.

Of course, it's possible that all that the human brain is doing is pattern matching. Certainly human beings are very good at it. If computers get to do all the things that humans do, doing pattern matching, then that's evidence in favour of that conjecture.
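
To make "pattern matching" concrete, here's a toy matcher (purely illustrative, nothing like Watson's real code) that looks clever on familiar patterns and then fails in exactly the kind of non-human way described above, because it never uses the category or understands the question:

```python
# Toy pattern matcher: answers by picking the stored clue that shares
# the most words with the new clue. Purely illustrative.

def best_match(clue, memory):
    """Return the stored answer whose clue overlaps most with the new clue."""
    clue_words = set(clue.lower().split())
    scored = [(len(clue_words & set(known.lower().split())), answer)
              for known, answer in memory.items()]
    return max(scored)[1]

memory = {
    "largest city in canada": "Toronto",
    "largest airport named for a war hero": "O'Hare",
}

# Looks clever when the pattern is familiar...
print(best_match("what is the largest city in canada", memory))  # Toronto
# ...and confidently absurd when it isn't: prints O'Hare, an airport rather
# than a U.S. city, because the "category" never entered into it.
print(best_match("us city whose largest airport is named for a world war ii hero", memory))
```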
 
Interesting and fast-moving thread, but to contribute to the idea of "impossibility", and the advancement of science...(no replies - this is a side comment)

westprog said:
Of course there are doubts in any engineering project, but if they told their managers "I don't know why we're working on this, it's clearly impossible" I'd be surprised.

I think you assume that engineering managers are not engineers with engineering ability and foresight. As such a manager, I would direct a project to start before the research and development was complete (guessing on key components). In one case, a concept had been sold by the owner (an engineer), but initial tests bombed. He gave it to me to deliver in the time frame. Necessity was the mother of invention. A patent resulted that changed the industry.

My engineers thought it impossible, although they did not say so to my face. Dilbert WOULD tell his manager, but his manager would not be deterred as long as the project made money.

I proposed networking a group of personal computers when networking and PCs were still in their infancy. I had an engineering CEO from another company tell my CEO (while I was present) that such an idea was "pie in the sky" and would not work. My CEO believed me, and soon we had a customized networked accounting system using FoxPro, replacing the IBM minicomputer.

I am constantly surprised by what I, too, think is impossible, only to find that I get proved wrong.

But there are limits. I was asked to consult on what I considered to be a perpetual motion machine. I declined. Many did not, but accepted paying work because they were intrigued by sophisticated testing showing that the output was higher than the input - the difference supposedly extracted from some "ether".
 
He makes mistakes that no human being would make.

I think you're just assuming your conclusion, here. Note that I don't think Watson is conscious. But if you don't think he is conscious because he doesn't act like a human, then you're either going with the behavioural approach to consciousness, which I think you said you didn't, or you are assuming that only humans can have consciousness, which I know you said you didn't.

The first is that Watson doesn't understand categories.

Understanding categories is a necessary condition for consciousness?

The second is that Watson also doesn't understand the questions.

So ?

Based on the sources and hits Watson calculates the probability of a given answer being right, along with the financial risks of wrong answers and benefits of right ones, and decides whether to ring in.

Don't we all?

Eyser was missing a leg, but since Watson didn't understand the category or question, he made a mistake that no human would have made, even if s/he had no idea what the actual answer was and simply made a stab in the dark.

So the fact that he didn't just shut up means he's not conscious? I really don't understand how this follows from any of the things you mention.
 
How might they relate?
Remember the moving of the goalposts? Those developments challenged the meaning(s) of intelligence, and what was thought to be special about human intelligence. I'm suggesting similar gains in the understanding of consciousness and clarification of just what we mean by it may come from attempts to produce machine consciousness.
 
Remember the moving of the goalposts? Those developments challenged the meaning(s) of intelligence, and what was thought to be special about human intelligence. I'm suggesting similar gains in the understanding of consciousness and clarification of just what we mean by it may come from attempts to produce machine consciousness.


I don't think anyone would deny that. Advances in science and knowledge come from RESEARCH and development. So I doubt you will find anyone here that would oppose that.

However, there is a big difference between saying what you just said and saying that we ALREADY have achieved it and the R&D is over, folks... and claiming that it is so mundane that it did not warrant even the slightest acknowledgement.

Also..."it may come from attempts to produce machine consciousness" is absolutely right...but it does not mean that it will be by programming a computer. Computer programs and simulations would be great TOOLS and INSTRUMENTS in the research.....just like we use computers to run a finite element (FEM) analysis of the strains and stresses caused by air flow patterns on a simulated new airframe design.

When we use the computer and computer programs to research consciousness, we are using a tool to aid us in simulating things and trying out ideas, just like we might use paper and pen to draw graphs and jot down calculations. But it does not mean that the programs are going to become conscious, any more than the stresses on the simulated airframe are going to become actual stresses, or the diagrams we draw on paper are going to jump out of the paper.
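
To make the analogy concrete, here's a made-up toy calculation (not real FEM code): the "stress" a program computes is just a number in memory, and nothing in the machine is actually being bent or loaded.

```python
# Toy "stress" calculation standing in for a real FEM solver.
# The result is only a number describing a hypothetical airframe member;
# no physical material anywhere experiences this stress.

def axial_stress(force_newtons, cross_section_m2):
    """Simple stress = force / area, in pascals."""
    return force_newtons / cross_section_m2

simulated_stress = axial_stress(force_newtons=50_000, cross_section_m2=0.002)
print(f"Simulated member stress: {simulated_stress:.2e} Pa")  # 2.50e+07 Pa
```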

We need a real airplane frame to interact with real air flowing over its frame to produce the real stresses that were simulated in the computer. Likewise, to finally achieve real consciousness in a machine that actually does think, we will need a REAL MACHINE that has components that would duplicate the physical time and space interaction of matter similar to whatever it is that occurs in a brain to cause real consciousness. For example, a real physical Neural Network perhaps…. a POSITRONIC brain if you will, for people with a fondness for SciFi (or is it SyFy these days?) :D … whatever this positronic matter may be.

A machine that achieves consciousness is doing so because of the way matter interacts to produce consciousness…… there is REAL MATTER doing REAL PHYSICS to produce consciousness… and it does so autonomously…. No one set down the rules for how the physical matter will interact... it just behaves according to the laws of physics.

A computer running a program is a PUPPET; it is a REMOTELY CONTROLLED object. I doubt it will ever produce consciousness because there is no matter interacting, and whatever “interaction” occurs is manipulation of symbols (not real matter) according to a SET PROCEDURE that a programmer designed.
 
Which is interesting, because the only way human memory works is by association, i.e. pattern matching.

As I said, pattern matching is one of the things human beings are very good at. They are good at other things as well.
 
Yet, you have been claiming from day 1 that software could not be conscious unless it interacts with an external frame of reference.
And as Pixy continues to assert, interaction with external frames of reference is how consciousness in others is observed and so assigned. Now what?
 
So... with your magical machine... I vaporize my truck as it's en route to the composter, and although it vanishes -- and in fact, the space it occupied is now literally a void because particles cannot move in to fill the space it left -- the load of brush keeps moving down the road, and light keeps reflecting off the void as if the truck were there, and the void emits sounds as if the truck were there, and if the void runs over a possum, the possum dies....
Well, yeah, since he just clarified that everything else having anything to do with the truck has likewise been expanded.

At this point, I have to ask you what in the world you're getting at with all this.
At least he answered the question I hadn't asked: "Where in that diffuse cloud, light-years in volume (of what was a truck), is the truck located?"

A strange and imo useless hypothetical in any case.
 
Well that is too bad, because if those conversations are about me, and you are having them in front of me, and yet you aren't interested in how I feel about them, it suggests that you have no interest in maintaining even the merest cordiality that would be required to have a discussion with me.

Folks in this thread have discussed my posts among themselves and it doesn't bother me, even if they disagree with me.

And I feel free to discuss anyone else's posts with anybody I care to.

If you want to respond to something I've discussed, then fine, we can discuss that.

But your opinion that I'm simply having too much chatter with Westprog, no, that opinion doesn't interest me. Nor should it.
 
After attempting to explain the premise to you no less than 4 times, the fact that you honestly think I am not aware of such a bleed'n obvious issue sort of suggests to me that there is a communication problem between us that just can't be overcome unless I was standing in front of you with a whiteboard.

The fact that I disagree with you doesn't mean I don't understand you.

You continue to treat logical computations and physical computations as equivalent, yet we know they are not.

You also presume that all phenomena are driven entirely by particle-level interactions, and while this was a very popular assumption for a long time, it has never been proven and is currently in very serious doubt.

For instance, look at ice on a tree branch causing it to break.

If you use your magic machine to strew that system across the universe, the particles can still interact with one another as before and yet there will no longer be the weight of ice upon a branch, so the branch will not break. The system will run differently.

You want to ignore the physical reality at our level of magnification and simply assume it doesn't matter, and that you can free the particles from their confines and expect the particle interactions to continue running the show as before.

You complain that I'm telling you things which are "obvious" and yet you don't even attempt to account for them.

Worse, you appeal to irrelevancies such as relativistic effects at near light speed, which have nothing to do with what we're talking about. (ETA: Yes, I know the point you were trying to make... it's just that your example fails to actually illustrate that point.)

Bottom line: As far as we know, you need a working brain (or some real equivalent) doing real work in spacetime as a brain, all in one place, to make consciousness happen.

Nothing about relativity or QM changes that.

Your thought experiment is badly formed, I'm afraid.
 
Remember the moving of the goalposts? Those developments challenged the meaning(s) of intelligence, and what was thought to be special about human intelligence. I'm suggesting similar gains in the understanding of consciousness and clarification of just what we mean by it may come from attempts to produce machine consciousness.

Oh, well, certainly.

But machine consciousness != computer consciousness.

ETA: I will say, however, the discoveries in the field of AI have done consciousness studies a great service by helping to define what consciousness is not. Which is frustrating... it would be more satisfying to have it provide a positive answer instead... but it does help us make progress.
 