
Explain consciousness to the layman.

If that was all that anyone did say, we probably would have just said "dunno" and left it at that. However, there are people who claim that consciousness is computational, and that a sufficiently large computer running the right program would definitely be conscious, and that all alternative theories are magical in nature, and that this is mathematically provable and proven.

I can see why there's an argument between the people claiming we do know, and the people who claim we don't. What I don't get is the people who admit we don't know, but who are still siding with the people who claim we do.



Not even that...... they are claiming that normal modern computers running programs that can be made by almost anybody ARE CURRENTLY conscious.

Not that they MIGHT become so one day with additional power in hardware and software.....no....they claim that there are many such computers that are currently conscious due to running easily coded programs.

And the claim goes even further, asserting that all this is so mundane as not to have warranted any kind of "remarkable" acknowledgement, even from the scientists whose professional careers are devoted to seeking such an achievement and who are still trying, not realizing that it has already been done.
 
westprog said:
We know that illusions exist, but they exist in terms of our conscious interaction with the world. For example, a person can have a pain relating to a missing limb. The pain might be illusory in its location and cause, but the experience of the pain can't be an illusion.

If we had a specific definition and concept of what consciousness is, then we might have the opportunity to get it wrong - but we don't.
Yes, pain is experiencing, no illusion there. The experience is always real; what it feels like it corresponds to, however, might not be. Or the actual underlying nature of what it corresponds to might be vastly different from what we intuitively think it is.

We experience, so we say that we must have consciousness. But can we honestly say that we are perceiving consciousness? And if not, then what are we actually referring to when we’re referring to consciousness rather than to experiencing? Are they simply synonyms? Or is there something other than experiencing implied by consciousness?
 
But then you have to also take into consideration that the preponderance of evidence is in favor of REALITY as it stands here and now and not conjectured SciFi.
The reality is that we've found nothing so far about the function of the brain which can't be emulated fairly directly. The bits of functionality that seem most vital are even easier to simulate if you cut enough corners, though I personally doubt we know enough at this point to cut the right ones.

The conjecture is that we will, at some point, encounter something absolutely vital to intelligent thought that absolutely cannot be recreated computationally. That's the magic bean. No one here has any idea what that bean might be, how it might not be computationally recreatable, or what it might contribute to consciousness, but they insist that allowance be made in the hypothesis to accommodate its possibility. Like I mentioned before you even joined the thread, they are arguing for a gap to shove a god into.

And who knows, maybe there is a magic bean after all. But I'm not going to hold my breath for it, nor walk on eggshells for fear of its discovery.
 
If that was all that anyone did say, we probably would have just said "dunno" and left it at that.
You don't know many scientists, do you? "I guess it's just ineffable" doesn't often sit well with them.

However, there are people who claim that consciousness is computational, and that a sufficiently large computer running the right program would definitely be conscious, and that all alternative theories are magical in nature, and that this is mathematically provable and proven. They claim that if a process is computationally equivalent to another process, then it is functionally equivalent as well.

I can see why there's an argument between the people claiming we do know, and the people who claim we don't. What I don't get is the people who admit we don't know, but who are still siding with the people who claim we do.
What exactly do you think postulating means? It appears at the moment (barring magic beans) that consciousness is computational. Therefore a sufficiently large computer running the right program would definitely be conscious, and if such a process were computationally equivalent to another process, then it would be functionally equivalent (i.e. conscious) as well.

But all of that could be wrong. There could be a magic bean. But so far there ain't one.
 
Not even that...... they are claiming that normal modern computers running programs that can be made by almost anybody are currently conscious.
The only one claiming this is PixyMisa, and he's working under an operational definition of consciousness.
And the claim goes even further, asserting that all this is so mundane as not to have warranted any kind of "remarkable" acknowledgement, even from the scientists whose professional careers are devoted to seeking such an achievement and who are still trying, not realizing that it has already been done.
Well, yeah. The claim that machines are conscious under PixyMisa's operational definition of consciousness is mundane.

I agree that it is. Isn't your whole gripe that you think that it is?
 
Yes, pain is experiencing, no illusion there. The experience is always real; what it feels like it corresponds to, however, might not be. Or the actual underlying nature of what it corresponds to might be vastly different from what we intuitively think it is.

We experience, so we say that we must have consciousness. But can we honestly say that we are perceiving consciousness? And if not, then what are we actually referring to when we’re referring to consciousness rather than to experiencing? Are they simply synonyms? Or is there something other than experiencing implied by consciousness?

Exactly what consciousness is remains elusive. I am wary of making any attempts to restrict or define it - but anyway - consciousness could be said to be perceiving. Among the things we perceive is the fact that we are perceiving.
 
You don't know many scientists, do you? "I guess it's just ineffable" doesn't often sit well with them.


What exactly do you think postulating means? It appears at the moment (barring magic beans) that consciousness is computational. Therefore a sufficiently large computer running the right program would definitely be conscious, and if such a process were computationally equivalent to another process, then it would be functionally equivalent (i.e. conscious) as well.

But all of that could be wrong. There could be a magic bean. But so far there ain't one.

Thank you for pointing out the area of disagreement, which I have highlighted. Clearly, I, Piggy, Leumas, etc, don't agree that consciousness appears to be computational. Nor do we agree that an alternative theory which attaches consciousness to a specific physical process is necessarily magical. (Piggy, Leumas etc can of course indicate if I have misrepresented their views).

I would add that I don't think that the computational view, as presented, is coherent as a physical theory.
 
Heh, I think being a professional A.I. programmer with more than the equivalent of a molecular biology minor qualifies me as "knowing enough" to discuss this issue without needing to "go read about NNs and computer programming and also about how the brain works."


So with all this impressive knowledge and credentials, why did you not contradict the assertion made by PixyMisa when he said
at least some of the applications typically found on a modern computer are conscious.

Or do you perhaps agree with his assertion? Do you too think that there are typical modern computers that are currently going about being conscious?

With all your impressive AI experience, have you achieved what PixyMisa claims to have achieved when he said
Yes, absolutely. I've written such programs myself. It's a common programming technique.

And if you did not, then have you tried the "common programming technique" that apparently eluded you with all that AI programming experience? Have you had any success yet?

Have you duplicated his findings? If not why not… I mean you have all that AI experience…so why not?

If you did not post any objection to the above claims, could one construe that as a tacit agreement with the assertions, since you have been a very active participant in the thread which aims at explaining consciousness to the layman?

Don’t you think that a layman who reads that any programmer can make any computer become conscious would be quite misled by the assertions, and that maybe you have some responsibility, from your vantage point as an experienced AI programmer and a passionate contributor to this thread, to at least qualify the assertions?

Don't you think that you owe it to the readers of this thread and to the contributors who oppose the assertions to either support the assertions or deny them, or, if you would do neither, to excuse yourself from contributing any further, since you disqualify yourself by refusing to, at the very least, qualify your stance on a matter of “remarkable” import?

Or perhaps you do not think that consciousness in a typical modern computer is in any way a remarkable assertion? How would you respond to this question
PixyMisa said:
do you think computer consciousness is in any way remarkable?

And if you respond in the negative.... then have you done it, with all this AI experience you have accumulated? If not, why not, given that it is obviously such a “common programming technique”?


And if you do think that it is a remarkable achievement given the fact that neither you nor anyone else has so far achieved it, why did you not offer a response to Pixy's question?

If you do not think that computers are already conscious due to common programming techniques then why have you not posted a response to the effect?

I now put the questions to you again just in case you might have missed them before and that is why you have not offered a response:
Do you think that there are currently “typical modern computers that are conscious” due to running programs that are utilizing “a common programming technique” that any programmer can code? If not.... then why have you not said so in response to the claims of Pixy Misa?


Do you think that such an achievement would set a “remarkable” milestone in the field of Artificial Intelligence (AI) at the very least? If yes.... then why have you not said so in response to the claims of Pixy Misa? If not....then why not?
 
I agree that it is. Isn't your whole gripe that you think that it is?


It is not a "gripe"..... it is an objection.... look up the words in a dictionary if you do not quite know the difference.

My objection is not to the fact that under his definition it is not remarkable. My objection is to his definition which renders it not remarkable.

I put the question to you:
Do you think achieving consciousness in a computer would be a remarkable milestone in the field of AI and other fields of science? I am not talking about Pixy's definition…. I am talking about people who work and research in these fields…….do you think that achieving conscious computer programs is a remarkable event or not? What about in your opinion?

Do you think that what Pixy calls conscious computers are in fact conscious? I am not talking according to his operational definition.... I am asking IN YOUR OPINION according to YOUR operational definition..... do you think there are currently conscious computers?


The only one claiming this is PixyMisa, and he's working under an operational definition of consciousness.

Well, yeah. The claim that machines are conscious under PixyMisa's operational definition of consciousness is mundane.
I agree that it is. Isn't your whole gripe that you think that it is?


If I formulate an "operational definition" that enables me to make a claim that I am bodily flying every time I hop a few feet off the ground and then based on that “operational definition” claim that people are flying all the time and it is a very simple technique to do so...... would YOU accept my assertions?

If I made such claims in a post here on JREF would you DEFEND me when people start objecting to my claims? Would you say that you agree with my claim that I fly and that it is not so remarkable that a person can fly unaided by just hopping off the ground because my definition justifies it?


Yes sure..... Pixy made an unscientific “operational technique” that enables him to make unscientific assertions based upon it.

Yes sure.... he is consistent about it.

But does his consistency of being wrong render his unscientific claims all of a sudden scientific?

If I were to make some bizarre “operational definition” that renders fireflies as pixy fairies…. Does that give me a license to then claim that I have seen fairies and interacted with them and they are common everywhere? Would you object or would you agree with my claims….would you defend my consistency while claiming there are fairies?


If I were to formulate an "operational definition" that renders a mountain conscious and accordingly claim that miners are GUTTING the poor thing.... would that be acceptable you think?

If I formulate an "operational definition" that excludes say some Brazilian natives from being recognized as conscious and thus justifying taking some of them as specimens for a Zoo.... would that be acceptable you think?

Don't you think that an "operational definition" should be done on SCIENTIFIC grounds, not just any Tom, Dick and Harry’s definition? Don't you think that the "operational definition" should be at the very least accepted by Neuroscientists whose SCIENTIFIC career is to formulate such a definition? And if not, then at the very least discuss it with them and show them why they are wrong?
 
If I formulate an "operational definition" that enables me to make a claim that I am bodily flying every time I hop a few feet off the ground and then based on that “operational definition” claim that people are flying all the time and it is a very simple technique to do so...... would YOU accept my assertions?

[...]

If I were to make some bizarre “operational definition” that renders fireflies as pixy fairies…. Does that give me a license to then claim that I have seen fairies and interacted with them and they are common everywhere? Would you object or would you agree with my claims….would you defend my consistency while claiming there are fairies?


If I were to formulate an "operational definition" that renders a mountain conscious and accordingly claim that miners are GUTTING the poor thing.... would that be acceptable you think?

If I formulate an "operational definition" that excludes say some Brazilian natives from being recognized as conscious and thus justifying taking some of them as specimens for a Zoo.... would that be acceptable you think?
That's exactly what you've been doing. My principal objection to your side's argument is that it all rests on the use of an undefined concept. You can't say what consciousness is, yet you're rejecting one definition because it's not "remarkable" enough to suit your tastes. Piggy can't say what consciousness is, but he's perfectly comfortable ranking this or that into different grades of consciousness because he thinks he knows it when he sees it.

You each already have your own operational definitions, which are just as poorly justified as Pixy's. Only Pixy acknowledges his definition is merely operational, and thus doesn't have to try and justify its use in all theoretical instances.
 
So with all this impressive knowledge and credentials
I don't consider my knowledge or credentials particularly impressive. However, neither do I consider it necessary for me to "read a book" about any of these topics before I can answer you, because I have already been educated in that respect. That was the only point of me bringing up my background -- to let you know you can omit the snarky comments like "read a book," at least in posts directed at me.

If you did not post any objection to the above claims, could one construe that as a tacit agreement with the assertions, since you have been a very active participant in the thread which aims at explaining consciousness to the layman?

Yes, one could construe that, and they would be at least partially correct.

Don’t you think that a layman who reads that any programmer can make any computer become conscious would be quite misled by the assertions, and that maybe you have some responsibility, from your vantage point as an experienced AI programmer and a passionate contributor to this thread, to at least qualify the assertions?

They have been qualified, though. It is unfortunate that to see it requires re-reading certain posts made long ago, although I think pixy does a good job of refreshing people on what he considers "consciousness" every few pages anyway.

Don't you think that you owe it to the readers of this thread and to the contributors who oppose the assertions to either support the assertions or deny them, or, if you would do neither, to excuse yourself from contributing any further, since you disqualify yourself by refusing to, at the very least, qualify your stance on a matter of “remarkable” import?

Yes, I do think I owe it to them. However you need to realize that I have been specific about my position regarding pixy's assertions many times, perhaps just not in this particular thread. Thus in my mind I *have* qualified my stance.

I was trying to move on to different aspects of the discussion in this particular thread; that's why I haven't gotten into the discussions pixy is having.


Do you think that there are currently “typical modern computers that are conscious” due to running programs that are utilizing “a common programming technique” that any programmer can code? If not.... then why have you not said so in response to the claims of Pixy Misa?

If I replace "conscious" with pixy's definition, and "common technique" with what the technique actually is, then you get this:

"Do you think that there are currently “typical modern computers that exhibit self referential information processing” due to running programs that are utilizing “reflection, or 'self-reference'” that any programmer can code?"

To that I definitely answer yes.

If I replace the phrase in contention with what perhaps you think they *should* be, rather than what pixy qualified his assertions with, then you get this:

"Do you think that there are currently “typical modern computers that exhibit mamallian subjective experience, mamallian awareness of self and the environment, and mamallian memory capacity ” due to running programs that are utilizing “reflection, or 'self-reference'” that any programmer can code?"

To that I definitely answer no.

So what is the issue?

Do you think that such an achievement would set a “remarkable” milestone in the field of Artificial Intelligence (AI) at the very least? If yes.... then why have you not said so in response to the claims of Pixy Misa? If not....then why not?

I think constructing a program that processes information similarly to the way mammalian brains process information would be a remarkable milestone, yes.

And I don't mean a naive simulation of an entire rat brain, which will happen within a few years. I mean actually understanding how the information flows and building such a thing from scratch. In fact I hope to own one of the first companies to do this.

I don't think what pixy is talking about is remarkable in the least, because he is merely speaking of self-referential information processing.

In pixy's defense, I think he brings up his very simple operational definition because when one tries to reduce many of the aspects we attribute to our mammalian consciousness, they can be broken down into what is conceptually the simple idea of self-referential information processing that just references a TON of information.

Granted, that is like saying "switching" when someone asks "how does a computer work?" but in truth it isn't incorrect, it is just monumentally simplistic. Because a computer really does work by switching. Likewise consciousness is a type of self referential information processing. That isn't a full explanation, nor particularly useful if one is trying to understand something in detail, but it is certainly a correct explanation.
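
For the layman, here is what "reflection, or 'self-reference'" looks like in practice -- a minimal, hypothetical Python sketch of self-referential information processing in the thin sense discussed above, not PixyMisa's actual program or anyone else's:

Code:
# Hypothetical illustration only: an object that processes information
# about its own state and structure alongside its external input.
import inspect


class SelfMonitor:
    def __init__(self):
        self.history = []  # the object's record of its own past summaries

    def step(self, observation):
        # Each summary refers both to the outside input and to the object itself.
        summary = {
            "observation": observation,
            "steps_taken_so_far": len(self.history),
            "own_methods": [name for name, _ in inspect.getmembers(self, inspect.ismethod)],
        }
        self.history.append(summary)
        return summary


monitor = SelfMonitor()
print(monitor.step("hello"))  # the report describes the object as well as the input
print(monitor.step("world"))  # and now it reflects that the first step happened

Trivial as it is, that is the sense in which "any programmer can code" it; whether that thin sense deserves the word "consciousness" is exactly what is in dispute here.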
 
Don't you think that an "operational definition" should be done on SCIENTIFIC grounds, not just any Tom, Dick and Harry’s definition? Don't you think that the "operational definition" should be at the very least accepted by Neuroscientists whose SCIENTIFIC career is to formulate such a definition? And if not, then at the very least discuss it with them and show them why they are wrong?

But the operational definition pixy uses happens to be useful, and it is based in science.

The fact is, we can look at any system and ask "is it processing information?" and if the answer is no, we can be absolutely 100% sure it is not conscious.

We can further look at any system and ask "is it self-referencing?" and if the answer is no, we can be absolutely 100% sure it is not conscious.

So at the very least such an operational definition allows us to reject 99% of the universe as not being conscious, and furthermore the partitioning is 100% correct in at least one direction -- based on those two constraints alone, and assuming we are accurate in applying those constraints, there is no system we might consider non-conscious that could turn out to be conscious.
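
As a rough sketch of that one-way filter (hypothetical Python; the two predicates are placeholders, since nothing here says how they would actually be measured):

Code:
# Hypothetical sketch of the "necessary conditions only" filter described above.
# The predicates are placeholders, not real measurement procedures.

def processes_information(system) -> bool:
    # Placeholder: assume this property has been established by observation.
    return getattr(system, "processes_information", False)


def is_self_referencing(system) -> bool:
    # Placeholder, as above.
    return getattr(system, "self_referencing", False)


def not_ruled_out(system) -> bool:
    # False means "definitely not conscious" under the operational definition;
    # True means only "not yet ruled out". The filter only works one way.
    return processes_information(system) and is_self_referencing(system)

The asymmetry is the whole point: a failing answer is conclusive, a passing answer is not.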
 
That's exactly what you've been doing. My principal objection to your side's argument is that it all rests on the use of an undefined concept. You can't say what consciousness is, yet you're rejecting one definition because it's not "remarkable" enough to suit your tastes. Piggy can't say what consciousness is, but he's perfectly comfortable ranking this or that into different grades of consciousness because he thinks he knows it when he sees it.

You each already have your own operational definitions, which are just as poorly justified as Pixy's. Only Pixy acknowledges his definition is merely operational, and thus doesn't have to try and justify its use in all theoretical instances.


If one claims to know and then his hypothesis leads to WRONG RESULTS, his claim to knowing does not become more valid than that of a person who claims not to know, or of a person who postulates a hypothesis that is not yet verified or negated.

Just because one claims to know while others claim to not know does not make his claim valid. This is one of the underlying concepts of the scientific method.

Also, when one makes a hypothesis and it cannot be verified one way or another, that does not make it less valid than a hypothesis that has been verified to be invalid because its application produces invalid results. In fact the unverified one is more valuable, because a hypothesis that might work is more valuable than one that is known not to work. The hypothesis now verified not to work did serve a purpose, which is to scratch off one of the many possibilities. However, the hypothesis that has not yet been verified is still valuable because it quantifies a possibility that may have escaped our attention in the quest for a working solution.

Consider this.... we have persons A, B, C and D..... they are discussing a way to make the object O.

A says it should be made by X-method......so they try it..... he turns out to be wrong and X-method does not make O.

B says I propose method Y..... but unfortunately we do not know yet how to do Y.

C says no..no... I propose method Z but me too I am not sure how to do Z.


Now.... we have these facts:
  • X definitely does not work
  • Y and Z are possibilities but we have no idea if either or both or neither would work.

So here is the question to you.......which is the valid method among X, Y and Z?

Which method would you rally behind and support? X that is known to fail or Y or Z that no one knows either way?

Now D starts calling B and C kooks for not agreeing with A, and calls them magic bean holders for not agreeing with him. D's support for A is based upon the passionate pleasure of A being the only one definitely sure of a “solution”, regardless of whether it works or not, while the others do not have any working solutions.

How rational are D and A for defending X, despite having all the proofs that it is not a functional solution?


You may disagree and claim that in fact X works….. So now the questions to you are
Do you think that there are currently “typical modern computers that are conscious”?

Do you think that such an achievement would not in any way be a remarkable milestone in science?
 
I admit, that was a problem with your strawman interpretation of my scenario.

However it wasn't actually a problem with *my* scenario, because I clearly stated that a premise of *my* scenario was maintaining entirely equivalent behavior via a magical machine.

But the machine's magic, which you described, did not do that.

The only way to make the new system "entirely equivalent" is to produce a replica.

You preserved the interactions of the particles, but as the icy branch example demonstrates, this does not mean your system will behave the same as the original.

The scenario you describe simply does not do what you claim it will do.

End of story.
 
Then how can you say this:

Simple. Although imagination is "real", it takes place in the brain.

Once you start saying that the things imagined have some sort of real existence outside the brain -- which is what you're doing when you grant logical computations the same status as physical ones -- you end up with radical errors in your conclusions.
 
I believe you have been. I don't think anyone has said that neurobiology, or the computation of intelligence, is in any way a solved problem. You keep asserting they have, though.

This is a 180 of what I've been saying, so I really can't respond to it.

Or, at least, it would be if you didn't conflate intelligence and consciousness, so with that error added in, I'm really at a loss.
 
To the extent that piggy claims we have asserted to have "solved the problem" it is only in the context of our assertions that the above idea is sound regardless of what "acting like a neuron" entails, which I would agree with -- we certainly have asserted that.

To my knowledge, only PixyMisa makes claims that the issue is solved (by SRIP).
 
Computations are, necessarily, physical. What do you mean by this?

Not according to the computational literalists on this thread.

They claim, for instance, that the logical computations are actually being performed by the machine.

In other words, the physical computations actually carry out the logical computations in some objectively real way which does not involve the brain states of programmers and users.

That's where their "world of the simulation" argument comes in, and that's how they justify the notion that you can replace your brain with a machine running a sim of a brain, despite the physical work of the machine being radically different from the physical work of the brain.

But this is, of course, ridiculous.

It's tantamount to saying that when Olivier is on the stage playing Hamlet, not only is a 20th century British actor on the stage, but a 13th century Danish prince is on the stage as well. When actually, the Danish prince is only "real" as brain states in the minds of the audience.
 
None of this tells us whether it is conscious. And none of this answers my question: how could you tell if it is?

Like I said, it wasn't built to be conscious, so it's not.

I'm willing to ignore the chances that it is conscious accidentally, for the same reason I'm willing to ignore the chances that we could accidentally build a machine capable of running a 100 yard dash.

So at the moment, there's no need to develop tests for whether or not machines are conscious.

We'll be happy when we know enough about the brain to determine objectively whether other animals are conscious.

ETA: We can now do this for patients in vegetative states, under favorable conditions (e.g. hearing not damaged), but at the moment it depends on language communication (one way, of course).
 