
Explain consciousness to the layman.

How does acknowledging this remove the ‘specialness’ when a)…there is no clear understanding, either metaphysically or scientifically, of what ‘material process’ even means (did you hear…the latest research has concluded that matter […presumably the metaphysical root of the word ‘material’] is informational in nature)…and b) there is no clear understanding of how consciousness results from it…and c) there is no clear understanding of what the ‘it’ (consciousness) is that results?

Tell me tsig…is mathematics a ‘material’ process?

No one claimed that humans are special, as evinced by the inclusion of dogs, cats, and even squirrels as conscious.

The subject at hand is consciousness, and it has been repeated here quite often that animals are conscious, such as pigeons with a brain the size of a hazelnut.

So no one is saying anything about humans being special. Go read the posts and you will find out.




Obviously one of us should read the posts.
 
Thought experiment. A US (could be any country) robot drone is given the ability to make decisions based on logical inferences from input and is programmed to do good by destroying evil. It is sent into enemy airspace, where it intercepts messages so as to decide on a strike. It hears that the US is the great Satan, and that US airstrikes are resulting in civilian deaths. It then decides that the greater good is to change sides. Duh, the programmer did not think to put in such an obvious master instruction to forbid that - since the enemy is evil, the program says destroy evil - simple logic. Unintended consequences happen all the time.

The "PUPPET" no longer has strings. The programmer did not think that the computer would make such inferences because the programmer did not realize what sort of messages would be intercepted. The puppet has gone rogue. When it shoots a missile, it is not simulating warfare, or playing with symbols.

To prevent such an occurrence, the bot-drone's program is run through a simulator, and messages are simulated. One can predict the drone's behavior IF the messages are foreseen and used as input. If unwanted and unpredictable input becomes part of the computer's database that the program can act on, the results are unpredictable.

I think this is possible with today's technology, but we would like to think that the military would A) not relinquish control, or B) put in a self-destruct safety, or whatever.

But projects today are teams of specialists who make assumptions about what the other teams are doing. "But I thought you guys put in the over-ride" and "Yeah, we never thought you lot were dumb enough to let the machine "think" for itself".
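
To make the point concrete, here is a minimal sketch of the kind of naive "destroy evil" logic I have in mind (purely illustrative; the function name and message format are invented, not any real system):

Code:
# Hypothetical illustration only -- a naive rule that classifies whoever a
# message blames for civilian deaths as "evil", with no master instruction
# forbidding the drone from turning on its own side.

def pick_targets(intercepted_messages):
    evil_parties = set()
    for msg in intercepted_messages:
        if "civilian deaths" in msg["content"]:
            # Simple-minded inference: the blamed party must be evil.
            evil_parties.add(msg["blames"])
    # No rule says "never target your own side", so nothing prevents this.
    return evil_parties

messages = [{"content": "US airstrikes are causing civilian deaths",
             "blames": "US"}]
print(pick_targets(messages))  # -> {'US'}  -- the unintended consequence

The "bug" is not in any single line; it is in the assumption about what messages the rule would ever see.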




Nice STORY..... you must be a SciFi writer or very avid reader of it.

Anyway..... how is that any different from a train derailing, or the brakes on a car failing, or the cooling machinery on a nuclear reactor malfunctioning and causing a meltdown?

Is the car, or train or nuclear plant to blame? Were they consciously trying to kill the humans ENSLAVING them?

A robot that malfunctions due to bad programming is no more conscious of its malfunction than a remote controlled drone suddenly malfunctioning and flying back to base to offload its missiles on the controller.

If a remotely controlled RC airplane loses its receiver and subsequently crashes into the windshield of a car on the freeway, causing the occupants to die.....would you think the RC plane was trying to kill these people, like in Alfred Hitchcock's film The Birds?

Engineered things can and often do malfunction......but we blame the engineer or the operators and not the machine that malfunctioned because we know and understand that the machine has no mind of its own.

A robot that is performing according to a badly written program is just like a car that loses hydraulic steering fluid and stops responding to the steering commands from the driver. Both contraptions are as conscious as the watches on the wrists of the humans the malfunction unhappily caused to die.
 
Leumas:

Patrick Stewart playing pretend notwithstanding, could you explain the connection between a claim that symbolic manipulation can produce consciousness, and the notion that the claimant is denying that physical processes produce consciousness, given that symbolic manipulation is a physical process?

And don't think you're justifying this by the equivalence notion. I get that objection. And that's fine.

But what your opposition is claiming cannot be said to be equivalent to what you're attributing it to, unless you are making a mistake. I get that launching frozen chickens out of a cannon at airplane windows isn't the same as flying into a stray pigeon. But, come on. That frozen chicken is as physical as it gets.

It's actually important to get these things right. Sloppy thinking is inexcusable.
 
Leumas said:
A robot that malfunctions due to bad programming is no more conscious of its malfunction than a remote controlled drone suddenly malfunctioning and flying back to base to offload its missiles on the controller.

I was trying to address the "puppet" aspect, not the conscious aspect. You avoid this by calling bad programming a malfunction.

But beyond simple programming: Are you saying that if a human designs a machine using transistors, it cannot become conscious, no matter how complex the machine or AI techniques used?
 
Every time you wake up, you prove that consciousness exists.

No I don't. The only thing my waking up proves is that I woke up.

In spite of its popularity as a theory, I've never seen the "Consciousness as an illusion" idea as making any sense at all. I don't think it's possible to put the theory in a sentence that isn't self-contradictory.

Well, we know illusions exist, so the implication seems to be something other than non-existence. It might mean that consciousness does not exist as we intuitively might think it does (perhaps as some kind of “force” in its own right, or something similar).

And waking up means turning on the experience generator.

Correct in general, but it still seems to me that explaining how all that works would have to treat (A) the different kinds of experiencing and (B) the systemic mechanism (the generator) as conceptually distinct; they might belong to different explanatory categories, with the mechanism explained in terms of more general principles. Hence we probably have to explain them in different ways, using different levels of abstraction.
 
Well, we know illusions exist, so the implication seems to be something other than non-existence. It might mean that consciousness does not exist as we intuitively might think it does (perhaps as some kind of “force” in its own right, or something similar).
Right. As the Libet experiment suggests, we think consciousness involves agency, but that may just be an illusion. It seems that agency might be an unconscious process and consciousness is just the action replay.
 
Leumas:

Patrick Stewart playing pretend notwithstanding, could you explain the connection between a claim that symbolic manipulation can produce consciousness, and the notion that the claimant is denying that physical processes produce consciousness, given that symbolic manipulation is a physical process?

And don't think you're justifying this by the equivalence notion. I get that objection. And that's fine.

But what your opposition is claiming cannot be said to be equivalent to what you're attributing it to, unless you are making a mistake. I get that launching frozen chickens out of a cannon at airplane windows isn't the same as flying into a stray pigeon. But, come on. That frozen chicken is as physical as it gets.

It's actually important to get these things right. Sloppy thinking is inexcusable.



Absolutely correct..... finally something we can agree upon.... so please stop doing it.
 
Keep it civil and on topic. The topic is not the other posters.
Replying to this modbox in thread will be off topic. Posted By: kmortis
 
Well, we know illusions exist, so the implication seems to be something other than non-existence. It might mean that consciousness does not exist as we intuitively might think it does (perhaps as some kind of “force” in its own right, or something similar).


We know that illusions exist, but they exist in terms of our conscious interaction with the world. For example, a person can have a pain relating to a missing limb. The pain might be illusory in its location and cause, but the experience of the pain can't be an illusion.

If we had a specific definition and concept of what consciousness is, then we might have the opportunity to get it wrong - but we don't.
 
I was trying to address the "puppet" aspect, not the conscious aspect. You avoid this by calling bad programming a malfunction.


I am sure that you agree that if the pull on one of the strings of a puppet causes it to punch Judy to smithereens, then it is only out of bad design or a malfunction of the string, and not out of any intention to actually obliterate Judy, no matter how much Punch may appear to want to do so based on the history of their violent relationship.


But beyond simple programming: Are you saying that if a human designs a machine using transistors, it cannot become conscious, no matter how complex the machine or AI techniques used?



No, I am not saying that..... I have repeatedly said in multiple posts that something akin to a Neural Network system, as far as I know, just might be able to achieve it..... in the scifi terms that you like…. a positronic brain like Data's, for instance, whatever positronic may turn out to be one day.

If you know what Neural Networks are made of, you would realize that there are scads of transistors in just one neural node, let alone the billions that might be required. NNs are definitely “human designed machines using transistors” and I POSTULATE that they might be a candidate for achieving consciousness in a manmade machine.

So I hope you can now see that it is not what I am saying.
 
NNs are definitely “human designed machines using transistors” and I POSTULATE that they might be a candidate for achieving consciousness in a manmade machine.

Ultimately, since we haven't actually done it yet, this is all anyone can say. This entire thread has been about how much hand-wringing and "gee golly you guys, science doesn't know everything" apologetics are necessary to accompany said postulate.
 
No, both types take place in physical reality.

Physical computation takes place literally everywhere.

Logical computation involves (or is) changes in the state of a brain, so it's more limited in scope, but still just as real.

OK, let's just end this nonsense once and for all.

What is the significant fundamental difference between a generalized computer and the human brain that causes you to think generalized computers can't be conscious?

Neurons function by integrating signals transferred by waves of ion flux across membranes. Transistors function by integrating signals transferred by waves of electron flux through a circuit. Both entities integrate signals based on changes in the flux of charged particles.

Brains don't change shape, or even connectivity, during transient thought. All the neurons stay the same, and synaptic plasticity doesn't occur on a timescale short enough for us to be able to list it as a core requirement for consciousness. The only thing that changes is the pattern of signals travelling around the network. Computers are exactly the same -- they don't change shape, and the connectivity between transistors remains static. The only thing that changes is the pattern of signals travelling around the network.

So what is so special about a brain that a computer can't do the same? You still haven't answered that question, piggy, and it seems like the most important question of all. Fundamentally, both systems operate in a very similar manner and are capable of very similar signal flow.

Furthermore, if the flow of information is what is important, then why do you assert that the physical connectivity must be similar? If the information flow is equivalent between two different network layouts, who is to say which one is correct? This is tantamount to saying that a neural network built with transistors might be conscious, but one built with vacuum tubes, relays, or any other kind of switch can't be. Why not? I don't understand what criteria you are using to judge whether a given physical device is suitable, when the only metric that should matter is whether it can integrate signals based on changes in the flux of charged particles.
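
To put the "integrate signals past a threshold" idea in concrete terms, here is a minimal sketch (illustrative only, with invented weights and parameters); note that nothing in it depends on whether the underlying flux is ions across a membrane or electrons through a circuit:

Code:
# Illustrative only: one update step of a leaky threshold ("integrate and
# fire") unit. The same abstract rule describes a neuron integrating ion
# flux or a circuit integrating electron flux.

def integrate_and_fire(inputs, weights, state=0.0, threshold=1.0, leak=0.9):
    """Return (fired, new_state) after integrating one round of inputs."""
    state = leak * state + sum(w * x for w, x in zip(weights, inputs))
    if state >= threshold:
        return True, 0.0      # crossed the threshold: fire and reset
    return False, state       # sub-threshold: keep accumulating

fired, state = integrate_and_fire(inputs=[1, 0, 1], weights=[0.4, 0.3, 0.7])
print(fired)  # True -- 0.4 + 0.7 = 1.1 crosses the 1.0 threshold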
 
No I am not saying that..... I have repeatedly said in multiple posts that something akin to a Neural Network system as far as I know, just might be able to achieve it..... in scifi terms that you like…. a positronic brain like in Data for instance whatever positronic may turn out to be one day.

If you know what Neural Networks are made of, you would realize that there are scads of transistor in just one neural node let alone the billions that might be required. NNs are definitely “human designed machines using transistors” and I POSTULATE that they might be a candidate for achieving consciousness in a manmade machine.

So I hope you can now see that it is not what I am saying.

But it isn't logically consistent to claim that a physical neural network made of transistors could be conscious yet a simulated neural network made of transistors could not.

The only difference is in the way the transistors are arranged, not in the way information is processed. The information flow in the latter is entirely isomorphic to that in the former, meaning any causal integration events are fully preserved.

To say that this difference matters is saying that something besides the ability to integrate signals might be important to the functioning of neurons -- but what would that be? We have zero evidence of any such mystery behavior.
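
As a minimal sketch of what "entirely isomorphic" means here (a toy network with invented weights, not a claim about any real hardware), the same two-input network stepped in two differently arranged ways fires exactly the same units:

Code:
# Toy illustration of isomorphic information flow: the same tiny network
# evaluated two ways -- one organized "per unit", one as a flat scan over a
# connection table -- fires the same units, even though the computation is
# arranged differently.

WEIGHTS = {("a", "c"): 0.6, ("b", "c"): 0.5}   # units a and b both feed unit c
THRESHOLD = 1.0

def step_per_unit(firing):
    """Each unit sums its own dedicated inputs (the 'wired' arrangement)."""
    total = sum(w for (src, dst), w in WEIGHTS.items()
                if dst == "c" and src in firing)
    return {"c"} if total >= THRESHOLD else set()

def step_table_scan(firing):
    """A flat scan over a stored connection table (the 'program' arrangement)."""
    totals = {}
    for (src, dst), w in WEIGHTS.items():
        if src in firing:
            totals[dst] = totals.get(dst, 0.0) + w
    return {dst for dst, total in totals.items() if total >= THRESHOLD}

# Same inputs, differently arranged computation, identical result.
assert step_per_unit({"a", "b"}) == step_table_scan({"a", "b"}) == {"c"}
assert step_per_unit({"a"}) == step_table_scan({"a"}) == set()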
 
Ultimately, since we haven't actually done it yet, this is all anyone can say.



I am happy to see that you agree with my post made 31 pages ago.



This entire thread has been about how much hand-wringing and "gee golly you guys, science doesn't know everything" apologetics are necessary to accompany said postulate.


But then you have to also take into consideration that the preponderance of evidence is in favor of REALITY as it stands here and now and not conjectured SciFi.



Just one thing I would like to add.... CURRENTLY there is no machine that is conscious.

Whether we can or cannot ONE DAY build one is just SPECULATION.

So if we are discussing science FICTION, then why get so heated up and bothered about it, and why deride, abuse and malign people who do not share the same wishful and FICTIVE view about what could perhaps, maybe, given the possibility of this or that, one day hopefully come to be almost as we thought it might be?


Why not admit that it is all an exercise in speculation, and that whatever view one may hold about the subject is much less valid than what is FACT today?

Sure.... much of the more intelligent science FICTION has become reality.....but....the fact is much more has not...and.... much of what has become reality did not do so in the same way as the fiction.

[snip]

Why can’t we CONJECTURE in a civil manner and realize that speculative ideas are based on assumptions, and that if there is no way to verify the suppositions then you have to accept (at least tentatively) any counter point of view that is based on CURRENT REALITY and is thus much more likely to be valid than any conjecture?

[snip]
 
But it isn't logically consistent to claim that a physical neural network made of transistors could be conscious yet a simulated neural network made of transistors could not.

The only difference is in the way the transistors are arranged, not in the way information is processed. The information flow in the latter is entirely isomorphic to that in the former, meaning any causal integration events are fully preserved.
To say that this difference matters is saying that something besides the ability to integrate signals might be important to the functioning of neurons -- but what would that be? We have zero evidence of any such mystery behavior.



Go read about NNs, about computer programming, and about how the brain works (as far as we know); compare that to the way NNs work, and contrast it with the way CPUs and computer programs work, and then you will see why the highlighted statement is wrong.
 
Ultimately, since we haven't actually done it yet, this is all anyone can say. This entire thread has been about how much hand-wringing and "gee golly you guys, science doesn't know everything" apologetics are necessary to accompany said postulate.

If that was all that anyone did say, we probably would have just said "dunno" and left it at that. However, there are people who claim that consciousness is computational, and that a sufficiently large computer running the right program would definitely be conscious, and that all alternative theories are magical in nature, and that this is mathematically provable and proven. They claim that if a process is computationally equivalent to another process, then it is functionally equivalent as well.

I can see why there's an argument between the people claiming we do know, and the people who claim we don't. What I don't get is the people who admit we don't know, but who are still siding with the people who claim we do.
 
Go read about NNs, about computer programming, and about how the brain works (as far as we know); compare that to the way NNs work, and contrast it with the way CPUs and computer programs work, and then you will see why the highlighted statement is wrong.

Heh, I think being a professional A.I. programmer with more than the equivalent of a molecular biology minor qualifies me as "knowing enough" to discuss this issue without needing to "go read about NNs and computer programming and also about how the brain works."

So having thwarted the awkward and unnecessary credential gauntlet, can I move on to the actual question?

If you look at the organization of transistors in a "neuron emulator," for instance, which is what I would call a set of transistors that are part of a network physically arranged as a NN, it is clear which transistor plays what part in the integration of the input and furthermore exactly what that input is -- since you can trace the circuit back to all the "neuron emulators" upstream (or downstream, as it were).

However it is just as clear which transistor plays what part in a computer, you just have to think a little harder. The fact that the transistors are now distributed and re-organized so that a subset of the group performs all the integration while another subset stores the results of the integrations doesn't change that -- you can still point to a given transistor at a given state in the algorithm and say "that transistor is doing this <>" and the explanation can be spot on.

In both cases, you have a set of transistors that change state deterministically based on the previous state and the way the network is laid out, and at any given state you can look at a transistor in the physical NN and point to its exact correlate in the circuit running the simulated NN. Meaning, the sequence of state transitions is isomorphic.

Now if you think there is something special about the actual physical topology of a circuit, then I agree, that is certainly different and it would make a difference. But I don't subscribe to that viewpoint; my opinion is that everything important is in the sequence of deterministic state transitions, and the physical topology of the network is irrelevant as long as it supports the "same" (isomorphic) sequence of state transitions.

Also, note that it is easy to prove that the sequences are isomorphic -- I could hook up an LED to each artificial neuron in the physical NN such that the LED lights up when the neuron integrates past the threshold for it to fire, and I could also hook up a computer monitor such that the simulated NN is displayed in a two-dimensional diagram where parts of the screen light up when a certain simulated neuron integrates past the threshold for it to fire. And if you superimposed the two, the light on the screen would match the light from the LEDs -- whether a human was there to see it or not.
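
And as a toy version of that demonstration (invented weights and unit names, illustrative only): stepping a "hard-wired" arrangement and a "table-driven simulation" of the same three-unit chain side by side gives the same firing sequence, tick for tick -- which is what the superimposed LEDs and monitor would show.

Code:
# Illustrative only: a three-unit chain (in -> h -> out), run as a per-unit
# "wired" version and as a table-driven "simulated" version. The printed
# firing pattern stands in for the LEDs and the lit-up screen regions.

WEIGHTS = {("in", "h"): 1.2, ("h", "out"): 1.1}
THRESHOLD = 1.0

def step_wired(firing):
    """Per-unit wiring: each downstream unit sums its own inputs."""
    fired = set()
    for unit in ("h", "out"):
        total = sum(w for (src, dst), w in WEIGHTS.items()
                    if dst == unit and src in firing)
        if total >= THRESHOLD:
            fired.add(unit)
    return fired

def step_simulated(firing):
    """Flat scan over a stored connection table (the simulated version)."""
    totals = {}
    for (src, dst), w in WEIGHTS.items():
        if src in firing:
            totals[dst] = totals.get(dst, 0.0) + w
    return {dst for dst, total in totals.items() if total >= THRESHOLD}

leds, screen = {"in"}, {"in"}        # the same external stimulus to both
for tick in range(4):
    assert leds == screen            # the superimposed displays always match
    print(f"tick {tick}: {sorted(leds)}")
    leds, screen = step_wired(leds), step_simulated(screen)
# tick 0: ['in'], tick 1: ['h'], tick 2: ['out'], tick 3: []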
 
If that was all that anyone did say, we probably would have just said "dunno" and left it at that. However, there are people who claim that consciousness is computational, and that a sufficiently large computer running the right program would definitely be conscious, and that all alternative theories are magical in nature, and that this is mathematically provable and proven. They claim that if a process is computationally equivalent to another process, then it is functionally equivalent as well.

I can see why there's an argument between the people claiming we do know, and the people who claim we don't. What I don't get is the people who admit we don't know, but who are still siding with the people who claim we do.

Well, it boils down to having faith in the math and logic.

I know it is hard for some people to grasp, for instance, that if you start with zero, and add 1 repeatedly to the number, you will eventually reach 92983928493948234829834983492384.

I don't need to sit there all day and do it, because my knowledge of mathematics tells me that I can reach any positive integer by adding 1 to an integer 1 less than the goal, and by induction this reduces exactly to the base case of adding 1 to zero.

If one didn't have that knowledge, I can see how they would be dubious of the claim that "we know for sure you can reach 92983928493948234829834983492384 by adding 1 to a number repeatedly."

Likewise, if one had the knowledge, but lacked faith in their knowledge, they might say to themselves "math says that for sure you can reach 92983928493948234829834983492384 by adding 1 to a number repeatedly, but that is such a big number -- maybe math is wrong when it comes to such big numbers?"
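
Spelled out (my own formalisation, just to make the step explicit): let P(n) stand for “n is reachable from 0 by repeatedly adding 1”. Then

\[
P(0) \;\wedge\; \bigl(\forall n \in \mathbb{N} : P(n) \Rightarrow P(n+1)\bigr) \;\Longrightarrow\; \forall n \in \mathbb{N} : P(n),
\]

and in particular P(92983928493948234829834983492384) follows, without performing the additions one by one.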
 