
On Consciousness

Is consciousness physical or metaphysical?


Status
Not open for further replies.
Studying the pharmacological profile of drugs does little to understand their subjective effects.

However, experiencing all the subjective effects, and then noting that these subjective effects are the result of a molecule which enters the brain, diffuses into synapses, and binds to neurotransmitter receptors or ion channels, thereby interfering with normal synaptic transmission, leads one to the conclusion that altered synaptic transmission alters subjective experience. (Conversely, subjective experience reduces to synaptic activity.)

This inescapable conclusion, together with understanding the effects of brain damage and brain stimulation, combine to make a very strong case that all of subjective experience is strictly a manifestation of brain activity.
 
This inescapable conclusion, together with understanding the effects of brain damage and brain stimulation, combine to make a very strong case that all of subjective experience is strictly a manifestation of brain activity.


Agreed; there seems to always be a testable physical basis for all altered states of consciousness.

However, the exception to this rule seems to be altered states of consciousness like sleep.

There is no way to tell if a person is asleep or dreaming from brain waves or tests. A person can be showing all the usual signs of being awake: the prefrontal cortex lights up, the limbic system fires into action, all sorts of brain systems can be on and running as if awake; yet still, to this day, the only way to tell if a person is awake or asleep is by asking them. Even EEGs of people in REM sleep are practically identical to those of a person in an awake state.
 
The creative brain is conscious; not logical. It's non-computational at core, which is why you cannot predict a person's behavior, even if you can model a group's behavior and actions based on averages and statistics.

I've been studying Penrose's Google Talk, and many of your statements sound as if you've bought his theses lock, stock, and barrel. I find them riddled with baseless assumptions and fallacies. Your statement that consciousness is "non-computational at core," virtually word for word Penrose, is baseless. How do you know it? Appeal to authority? It's an argument from ignorance: you and Penrose don't know of a computational solution to consciousness, so you assume there can't possibly be one.

Why are you even bringing up AI and computational models in a thread about consciousness? Strong AI has no place in science.

I know you are but what am I? (Pee-Wee Herman fallacy)

If Strong AI ever produced software that was about as intelligent as we are, then it should be able to reprogram and upgrade itself, leading to recursive self-improvement at a near-exponential rate.

Hyper-intelligent software may not necessarily decide to support the continued existence of mankind, would be extremely difficult to stop, and poses a risk to civilization, humans, and planet Earth. Friendly AI models run fundamentally against the laws of natural selection, and so are unlikely to be successful.

Berglas, Anthony (2008), Artificial Intelligence will Kill our Grandchildren

Do I detect an appeal to consequences fallacy? If we make hyper-intelligent software, it will end mankind? Does this have any bearing on whether or not it's possible?

The problem is that the computers will never be conscious like we are, even if they are more intelligent. They will have no sympathy, empathy or care for our needs.

Exactly how do you know this?

In terms of computational theory, an AI researcher need not be a computationalist, because they might believe that computers can do things brains do noncomputationally. Perhaps calling computationalism a theory is not exactly right here. I think "dogma," "working hypothesis," or "working assumption" is more suitable. The evidence for computationalism is not overwhelming, and some even believe it has been refuted, by a priori arguments or by empirical evidence.

I want to hear this so-called refutation of computationalism.

Getting back to your previous comment about the Turing test for consciousness, Turing’s test is not necessarily relevant to the computational theory of consciousness. It doesn’t particularly help develop theoretical proposals, and it gets in the way of thinking about intelligent systems that obviously can’t pass the test. Somewhere in this thicket of possibilities there might be an artificial intelligence with an alien form of consciousness that could pretend to be conscious on our terms while knowing full well that it wasn’t. It could then pass the Turing Test by faking it. All this shows is that there is a slight possibility that the Turing Test could be good at detecting intelligence and not so good at detecting consciousness.

Now, you're a hair's breadth from the Philosopher's Zombie issue. If you replicated a human brain in a computer, neuron by neuron, and it behaved exactly like a human brain, to the point where it claimed it had subjective experiences, what's there to argue?
 
And evolution suggests that a sufficiently powerful AI would probably destroy humanity.
This is one of the most silly arguments I have heard for a long time. In what way will an AI destroy humanity? How would an AI live without humanity? In what way is an AI subject to evolution? Does it reproduce? Does it mutate?

The problem is that the computers will never be conscious like we are, even if they are more intelligent. They will have no sympathy, empathy or care for our needs.
If a computer runs an accurate simulation of every molecule in a human brain, just how can you claim it will not have consciousness exactly like humans? You just know?
 
This is one of the most silly arguments I have heard for a long time. In what way will an AI destroy humanity? How would an AI live without humanity? In what way is an AI subject to evolution? Does it reproduce? Does it mutate?


If a computer runs an accurate simulation of every molecule in a human brain, just how can you claim it will not have consciousness exactly like humans? You just know?

I agree that strong AI does not pose any threat to humanity, as it will be utterly dependent on us for its mere survival and as such will "evolve" to have a friendly relationship with its biological creators. But I would also say that yes indeed, it will evolve and mutate in a manner analogous to biological evolution. We will use genetic algorithms to create it, we will introduce randomness of one kind or another, and we will choose to nurture AIs that are more agreeable and useful and dismiss the ones that are not.
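As a purely illustrative sketch of that nurture-and-dismiss idea (the bit-string "genomes", the agreeableness fitness, and every parameter here are invented for the example, not taken from any real AI project), selection plus mutation looks like this in Python:

```python
import random

random.seed(0)  # reproducible toy run

GENOME_LEN = 20   # each candidate "AI" is a string of 20 bits
POP_SIZE = 30
GENERATIONS = 40

def fitness(genome):
    """Count 1-bits: our stand-in for how 'agreeable and useful' a candidate is."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Introduce randomness: each bit flips with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Start from entirely random candidates.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]                  # nurture the agreeable half
    children = [mutate(random.choice(survivors))            # dismiss the rest and breed
                for _ in range(POP_SIZE - len(survivors))]  # mutated copies of survivors
    population = survivors + children

best = max(population, key=fitness)
```

After a few dozen generations the fittest genome sits at or near the maximum; the mechanism is the same selection-plus-variation loop as biological evolution, just with a designer-chosen fitness function.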

I also agree with you that the simulation would be conscious. There is a slide in the video mentioned above with a thought experiment replacing a single neuron in the brain with a mechanical analogue that does the exact same thing within the biological brain. What if we replaced each and every neuron with one of these analogues? When, if ever, would consciousness disappear? I find the conclusion difficult to accept at some level but cannot think of any reason not to accept that this machine would be conscious.
 
Zeuzzz said:
I seem to remember asking you a similar question before ... and your replies have tended to be vague, up until now
Mr. Scott said:
I'm not following you there. Restate the question if you will. If I feel it's off-topic I may not feel obligated to answer.


Following on from the hilarity of my previous comment being reported, getting a warning, and being removed to Abandon All Hope (despite being perfectly on topic, as it relates to consciousness and altered states, ironically much more so than many other posts), let's continue this discussion here anyway, since that thread was locked.

You are posting in a thread about consciousness. You are posting links to computational models of consciousness. Yet whenever I try to pin you down to demonstrate where exactly the source code shows consciousness, or what exact behavior shows consciousness, you have tended either not to reply or to claim that you never said it was showing consciousness.

So now it's your turn. You are the one posting this sort of material in this thread, with this title.

Go!
 
You are posting in a thread about consciousness. You are posting links to computational models of consciousness. Yet whenever I try to pin you down to demonstrate where exactly the source code shows consciousness, or what exact behavior shows consciousness, you have tended either not to reply or to claim that you never said it was showing consciousness.
Where exactly in a giraffe's DNA is the giraffe?
 
Following on from the hilarity of my previous comment being reported, getting a warning, and being removed to Abandon All Hope (despite being perfectly on topic, as it relates to consciousness and altered states, ironically much more so than many other posts), let's continue this discussion here anyway, since that thread was locked.

You are posting in a thread about consciousness. You are posting links to computational models of consciousness. Yet whenever I try to pin you down to demonstrate where exactly the source code shows consciousness, or what exact behavior shows consciousness, you have tended either not to reply or to claim that you never said it was showing consciousness.

So now it's your turn. You are the one posting this sort of material in this thread, with this title.

Go!

OK, you haven't posted this in the form of a question, but I'll respond anyway.

I really don't know if any present AI implementation achieves consciousness, whatever that really means. Among non-dualists it's generally agreed that consciousness (at least the biological type we're familiar with) is an emergent property that would not, by definition, be local to a line of code or section in a computer program.

If this doesn't answer your question, then it may help if you put it in the form of a question, and I'll try again.

Re: on/off topic, posts glorifying the "whoa, this universe is awesome, man" feeling, I'd agree are off-topic. If you're sure one of my posts is an off-topic derail, report it.
 
Yet whenever I try to pin you down to demonstrate where exactly the source code shows consciousness
It has been explained many times by now that in neural programming there are no lines that code for the job the program is going to execute. Instead, the program is taught how to do its job, and it is not possible to pinpoint where exactly the job is coded, even after the program has learned to do it.
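A minimal, hypothetical Python example makes this concrete: a perceptron taught the logical AND function. No line below states the AND rule; after training, the behaviour resides entirely in the learned weights, so there is no single line where the job is coded.

```python
import random

random.seed(1)  # reproducible toy run

# Training data: inputs and target outputs for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Random starting weights and bias -- no knowledge of AND anywhere.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def predict(x):
    """Fire (output 1) if the weighted sum of the inputs exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Teaching loop: show examples and nudge the weights toward the right answer.
for _ in range(100):
    for x, target in data:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err
```

After training, `predict` computes AND correctly, yet the program text never mentions AND; that is the sense in which the job cannot be pinpointed in the source.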
 
The creative brain is conscious; not logical. It's non-computational at core, which is why you cannot predict a person's behavior, even if you can model a group's behavior and actions based on averages and statistics.
This is a popular canard. People's behaviour is actually very predictable, both individually and in groups. You just need a reasonable amount of observation of their past behaviour and environment. This predictability is at the heart of social and economic interactions. We are creatures of habit. It is also why we're so surprised when someone's behaviour seems unpredictable - "It's not like her at all...", "He's not himself today..." etc. But even then there's a good chance that if we had a little more detail of their recent experiences, we wouldn't be so surprised.
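A toy sketch of that point in Python (the observation log is entirely made up for illustration): a "creature of habit" predictor that guesses someone's next choice as their most frequent past choice in the same context.

```python
from collections import Counter

# Hypothetical observation log: (context, observed choice) pairs.
history = [
    ("monday", "coffee"), ("tuesday", "coffee"), ("wednesday", "tea"),
    ("monday", "coffee"), ("tuesday", "coffee"), ("wednesday", "tea"),
    ("monday", "coffee"),
]

def predict_choice(context, observations):
    """Predict the most common past choice for this context, or None if unseen."""
    choices = [c for ctx, c in observations if ctx == context]
    if not choices:
        return None  # no observations -- this is the "surprise" case
    return Counter(choices).most_common(1)[0][0]
```

More observation shrinks the unseen-context case, which matches the point above: surprise is usually just missing detail about someone's recent history.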
 
This is a popular canard. People's behaviour is actually very predictable, both individually and in groups. You just need a reasonable amount of observation of their past behaviour and environment. This predictability is at the heart of social and economic interactions. We are creatures of habit. It is also why we're so surprised when someone's behaviour seems unpredictable - "It's not like her at all...", "He's not himself today..." etc. But even then there's a good chance that if we had a little more detail of their recent experiences, we wouldn't be so surprised.

How many is "a reasonable amount of observations"?
How much "more detail" of their recent experiences?

Which model(s) will help you make these accurate predictions of human behavior, keeping in mind that the models used in social and economic interactions have proven close to useless?
 
Agreed; there seems to always be a testable physical basis for all altered states of consciousness.
If you agree with this, then where is your argument that consciousness cannot be duplicated by a complex machine?

However, the exception to this rule seems to be altered states of consciousness like sleep.

What are the other un-named states "like" consciousness you refer to, other than sleep?

There is no way to tell if a person is asleep or dreaming from brain waves or tests. ...yet still to this day the only way to tell if a person is awake or asleep is by asking them.
This is incorrect, incredibly so. I suggest you read up on EEG and polysomnography.

http://apsychoserver.psych.arizona....kowitz_Monitoring and Staging Human Sleep.pdf

http://www.smrv-journal.com/article/S1087-0792(01)90145-5/abstract

Even EEGs of people in REM sleep are practically identical to those of a person in an awake state.

If you change the highlighted text to "similar but different", you would be closer to being correct.
 
If you agree with this, then where is your argument that consciousness cannot be duplicated by a complex machine?


Intelligence can be duplicated; human consciousness likely can't. I addressed the whys above in my reply to Mr. Scott as it relates to the Turing test, which turns out to be more a test for intelligence than for consciousness.

What are the other un-named states "like" consciousness you refer to, other than sleep?


Meditation is another altered state of consciousness that is hard to pin down with a definitive physically testable difference in the brain.



You are generalizing. Maybe I didn't word it clearly enough; REM sleep, when we are dreaming, is indistinguishable by any testable means from an awake state of consciousness.

If you change the highlighted text to "similar but different", you would be closer to being correct.


It does not apply to other states like NREM or delta sleep. Only REM sleep.
 
OK, you haven't posted this in the form of a question, but I'll respond anyway.


Let me rephrase it more coherently, then: what, in your opinion, is the best evidence that intelligent AI programs are attaining human consciousness?
 
How many is "a reasonable amount of observations"?
How much "more detail" of their recent experiences?

Which model(s) will help you make these accurate predictions of human behavior, keeping in mind that the models used in social and economic interactions have proven close to useless?

What utter nonsense.

We can predict with very good accuracy whether a person will step in front of a moving car that they are aware of.

We can predict with very good accuracy whether senior citizens who identify themselves as religious Republicans support gay marriage.

We can predict with very good accuracy whether people will spend their money on a piece of consumer electronics.

I'm always curious about this world you live in where science, and even mere rational thought, is completely stupid and pointless, since it certainly isn't the world the rest of us inhabit.
 
Logic will get you from A to B.

Imagination will take you anywhere.

- Albert Einstein.

Re-examine some of your assumed axioms, and build from there.
 
What utter nonsense.

We can predict with very good accuracy whether a person will step in front of a moving car that they are aware of.

We can predict with very good accuracy whether senior citizens who identify themselves as religious Republicans support gay marriage.

We can predict with very good accuracy whether people will spend their money on a piece of consumer electronics.

I'm always curious about this world you live in where science, and even mere rational thought, is completely stupid and pointless, since it certainly isn't the world the rest of us inhabit.

So tell me, RD, where was the prediction of the damage "Sandy" would inflict?
 
Let me rephrase it more coherently, then: what, in your opinion, is the best evidence that intelligent AI programs are attaining human consciousness?

Sorry for the delay. Sandy's left me 4 days so far without power or Internet. The good news is I studied, under candlelight, the chess position Penrose says in his Google Talk has a solution that stumps computation but is easily solved by our non-computational brains. He's wrong. Maybe next week I'll have time to elaborate.

Re your question, I doubt any current system has achieved "human consciousness." That's an extremely high bar. How about worm consciousness? If we achieved worm consciousness, would you accept that we could, in theory, scale it up to human consciousness? Why not?

My test for human consciousness might be that the machine starts to wonder about the nature of its own consciousness even though it was not specifically programmed to wonder about such things.

What would your test for human consciousness in a machine be? Or, perhaps, dog consciousness? You name the animal.
 