
On Consciousness

Is consciousness physical or metaphysical?


Well, it says that ultrasound has a mild and delayed effect on mood. No apparent change in consciousness. Results from double-blind test not reported. This report doesn't pass skeptical analysis.

I've heard that fetal ultrasound subjects had a higher-than-average rate of dyslexia. Perhaps ultrasound can jumble up synapses and interconnections. Maybe we can think of it as an extremely mild lobotomy or ECT. The experiments in that report should have been performed on animals first.
http://www.quantumconsciousness.org/documents/TUSinpress2.pdf
 
Neurons of the cerebral neocortex in mammals, including humans, are generated during fetal life in the proliferative zones and then migrate to their final destinations by following an inside-to-outside sequence. The present study examined the effect of ultrasound waves (USW) on neuronal position within the embryonic cerebral cortex in mice. We used a single BrdU injection to label neurons generated at embryonic day 16 and destined for the superficial cortical layers. Our analysis of over 335 animals reveals that, when exposed to USW for a total of 30 min or longer during the period of their migration, a small but statistically significant number of neurons fail to acquire their proper position and remain scattered within inappropriate cortical layers and/or in the subjacent white matter. The magnitude of dispersion of labeled neurons was variable but systematically increased with duration of exposure to USW. These results call for a further investigation in larger and slower-developing brains of non-human primates and continued scrutiny of unnecessarily long prenatal ultrasound exposure.
http://m.pnas.org/content/103/34/12903.full.pdf
 
Thank you for at least admitting that no matter how complex, powerful or aware the computer is, you will never accept that it is conscious.


But you see, there lies the problem ;) As David Deutsch said: "I don't think AI will be achieved until philosophical progress is made in understanding what consciousness is."

What is the concept of a replicator? Do we have the equivalent concept for consciousness? :(
 
What is this "right environment" that you are talking about? Is the brain filled with pixie dust that can't be replicated?

I don't know what you mean by pixie dust. :(

By environment I mean simply the data that the AI will perceive: its sensory input. If it becomes self-aware, will it have human motives? Could it be that it will want to eat, absorb, hunt and compete?

Sorry, I am just asking. But it's true, I am not really convinced yet. I keep learning though :)
 
By the same token, if we simulate a calculator we should only expect to get artificial numbers out of it, not real numbers.

Do you know something about your own consciousness that isn't itself a piece of information?


That's entirely different. The calculator's data are given by me. For example, I press [2+2=] and that is "my" demonstrative statement. Its outcome is based on its data. Denying the statement that 2+2=4 is a logical contradiction.
If an AI perceives you holding two apples in one hand and two in the other, will it ask you to give it an apple? Will it have a motive? I am simply saying that its senses and ideas will be entirely different from ours because its needs will be different. How will it be able to have consciousness without a prior kick from the environment? I am saying this because I believe that consciousness is partially formed by perception of the environment.
Our own needs, the competition, the struggle to survive and not be eaten by a lion... that's how our brain evolved. How will the AI experience all that in a lab, like a brain in a jar? How will it produce the necessary simulated endorphins?

The only solution I can think of is for us to create another Universe as the AI's environment. The only thing we need is a hundred-thousandth of a grain of matter :D (I don't know, just kidding)


As for the question "Do you know something about your own consciousness that isn't itself a piece of information?"

No. Because if I did, I would not be in this reality, or I would be a dualist. But could an AI answer that question 10 years from now? I just find it difficult to believe; I am saying that it will take more time.
 
I don't know what you mean by pixie dust. :(

By environment I mean simply the data that the AI will perceive: its sensory input.
As discussed up-thread, the AI will need to interact with us in some way to be recognised as conscious. That does not present a problem.

If it becomes self-aware, will it have human motives? Could it be that it will want to eat, absorb, hunt and compete ?
Well, sort of.

Those are attributes possessed even by creatures that aren't self-aware, like our sphex wasp. But they're essentially hard-wired by evolution, because the evolutionary term for creatures without those attributes is "food".

We build similar functions into some of our devices - a Roomba, for example, will avoid obstacles, gather dirt, and return to its feeding station when its battery gets low.

The difference with humans (and with human-level artificial minds) is that we can examine, in an abstract way, the meaning of those motivations. But an artificial mind will not have the biological basis for those motivations, so it depends on exactly how the AI is created.

If it's a simulation of a human brain, then it will come with a simulation of the neural pathways involved. But if we're not providing the physiological signals of hunger, those pathways won't be active.
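To make that last point concrete, here is a minimal sketch (all names and numbers are hypothetical, not from any real brain-simulation project): a simulated "hunger pathway" is faithfully reproduced either way, but it is only ever active if we choose to feed it the physiological signal it models.

```python
# A toy illustration of the point above (all names hypothetical):
# the pathway itself is simulated in full, but whether it ever
# *activates* depends on the physiological inputs we supply.

def hunger_pathway(blood_sugar_signal):
    """Return the pathway's activation given a simulated input signal.

    With no signal supplied, the faithfully simulated pathway
    simply stays quiet.
    """
    threshold = 0.3  # arbitrary activation threshold for this sketch
    if blood_sugar_signal is None:
        return 0.0   # no physiological signal provided: inactive
    return 1.0 if blood_sugar_signal < threshold else 0.0

print(hunger_pathway(None))  # no hunger input supplied -> 0.0
print(hunger_pathway(0.1))   # low simulated blood sugar -> 1.0
```

The design choice mirrors the argument: the simulation's fidelity and its activity are separate questions.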
 
I don't see why you couldn't give some proto-AI a simulated environment to perceive.


Exactly ;) But in this environment, with all its tentative applications and provisions (supposedly working), it would take many attempts before we finally created an artificial simulation of a being capable of self-awareness and abstract thought.
 
That's entirely different. The calculator's data are given by me. For example, I press [2+2=] and that is "my" demonstrative statement. Its outcome is based on its data. Denying the statement that 2+2=4 is a logical contradiction.

Sorry, but you missed my point. You said an artificial brain would produce "artificial awareness, not simulation of human consciousness". But awareness is a sort of information, and "artificial information" is indistinguishable from "real information" if the content is the same. That "4" from a simulated calculator isn't an "artificial 4".

If an AI perceives you holding two apples in one hand and two in the other, will it ask you to give it an apple? Will it have a motive? I am simply saying that its senses and ideas will be entirely different from ours because its needs will be different. How will it be able to have consciousness without a prior kick from the environment? I am saying this because I believe that consciousness is partially formed by perception of the environment.
Our own needs, the competition, the struggle to survive and not be eaten by a lion... that's how our brain evolved. How will the AI experience all that in a lab, like a brain in a jar? How will it produce the necessary simulated endorphins?
There's no reason any functional aspect you describe couldn't be included in an AI. Innate desires and fears would take the form of hard-wired goals of what to seek and what to avoid. It could be taught to call those pleasures and pains, the same as we are taught. Endorphins are an implementation detail, just one way of achieving the same functionality.

Most arguments such as yours boil down to equating irrelevant details of human behavior with consciousness in general. That simply begs the question of AI consciousness. It's as unhelpful as saying airplanes can't possibly fly because they don't have feathers. You'll just be playing that game until you include a clear definition of consciousness.
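A minimal sketch of the "hard-wired goals" idea above (all the names and values are made up for illustration): innate desires and fears are just a table of what to seek and what to avoid, and the words "pleasure" and "pain" are labels learned afterwards.

```python
# Toy agent with innate, hard-wired valences (hypothetical values):
# the table is the "desire/fear"; the labels are taught later.

INNATE_GOALS = {
    "food": +1.0,     # seek
    "charger": +0.5,  # seek
    "cliff": -1.0,    # avoid
}

def react(stimulus):
    """Return the agent's hard-wired valence for a stimulus."""
    return INNATE_GOALS.get(stimulus, 0.0)

def describe(stimulus):
    """The learned labels for the innate reactions."""
    valence = react(stimulus)
    if valence > 0:
        return "pleasure"
    if valence < 0:
        return "pain"
    return "indifference"

print(describe("food"))   # prints "pleasure"
print(describe("cliff"))  # prints "pain"
```

Whether the valence is carried by endorphins or by a lookup table is, as the post says, an implementation detail.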
 
Sorry, but you missed my point. You said an artificial brain would produce "artificial awareness, not simulation of human consciousness". But awareness is a sort of information, and "artificial information" is indistinguishable from "real information" if the content is the same. That "4" from a simulated calculator isn't an "artificial 4".


There's no reason any functional aspect you describe couldn't be included in an AI. Innate desires and fears would take the form of hard-wired goals of what to seek and what to avoid. It could be taught to call those pleasures and pains, the same as we are taught. Endorphins are an implementation detail, just one way of achieving the same functionality.

Most arguments such as yours boil down to equating irrelevant details of human behavior with consciousness in general. That simply begs the question of AI consciousness. It's as unhelpful as saying airplanes can't possibly fly because they don't have feathers. You'll just be playing that game until you include a clear definition of consciousness.

You said "numbers".. As if real numbers... :)
Not really. Real numbers???
Are you sure you want to go there?

As for the rest, I agree with you; I am just not optimistic. And I think AI will emerge only in a simulated environment, that's all I am saying. Human consciousness is fundamentally different. That does not mean that I am a believer in some kind of, how do you say, pixie-dust hocus pocus. I agree with you.
Airplanes and flight are also an unhelpful example, as convenient as it appears. If you think that consciousness is a scientific term, then why are we even arguing? :)
 
You said "numbers".. As if real numbers... :)
Not really. Real numbers???
Are you sure you want to go there?
I'm saying there can be no artificial/real distinction for any information, including numbers. In other words the result from this simulation of a calculator is still just 4, not "artificial 4". Likewise, if some agent (human or AI) has awareness, it's never "artificial awareness".

As for the rest, I agree with you; I am just not optimistic. And I think AI will emerge only in a simulated environment, that's all I am saying. Human consciousness is fundamentally different. That does not mean that I am a believer in some kind of, how do you say, pixie-dust hocus pocus. I agree with you.

Airplanes and flight are also an unhelpful example, as convenient as it appears. If you think that consciousness is a scientific term, then why are we even arguing? :)

Ok, but I'm having a hard time spotting what it is that we agree on here. I don't see anything fundamentally different between what AI is capable of and any particular well-defined subset of brain behavior that someone chooses to label "consciousness".
 
You said "numbers".. As if real numbers... :)
Not really. Real numbers???
Are you sure you want to go there?

As for the rest, I agree with you; I am just not optimistic. And I think AI will emerge only in a simulated environment, that's all I am saying. Human consciousness is fundamentally different. That does not mean that I am a believer in some kind of, how do you say, pixie-dust hocus pocus. I agree with you.
Airplanes and flight are also an unhelpful example, as convenient as it appears. If you think that consciousness is a scientific term, then why are we even arguing? :)

From what?

Ninjaed by Pixy
 
Notice that architects use real calculators, not simulated calculators. The results of simulated calculators are the same, but not real, and therefore buildings made using their calculations would not be stable.

A complete computer simulation of a brain would behave exactly like a person, though if it reported the mystery of its conscious subjective experience, it would be lying (according to someone else on this thread who I don't feel like naming right now).
 
Notice that architects use real calculators, not simulated calculators. The results of simulated calculators are the same, but not real, and therefore buildings made using their calculations would not be stable.

A complete computer simulation of a brain would behave exactly like a person, though if it reported the mystery of its conscious subjective experience, it would be lying (according to someone else on this thread who I don't feel like naming right now).

The whole point of a simulation is about predicting physical evidence.
If the physical evidence does not match the simulation then the simulation is wrong.
What computationalists keep claiming is that when it comes to consciousness the simulation is the evidence.
Why should every other use of simulations in science require physical evidence for its support and the simulation of consciousness doesn't?

Arguing that the right simulation will predict physical evidence of consciousness is different than arguing the right simulation will be conscious.
 
The whole point of a simulation is about predicting physical evidence.
Not in this case. The point of this simulation would be to prove that consciousness can be achieved artificially.

What computationalists keep claiming is that when it comes to consciousness the simulation is the evidence.
If it achieves consciousness, then yes, it would be evidence that consciousness can be achieved artificially. What did you expect?
 
Not in this case. The point of this simulation would be to prove that consciousness can be achieved artificially.


If it achieves consciousness, then yes, it would be evidence that consciousness can be achieved artificially. What did you expect?
A simulation does not "achieve" something.
It predicts something.
For example, a simulation of water moving down a pipe does not "achieve" water moving down a pipe; it predicts how water moves down a pipe, if the simulation matches the physical evidence of water moving down a pipe.
What you are saying is that simulation is conscious if the simulation predicts consciousness.

By this logic the equation
6 CO2 + 12 H2O + photons → C6H12O6 + 6 O2 + 6 H2O
IS photosynthesis.

And a map is the territory.
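The predict-versus-achieve distinction above can be sketched in a few lines (the measured value is hypothetical; the formula is the standard Hagen-Poiseuille relation for laminar pipe flow): the simulation outputs a number, and the number is then judged against a physical measurement. At no point does the simulation produce any water.

```python
# Sketch of "simulation predicts, it does not achieve":
# the simulated pipe outputs a prediction, which is checked
# against a (hypothetical) lab measurement.
import math

def simulate_pipe_flow(pressure_drop, radius, length, viscosity):
    """Predict volumetric flow (m^3/s) via Hagen-Poiseuille:
    Q = pi * r^4 * dP / (8 * mu * L)."""
    return math.pi * radius**4 * pressure_drop / (8 * viscosity * length)

predicted = simulate_pipe_flow(pressure_drop=1000.0, radius=0.01,
                               length=1.0, viscosity=1.0e-3)

measured = 3.9e-3  # hypothetical lab measurement, m^3/s

# The simulation is judged by whether its prediction matches evidence:
print(abs(predicted - measured) / measured < 0.05)  # prints True
```

The same holds for the photosynthesis equation in the post above: it predicts the stoichiometry of the reaction; it does not fix any carbon.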
 