Are You Conscious?

Are you conscious?

  • Of course, what a stupid question: 89 votes (61.8%)
  • Maybe: 40 votes (27.8%)
  • No: 15 votes (10.4%)
  • Total voters: 144
I can send virtual computers via the internet, LAN, or portable disk because they are software. I cannot do the same with computer hardware.

Irrelevant. When a virtual computer is running on a computer, the atoms of both are exactly the same atoms. They might be used differently by each, but nonetheless they are the same.



A simulation of a molecule, no matter how accurate, is just a representation. That's what makes it a simulation -- it's not the thing in itself. If one wants a physically efficacious molecule, at some point they are going to have to physically create an actual molecule. The same holds true for consciousness.

Clearly.

Luckily, every single simulation ever created is made of actual molecules.

I really don't understand why you don't get this -- do you think the simulated world of Grand Theft Auto IV is computed in some magical void and beamed to your Xbox via faerie-waves? Of course not -- it takes place on the actual molecules of the Xbox hardware.

My point is that unless it's actual consciousness it couldn't be said to KNOW anything. A simulated bucket can't hold water.

Aside from the glaring fallacy of circular reasoning you are committing -- who cares?

If you ask the simulated consciousness if it knows it is conscious, and it says yes, what more can you do?

Your proposal has a couple of fundamental flaws.

One: You presume the ability to produce simulated consciousness without knowledge of what actual consciousness is.

I know consciousness is form rather than substance, as you would say, and that is enough. Why? Because when you are genuinely unconscious, the substance of your brain is exactly the same. The only difference from when you are conscious is the form.

The only possible logically valid argument you can make is that there is something unique to the substance of a biological brain that allows the form of consciousness to arise only there. That is, that the form of consciousness might not be able to arise on silicon or any other substrate.

Of course, that argument is wrong as well -- we have the maths and science to prove it.

Two: You assume that simulation is the same as actualization.

No, I don't assume anything at all.

I know that if there is form instantiated upon some substance, and there is another form instantiated within that form instantiated upon that substance, then both forms are still forms and thus equivalent in being form rather than substance. Form is form, regardless of how nested or buried the form.
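
To make the nesting point concrete, here is a minimal sketch (purely illustrative; the tiny interpreter and its names are invented for the example): the same arithmetic is computed directly, and again one level of nesting down inside a small interpreter running on the same hardware.

Code:
import ast
import operator

# Illustrative only: a tiny interpreter, i.e. a "form" running within the
# "form" that is the Python runtime, itself running on physical hardware.
OPS = {ast.Add: operator.add, ast.Mult: operator.mul}

def tiny_eval(expr):
    """Evaluate a simple arithmetic expression string one level of nesting down."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(2 + 3 * 4)               # computed directly: 14
print(tiny_eval("2 + 3 * 4"))  # computed inside the nested interpreter: 14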

Your grand assumption is that consciousness is a thing that you can hold in your hands, rather than a pattern of behaviors of things. There is no evidence to support such an assumption other than your own ignorance of the issue.
 
I am not sure of the ethics of forwarding the position of a suspended person, but here goes anyway:
UndercoverElephant said:
FedUpWithFaith said:
Consciousness is a "kind" in itself.
And he knows this, how?
UndercoverElephant said:
FedUpWithFaith said:
Ironically, it is also worth pointing out that materialism merely assumes causation
Not so.

Materialism relies in no way upon the assumption of causation.

Many immaterialists rely on the concept; cosmological arguments, for example, rely entirely upon the assumption that causation is a fundamental metaphysical principle.

Even you sometimes rely upon the assumption, as I have often pointed out.

But Materialism does not, in any way, require the assumption.
I have a hypothesis: we can approach the truth, although we will never quite get there, and in order to do so we somehow have to apply both of these tests of truth at the same time. Without a correspondence element to truth, there is no difference between truth and belief. Without a coherency element to truth, you'll end up with lots of useful fragments but no coherent overall system. I suspect that it is only if you could arrive at a totally 100% coherent system which covers everything that you'd have a 100% correspondence with reality. IOW, there's only one coherent system, and it's the one that corresponds perfectly to reality.
You appear to be describing science.

But can you define the correspondence theory of truth? In most formulations it is logically equivalent to "something is true iff it is true".
 
You appear to be describing science.

Science is part of the process I am describing. Coherency certainly matters to science, and so does correspondence with external reality, but science can't ever produce a complete picture of that external reality, which means it can never arrive at a fully complete and coherent system.

For science "correspondence" means "correspondence with physical reality", but "physical reality" is not conceptually broad enough to include consciousness. It follows that scientific truths are only part of the whole picture that the model must correspond with. For "Truth" instead of "scientific truth" the correspondence has to be with the whole of reality, not just the sub-concept of physicality.

Put it another way: we would need both a scientific model and a metaphysical model and reality would have to correspond to both models. Science works. Materialism doesn't. It does not correspond to the whole of reality, only part of it. We have to accept something like materialism as a practical limitation on science, but reject it as part of the metaphysical model we want to correspond to reality.

When we are doing science then we must try to make our models correspond with a presumed external physical reality. When we are doing metaphysics then we are free to think of that external reality as being mathematical/informational. This may sound like it breaks the rule on coherency, but I don't think it actually does. Since we are embodied in the mathematical system ourselves, rather than having a "God's-eye view", we should not expect reality to appear to us as mathematical/informational, even if it is.
 
I can send virtual computers via the internet, LAN, or portable disk because they are software. I cannot do the same with computer hardware.

Irrelevant. When a virtual computer is running on a computer, the atoms of both are exactly the same atoms. They might be used differently by each, but nonetheless they are the same.

When I transfer software from one machine to another I am not transferring atoms. I'm transferring patterns.
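
For what it's worth, here is a minimal sketch of what "transferring a pattern" amounts to (the filenames are invented for the example): copying a virtual machine image moves no atoms at all, yet the byte pattern at the destination is verifiably identical to the original.

Code:
import hashlib
import shutil

# Hypothetical filenames, for illustration only.
SRC = "vm_image.qcow2"
DST = "copy_of_vm_image.qcow2"

def sha256_of(path):
    """Return the SHA-256 digest of a file's byte pattern."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

shutil.copyfile(SRC, DST)                # "send" the virtual machine: new atoms
assert sha256_of(SRC) == sha256_of(DST)  # same pattern; the pattern is what moved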


A simulation of a molecule, no matter how accurate, is just a representation. That's what makes it a simulation -- it's not the thing in itself. If one wants a physically efficacious molecule, at some point they are going to have to physically create an actual molecule. The same holds true for consciousness.

Clearly.

Luckily, every single simulation ever created is made of actual molecules.

I'm sorry but a computer simulation of uranium is neither uranium nor is it made of uranium.

I really don't understand why you don't get this -- do you think the simulated world of Grand Theft Auto IV is computed in some magical void and beamed to your Xbox via faerie-waves? Of course not -- it takes place on the actual molecules of the Xbox hardware.

I don't get your seeming inability to distinguish between abstraction and actuality. You point out yourself that the simulated world of GTA is simulated ON the molecules of the game console's hardware. Software is just a pattern, which is why it can be so easily transferred from platform to platform, device to device, substrate to substrate. The characters in Grand Theft Auto are not made of meat, nor are the virtual cars they drive made of metal -- they are just abstract representations drawn on hardware.


My point is that unless it's actual consciousness it couldn't be said to KNOW anything. A simulated bucket can't hold water.

Aside from the glaring fallacy of circular reasoning you are committing -- who cares?

Subjective experience is the quintessential feature of consciousness. If one is trying to simulate a brain and it physically lacks the capacity to generate subjective experience it CANNOT be said to know anything.

If you ask the simulated consciousness if it knows it is conscious, and it says yes, what more can you do?

If I write a program to respond to the text query "Are you conscious?" with "Yes", it doesn't follow that it's conscious or that it's even aware of the query.
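
For concreteness, a minimal sketch of the canned-response program described above (everything in it is invented for the example): it matches a string and returns a word, and nothing in it is aware of anything.

Code:
def reply(query):
    """Return a canned answer; there is no understanding behind it."""
    if query.strip().lower() == "are you conscious?":
        return "Yes"
    return "I don't understand."

print(reply("Are you conscious?"))  # prints "Yes" with nothing behind it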

Your proposal has a couple of fundamental flaws.

One: You presume the ability to produce simulated consciousness without knowledge of what actual consciousness is.

I know consciousness is form rather than substance, as you would say, and that is enough. Why? Because when you are genuinely unconscious, the substance of your brain is exactly the same. The only difference from when you are conscious is the form.

The only possible logically valid argument you can make is that there is something unique to the substance of a biological brain that allows the form of consciousness to arise only there. That is, that the form of consciousness might not be able to arise on silicon or any other substrate.

We know that conscious experience only occurs when the brain is in particular energetic states, indicating that whatever consciousness is, it's closely correlated with the biophysics specific to those brain states. Ergo, it's the physical properties of the substrate that are essential to producing consciousness. Until the fields of biophysics and neuroscience progress enough to provide us with an understanding of what those properties are, you've no grounds for claiming knowledge of how to artificially produce, or even simulate, consciousness.

Of course, that argument is wrong as well -- we have the maths and science to prove it.

Specifics please?

Two: You assume that simulation is the same as actualization.

No, I don't assume anything at all.

So is a simulated apple the same as an actual apple?

I know that if there is form instantiated upon some substance, and there is another form instantiated within that form instantiated upon that substance, then both forms are still forms and thus equivalent in being form rather than substance. Form is form, regardless of how nested or buried the form.

Your grand assumption is that consciousness is a thing that you can hold in your hands, rather than a pattern of behaviors of things. There is no evidence to support such an assumption other than your own ignorance of the issue.

I never said that consciousness is "a thing you can hold in your hands". I said that minds are physical objects, in themselves [in case you weren't aware, the vast majority of physical objects are not solid things you can hold in your hands]. I've also pointed out earlier in this, and other discussions, that whatever minds are, they are not made of the neural substrate but are something generated by it.
 
AkuManiMani said:
Subjective experiences are not numerical outputs -- they're *physical* products of brain activity. If one wants to produce consciousness they must first understand the *physics* of how the brain produces it. Hunting for the 'right' switching pattern won't cut it.

I don't know how to say any of this without sounding mean or at least demeaning, so I'm just going to say it...........

I'm having a really hard time understanding the rules of this game. You say that computation is not the answer, then imply that this involves hitting 'the right switch'. You call computation purely abstract (implying that it just spits out a number) and then tell me that we need a physical system while referring to what a computer does in terms of its physics ('switches').

I don't care what we call it – computation, information processing, whatever. It is realized in a physical system, so it potentially has causal properties. What is important is not that it is physical but that it is causal (of course something must be physical to be causal by our current definition of causal). And, I'm terribly sorry, but computers do have causal properties unless you want to tell me that what pops up on the screen is a mirage and my printer just makes it up as it goes along.

I know that there is a supposed critique of computationalism that consists of calling it 'a pure abstraction'. This critique is a straw man, however, because it simply defines computation as pure abstraction. It does not follow that computation performed in a physical system does not have causal properties. Just as in the brain, in a computer there are actual physical processes taking place unless you want to tell me that nothing actually occurs in a computer or that it is all spiritual in some sense.

If you want to argue that computers can't do it for other reasons, then that's fine; but this 'it's all just abstraction' argument is simply wrong. Searle uses the argument all the time; and he also argues that we can't find computation in nature. I'm sorry, but neurons just go right along summating inputs no matter what he says about this issue. If that is not computation, or information processing, then I don't know what is. And it occurs whether we know about it or not; we needn't define it as computation for it to do what it does.
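
As a minimal sketch of the 'summation' referred to above (a standard textbook abstraction, not a model of any real neuron, and the numbers are invented): a unit sums weighted inputs and fires when the total crosses a threshold.

Code:
def neuron(inputs, weights, threshold=1.0):
    """Sum weighted inputs and 'fire' (return 1) if the total crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron([1, 1, 0], [0.6, 0.6, 0.9]))  # 1: summed drive crosses threshold (1.2 >= 1.0)
print(neuron([1, 0, 0], [0.6, 0.6, 0.9]))  # 0: it does not (0.6 < 1.0)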

Motility, while necessary for survival, is not relevant to consciousness, per se.


This is actually a critically important issue. If you do not understand why, then there may be no sense in continuing a discussion.

First, there is the issue of simulating movement. The combinatorial explosion of necessary information to carry this out in the real world would probably require computing power on the order of the currently known universe. There are several feedback and feed-forward systems that our brain and body use not to mention the issues involved in dealing with other entities that appear to have conscious control. There are simply too many contingencies for a fixed system to be able to deal with. Such a simulation would be quite impossible in the real world barring some incredibly unaccountable breakthrough in computing science.

Second, I'm going to have to repeat the same issue as above: what exactly are the rules of this game we are playing? We are allowed to simulate the motor system but not sensorimotor integration? Who decided that? You guys keep arguing that simulations are not the real thing, but you then turn around and want to argue that some simulations are just fine as if that part of the situation (the motor system) is just not important? I'm sorry, but this can only arise from a serious misunderstanding of what the nervous system is and what it does. If you really think that motor output is not important to this question – and constant updating of motor planning and where the body is in space – then you have no hope of understanding how the brain 'creates' consciousness, let alone how we could duplicate it in another system.

Third, and this relates to a comment of yours directed at Pixy, if you think that neurochemicals have some special properties that allow them to do what they do, then I'm sorry to tell you that you really don't understand the nervous system or how it works. Dopamine does what it does not because it has some special properties to promote pleasure but because it just happens to be the neurotransmitter used in the ventromedial system. We have different neurotransmitters because it allows segregation of one system from another, not because the neurotransmitters carry any special properties. They are just chemicals; it is the system that does what needs to be done. Anything with the right receptor linked in the right way could do the same 'thing'. We don't see with our occipital cortex because it has some special properties of vision, but because it is hard-wired to the eyes. Link visual information to the auditory cortex and areas 41 and 42 will begin to see instead of hear (as long as sensory information from the ears goes elsewhere).

The only issue having to do with the 'properties of neurochemicals' is how to duplicate what metabotropic receptors are responsible for doing in the brain. That is a much harder engineering feat, though it is probably not too hard to accomplish.
 
In quick reply to FedUpWithFaith, who I am very sorry to see go..........

Yes, a causal account or any scientific account of consciousness will never cover every single aspect of consciousness because it won't be consciousness or experience.

But neither is any other scientific or causal account.

A causal account of running does not move. A causal account of digestion does not break down food.

Causal accounts explain. That is what the scientific study of consciousness is concerned with - an explanation.
 
AkuManiMani said:
Subjective experiences are not numerical outputs -- they're *physical* products of brain activity. If one wants to produce consciousness they must first understand the *physics* of how the brain produces it. Hunting for the 'right' switching pattern won't cut it.

I don't know how to say any of this without sounding mean or at least demeaning, so I'm just going to say it...........

LOL! I think I can take it ;)

I'm having a really hard time understanding the rules of this game. You say that computation is not the answer, then imply that this involves hitting 'the right switch'. You call computation purely abstract (implying that it just spits out a number) and then tell me that we need a physical system while referring to what a computer does in terms of its physics ('switches').

I don't care what we call it – computation, information processing, whatever. It is realized in a physical system, so it potentially has causal properties. What is important is not that it is physical but that it is causal (of course something must be physical to be causal by our current definition of causal). And, I'm terribly sorry, but computers do have causal properties unless you want to tell me that what pops up on the screen is a mirage and my printer just makes it up as it goes along.

I don't think you're understanding what I'm getting at. Let's say one has two computers: the first is one of those big hulking punch card machines from back in the day, and the other is a simple handheld calculator with an LCD display. Let's suppose that one uses them to solve the same arithmetic problem. One produces an answer in the form of holes punched in a card and the other displays numerical symbols on a liquid crystal display. Computationally they performed the same ops and produced the same numerical outputs [though they used different symbols], but physically they produced very different results via very different physical means.

Subjective experiences are PHYSICAL effects of brain activity just as magnetic fields are PHYSICAL effects of electrical currents. Do you get what I'm saying?
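
To illustrate the distinction being drawn (purely a sketch; the two "output devices" are invented for the example): the computation and its numerical output are identical, while the physical form of the result differs entirely.

Code:
def compute(a, b):
    """The shared computation: same operations, same numerical output on both machines."""
    return a + b

def lcd_display(n):
    """Render the result as digits on a screen."""
    return str(n)

def punch_card(n, width=8):
    """Render the same result as holes in a card: 'O' = hole, '.' = no hole."""
    return "".join("O" if bit == "1" else "." for bit in format(n, f"0{width}b"))

result = compute(19, 23)
print(lcd_display(result))  # "42"
print(punch_card(result))   # "..O.O.O." -- the same number, a very different physical trace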
 
Science is part of the process I am describing. Coherency certainly matters to science, and so does correspondence with external reality, but science can't ever produce a complete picture of that external reality, which means it can never arrive at a fully complete and coherent system.
But the question is, can anything arrive at a complete picture? And what does "external" mean?

External to what?
For science "correspondence" means "correspondence with physical reality"
Not so, as I always point out to you. In science correspondence means correspondence to repeatable quantifiable observation.

Hawking said that it is meaningless to ask if scientific models correspond to reality. Einstein said that to call something real was as meaningful as calling it cock-a-doodle-doo. Mach said that science would be the same even if we simply dreamed the world.

They were scientists weren't they?
but "physical reality" is not conceptually broad enough to include consciousness.
I don't see why not. But it is, in any case, irrelevant.
It follows that scientific truths are only part of the whole picture that the model must correspond with. For "Truth" instead of "scientific truth" the correspondence has to be with the whole of reality, not just the sub-concept of physicality.
If so you must define what this non-physical reality is and demonstrate that it exists.
Put it another way: we would need both a scientific model and a metaphysical model and reality would have to correspond to both models. Science works. Materialism doesn't.
That is only repeating your claim that Materialism is false; finding new ways of repeating a claim does not justify it.
It does not correspond to the whole of reality, only part of it. We have to accept something like materialism as a practical limitation on science, but reject it as part of the metaphysical model we want to correspond to reality.

When we are doing science then we must try to make our models correspond with a presumed external physical reality.
Again - external to what? I don't accept that there is an "external" and "internal". You have to define these terms and show that they are meaningful statements about reality.
When we are doing metaphysics then we are free to think of that external reality as being mathematical/informational.
As we are when we are doing science. The question is, what good do such conjectures do?

What can we explain using metaphysics and science that we can't explain with science alone?
 
Ergo, it's the physical properties of the substrate that are essential to producing consciousness. Until the fields of biophysics and neuroscience progress enough to provide us with an understanding of what those properties are, you've no grounds for claiming knowledge of how to artificially produce, or even simulate, consciousness.

Yay, it's so fun to ride the merry-go-round of ignorance!

Let's see if I can make this simple enough for you to get it:

1) Living neurons, in any number, with no activity don't give rise to consciousness.
2) Living neurons, in any number, with random activity don't give rise to consciousness.
3) Living neurons, in any number, with determined activity don't give rise to consciousness.
4) A highly complex network of living neurons featuring coordinated determined activity does give rise to consciousness.

It doesn't take a genius to see the difference between statement 4 and all the rest.

It does perhaps require a little education.
 
But the question is, can anything arrive at a complete picture?

Yes, but it would have no way of knowing for sure that it was a complete picture.

And what does "external" mean?

External to what?

External to the individual human mind which is engaged in a search for truth.

Not so, as I always point out to you. In science correspondence means correspondence to repeatable quantifiable observation.

Fine. The difference is irrelevant from my POV. "Correspondence" still means something different in metaphysics.

Hawking said that it is meaningless to ask if scientific models correspond to reality. Einstein said that to call something real was as meaningful as calling it cock-a-doodle-doo. Mach said that science would be the same even if we simply dreamed the world.

They were scientists weren't they?

And many other scientists have said different things. They were all scientists.

If so you must define what this non-physical reality is and demonstrate that it exists.

I can define it, but I can't demonstrate that it exists - not to a scientific standard of the meaning of "demonstrate". Why should I have to demonstrate that unobservable metaphysical entities exist?

I am taking into account information which I only arrived at via a process which involved inherently subjective factors which I cannot demonstrate to you. Only you could demonstrate them to you, and I can't help you much in that task.

Again - external to what? I don't accept that there is an "external" and "internal". You have to define these terms and show that they are meaningful statements about reality.

"Internal" means "my consciousness". Hard problem or no hard problem, the thing which is try to make a model which corresponds to external reality is located inside my head. That is, even though I am arguing that consciousness is about more than just brains, its just brains that are making the model - cognition is entirely brain-based. But there's two types of "external" to that. There's "external" in the sense that brains are located in a physical reality so we need a correspondence between a physical model and that physical reality. There's also "external" in the sense that physical reality is not the same as metaphysical or noumenal reality. So you end up with two sets of correspondences, a noumenal set and a sub-set of those which correspond to physical reality.

As we are when we are doing science. The question is, what good do such conjectures do?

Improve coherence.

What can we explain using metaphysics and science that we can't explain with science alone?

All sorts of things, including consciousness, morality, religion and art. We can provide holistic explanations instead of partial ones. "Holistic", for me, includes both science and metaphysics, as well as all the other things I just listed, but the rules for holistic reasoning in science are not the same as the rules for holistic reasoning on a wider scale, because on the wider scale we can also take account of inherently subjective things that science cannot deal with because of their inherently subjective nature.

In order for science to work, subjectivity needs to be minimised in order to achieve the most objective result. On a wider scale, subjective factors also need to be taken into account, even though the final goal is to be as objective as possible about the Whole System. The Whole System includes subjectivity.
 
This explanation suggests that only the new pathway is conscious - events in the old pathway, going through the colliculus and guiding the hand movement, can occur without you the person being conscious of it! Why? Why should one pathway alone or its computational style perhaps lead to conscious awareness, whereas neurons in a parallel part of the brain, the old pathway, can carry out even complex computations without being conscious? Why should any brain event be associated with conscious awareness given the "existence proof" that the old pathway through the colliculus can do its job perfectly well without being conscious? Why can't the rest of the brain do without consciousness? Why can't it all be blindsight, in other words?

We can't answer this question directly yet but as scientists the best we can do is to establish correlations and try and home in on the answer. We can make a list of all brain events that reach consciousness and a list of those brain events that don't. We can then compare the two lists and ask, is there a common denominator in each list that distinguishes it from the other? Is it only certain styles of computation that lead to consciousness? Or perhaps certain anatomical locations that are linked to being conscious? That's a tractable empirical question and once we have tackled that, it might get us closer to answering what the function of consciousness might be, if any, and why it evolved.

Vilayanur S. Ramachandran
This seems to me to be the sensible path, neither to regard consciousness as an intractable problem, nor to regard it as something already understood.
 
I don't think you're understanding what I'm getting at. Let's say one has two computers: the first is one of those big hulking punch card machines from back in the day, and the other is a simple handheld calculator with an LCD display. Let's suppose that one uses them to solve the same arithmetic problem. One produces an answer in the form of holes punched in a card and the other displays numerical symbols on a liquid crystal display. Computationally they performed the same ops and produced the same numerical outputs [though they used different symbols], but physically they produced very different results via very different physical means.

Subjective experiences are PHYSICAL effects of brain activity just as magnetic fields are PHYSICAL effects of electrical currents. Do you get what I'm saying?

I'm sorry, I assumed you were making a strong argument, but you're not.

If you are going to argue that only brains can do it, then you need an argument, not just your opinion, which is what the above amounts to. That different ways of performing the same operation in different systems produce results in different forms is totally beside the point. What is important when it comes to this sort of problem is that we duplicate what occurs, in basic essence, in the brain. That is the whole point behind current attempts at AI, the Blue Brain project in particular.

Your argument amounts to this: computation/information processing is not sufficient in and of itself. Well, everyone knows that. If a group of people do the computations necessary for what amounts to consciousness they will not produce consciousness because they don't do it in the right form -- it won't be causal and the time element (which is incredibly important for these processes) will be missing. It isn't computation or information processing but computation or information processing performed in a particular way that can produce consciousness.

Others have provided arguments as to why they think it is impossible for a computer to do it. The primary argument has to do with computers working in a purely abstract way -- which is clearly wrong.

The other is that what computers do -- if we label it computation or information processing -- is a purely observer dependent process rather than being intrinsic. This is true for most instances of computation and information processing, but there are also clear intrinsic instances of both computation and information processing -- namely when we do it.

When it comes to the nervous system, we see a clear example of summation in neurons constrained by a natural process -- natural selection. Natural selection essentially defines what neurons do as information processing. We design computers to mimic or duplicate that process -- so we define the information processing in a computer as natural selection did for our brains/spinal cords/etc. What is important is that a computer do it in a similar way and that it be causal.

Your argument does not amount to a reason why it is not possible for a computer to result in consciousness, so please do not pretend that it does.
 
This seems to me to be the sensible path, neither to regard consciousness as an intractable problem, nor to regard it as something already understood.

Yech! :(

Sorry, but this "sensible path" involves a version of neutral monism which would appear to be indistinguishable from materialism. I find Ramachandran to be a strange case. He's the only person I know who claims to be a neutral monist, but who also holds views which I consider to be necessarily materialistic and not neutral monist at all. I don't get it.
 
External to the individual human mind which is engaged in a search for truth.
That appears to involve a metaphysical assumption - the assumption of dualism.

My own view is that I am part of the environment I observe and that it is part of me - that there is no fundamental difference between the inside and the outside. I don't assume the view is true but it is at least plausible and so I cannot accept that "external" is a meaningful way of describing me and not me.
Fine. The difference is irrelevant from my POV. "Correspondence" still means something different in metaphysics.
So what does it mean in metaphysics?
And many other scientists have said different things. They were all scientists.
Exactly, so you can't take any one metaphysical view and say that it is the metaphysics of science. Science is metaphysically neutral, even if individual scientists are not.
I can define it,
So what is your definition of non-physical reality?
but I can't demonstrate that it exists - not to a scientific standard of the meaning of "demonstrate". Why should I have to demonstrate that unobservable metaphysical entities exist?
How is something going to help you get at the truth if you don't know whether that thing is real or imaginary?
"Internal" means "my consciousness".
so "internal" means your consciousness.

Which would make my consciousness external, wouldn't it?
Hard problem or no hard problem, the thing which is trying to make a model which corresponds to external reality is located inside my head. That is, even though I am arguing that consciousness is about more than just brains, it's just brains that are making the model - cognition is entirely brain-based. But there's two types of "external" to that. There's "external" in the sense that brains are located in a physical reality, so we need a correspondence between a physical model and that physical reality. There's also "external" in the sense that physical reality is not the same as metaphysical or noumenal reality. So you end up with two sets of correspondences: a noumenal set and a sub-set of those which correspond to physical reality.
But trying to reach correspondence between the model and "metaphysical or noumenal reality" would be a waste of time unless you could show the term to be meaningful.
Improve coherence.
So far it seems to be going in the opposite direction.
All sorts of things, including consciousness, morality, religion and art.
So what kind of an explanation do you think your method would provide of any of these things that would be different to a scientific explanation and by what method would it reach that explanation?
 
Yech! :(

Sorry, but this "sensible path" involves a version of neutral monism which would appear to be indistinguishable from materialism.
I agree, not with the yech, but that Ramachandran has simply stated a Materialistic position.
 
Yech! :(

Sorry, but this "sensible path" involves a version of neutral monism which would appear to be indistinguishable from materialism. I find Ramachandran to be a strange case. He's the only person I know who claims to be a neutral monist, but who also holds views which I consider to be necessarily materialistic and not neutral monist at all. I don't get it.
It's actually very simple.

The world behaves in a way entirely compatible with materialism. However, materialism presents a statement of what the world is - not just how it behaves - and that statement is unprovable.

A neutral monism is perfectly acceptable, but it is to all pragmatic intents and purposes identical to materialism, or wrong.

If you disagree with materialism on how the world behaves, if you disagree with naturalism, you're wrong.
 
I don't think you're understanding what I'm getting at. Let's say one has two computers: the first is one of those big hulking punch card machines from back in the day, and the other is a simple handheld calculator with an LCD display. Let's suppose that one uses them to solve the same arithmetic problem. One produces an answer in the form of holes punched in a card and the other displays numerical symbols on a liquid crystal display. Computationally they performed the same ops and produced the same numerical outputs [though they used different symbols], but physically they produced very different results via very different physical means.

Subjective experiences are PHYSICAL effects of brain activity just as magnetic fields are PHYSICAL effects of electrical currents. Do you get what I'm saying?
We get what you're saying. We always did, though I did check to make sure.

What we don't get is why you think - contrary to the Church-Turing-Deutsch thesis - that it can possibly be relevant.
 
AkuManiMani said:
Ergo, it's the physical properties of the substrate that are essential to producing consciousness. Until the fields of biophysics and neuroscience progress enough to provide us with an understanding of what those properties are, you've no grounds for claiming knowledge of how to artificially produce, or even simulate, consciousness.

Yay, it's so fun to ride the merry-go-round of ignorance!

Let's see if I can make this simple enough for you to get it:

1) Living neurons, in any number, with no activity don't give rise to consciousness.
2) Living neurons, in any number, with random activity don't give rise to consciousness.
3) Living neurons, in any number, with determined activity don't give rise to consciousness.
4) A highly complex network of living neurons featuring coordinated determined activity does give rise to consciousness.

It doesn't take a genius to see the difference between statement 4 and all the rest.

It does perhaps require a little education.

Now use that knowledge to design a device that experiences the sensation of "cold" when it's at room temperature or above.
 
I don't think you're understanding what I'm getting at. Let's say one has two computers: the first is one of those big hulking punch card machines from back in the day, and the other is a simple handheld calculator with an LCD display. Let's suppose that one uses them to solve the same arithmetic problem. One produces an answer in the form of holes punched in a card and the other displays numerical symbols on a liquid crystal display. Computationally they performed the same ops and produced the same numerical outputs [though they used different symbols], but physically they produced very different results via very different physical means.

Subjective experiences are PHYSICAL effects of brain activity just as magnetic fields are PHYSICAL effects of electrical currents. Do you get what I'm saying?

I'm sorry, I assumed you were making a strong argument, but you're not.

If you are going to argue that only brains can do it, then you need an argument, not just your opinion, which is what the above amounts to.

You're not understanding. I've never argued that it isn't possible for anything but our brains to produce consciousness. I said that until we understand the physics of how our brains produce consciousness [and the range of subjective experiences that come with it] we cannot design it into any artificial system.

That different ways of performing the same operation in different systems produce results in different forms is totally beside the point. What is important when it comes to this sort of problem is that we duplicate what occurs, in basic essence, in the brain. That is the whole point behind current attempts at AI, the Blue Brain project in particular.

The point of my example was to illustrate what I mean when I distinguish between physical effect and computational output.

Your argument amounts to this: computation/information processing is not sufficient in and of itself. Well, everyone knows that.

Judging from many of the posts here, I would have to say no. Not everyone "knows that".

If a group of people do the computations necessary for what amounts to consciousness they will not produce consciousness because they don't do it in the right form -- it won't be causal and the time element (which is incredibly important for these processes) will be missing. It isn't computation or information processing but computation or information processing performed in a particular way that can produce consciousness.

Others have provided arguments as to why they think it is impossible for a computer to do it. The primary argument has to do with computers working in a purely abstract way -- which is clearly wrong.

The other is that what computers do -- if we label it computation or information processing -- is a purely observer dependent process rather than being intrinsic. This is true for most instances of computation and information processing, but there are also clear intrinsic instances of both computation and information processing -- namely when we do it.

When it comes to the nervous system, we see a clear example of summation in neurons constrained by a natural process -- natural selection. Natural selection essentially defines what neurons do as information processing. We design computers to mimic or duplicate that process -- so we define the information processing in a computer as natural selection did for our brains/spinal cords/etc. What is important is that a computer do it in a similar way and that it be causal.

Your argument does not amount to a reason why it is not possible for a computer to result in consciousness, so please do not pretend that it does.

If you've been following my posting history you would know that I've NEVER argued that it's not possible to create conscious computers; after all, our brains are examples of such computers. My argument is, and always has been, that consciousness qua consciousness is -not- a computation.
 
