My take on why the study of consciousness may indeed not be so simple

What is most bizarre about this whole discussion is that the more I say that the aim should be to find a physical explanation for consciousness, the more people seem to interpret that as a denial that there can possibly be a physical explanation for consciousness.
Let's back up.

Paul C. Anagnostopoulos said:
Why would systematically replacing each of my neurons with a little computer not do the trick?

Why would it? I think that the burden of proof is on those making the claim. There has to be more to the argument than personal credulity.
Paul's point about replacing neurons with computers isn't contrary to physicalism. His question is why this type of physical explanation won't work.

If there is something special about biology, or something intrinsic to humans, that precludes computer-based consciousness, then you haven't yet identified it. To date, we only have a model that doesn't make biology requisite.
 
I have never claimed that there is a ghost in the machine. I've always said that it's for scientists to look for a physical explanation for consciousness.
But I think you need to be clear about just what it is that you believe needs an explanation.

We already have an explanation for the complex processing that we observe in connection with consciousness.

So what we would need to be clear about is, just what is the other thing that needs explaining?

(And note I am not saying that there is no other thing that needs explaining, just that if there is, then we need to be clear about what it is we are trying to explain before embarking on a quest for the answers).
 
David Chalmers is a case in point.

He refuses to state the problem clearly or give it any rigorous thought.

But instead he starts to categorise people who might think there isn't a problem according to letters of the alphabet.

That strategy seems to have worked well for him in terms of his career, but I don't think it is really a useful way of looking at the problem.

If there is one.
 
Let's back up.

Paul's point about replacing neurons with computers isn't contrary to physicalism. His question is why this type of physical explanation won't work.

If there is something special about biology, or something intrinsic to humans, that precludes computer-based consciousness, then you haven't yet identified it. To date, we only have a model that doesn't make biology requisite.

You might want to make it even simpler and ask whether or not westprog thinks a manufactured biological neural network is capable of consciousness.

That is, if we had the technology to build a human from scratch, would the thing be conscious? If westprog is unwilling to admit that we should be able to replace neurons with other neurons and retain consciousness then you shouldn't even bother with the idea of replacing them with non-neurons.

Note that when I asked this exact question, the answer I got was "we already can create humans from scratch, and they are conscious." Not a very straight answer, but you might have more luck.
 
I'm getting a serious sense of déjà vu here. I swear we've had this kind of conversation with other people who were expressing doubt that computers can replace neurons, implying that there must be more to it, but wouldn't come out with an explanation for their doubt.

Perhaps Westprog just has a nagging, nonspecific doubt. That's certainly allowed. But we've been snookered before, so we're suspicious.

~~ Paul
 
I'm getting a serious sense of déjà vu here. I swear we've had this kind of conversation with other people who were expressing doubt that computers can replace neurons, implying that there must be more to it, but wouldn't come out with an explanation for their doubt.
:) I don't know if this is in reference to me or not. Perhaps it's just my vanity. I came here a dualist and that was my position.

ETA: Thinking back, yeah, it's just my vanity. This discussion has happened hundreds of times in the many years since I first came here.
 
Reduction to vitalism: property X of system Y is special because of substance Z, which we cannot account for but which must be there, because otherwise system W would clearly have property X as well.
 
But we've been snookered before, so we're suspicious.

Yep.

That's why I say: ask whether building a brain out of neurons would result in consciousness.

I have a suspicion it isn't actually the neuron-transistor swap that holds people up, but rather a fundamental doubt that humans should be able to understand their own consciousness. Or in other words, whether or not such knowledge should be restricted to God.
 
Yep.

That's why I say: ask whether building a brain out of neurons would result in consciousness.

I have a suspicion it isn't actually the neuron-transistor swap that holds people up, but rather a fundamental doubt that humans should be able to understand their own consciousness. Or in other words, whether or not such knowledge should be restricted to God.
Well no, I have my doubts about that too, but it is nothing religious.

I can put into words my doubts that a computer running an algorithmic simulation of a brain would be conscious. Consider a massive computer that is running a program that simulates brain function and produces an emulation of human-like behaviour.

Now can I consider that the computer is conscious? If it is running an algorithm then you could theoretically desk check this algorithm using pencil and paper - although it might take billions of years just to check a half second of consciousness (and an inordinate amount of paper).

But we would not consider that there was a brief half second of consciousness that occurred stretched out over billions of years (or I wouldn't, anyway).

So the question is, why would running this process faster using silicon instead of pencil and paper make the difference?
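To make that concrete, here is a toy sketch of the kind of update step such a simulation would repeat endlessly (my own illustration with made-up numbers, not any real brain model). Every line is elementary arithmetic that you could, in principle, work through with pencil and paper:

Code:
# Toy leaky integrate-and-fire neuron update - illustrative only.
# Each time step is just multiplication, addition and a comparison;
# the silicon does nothing here that pencil and paper could not.

def step(v, inputs, weights, leak=0.9, threshold=1.0):
    """Advance one neuron by one time step; return (voltage, fired)."""
    v = leak * v + sum(w * x for w, x in zip(weights, inputs))
    if v >= threshold:
        return 0.0, True   # fire and reset
    return v, False

v = 0.0
for t in range(5):
    v, fired = step(v, inputs=[1, 0, 1], weights=[0.2, 0.5, 0.3])
    print(t, round(v, 3), fired)

A full brain simulation would just repeat this sort of step billions of times faster, which is exactly why I wonder whether speed alone can be what makes the difference.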

So at this point I also wonder why replacing a set of fast instruction-processing CPUs with a set of electronic components connected up like neurons in a brain would make a difference.

Then I wonder why replacing these transistors with meat neurons helps the process along.

So, no, I don't have a knock-down argument - but I think there are at least grounds for thinking it is possible that we are missing something.

As Paul says, time will tell. If very complex brain-like computers started producing consciousness-like behaviour, then I would move closer to functionalism.

If a computer simulating brain-like function could pass a simple primary-school comprehension test, then I think it would be close to game, set and match (and raise some interesting ethical problems).

But if sufficiently complex electronic simulations simply refused to produce consciousness-like behaviour, then we would at least have a framework for investigating why neurons behave one way in real life and another way in a simulation.
 
Now can I consider that the computer is conscious? If it is running an algorithm then you could theoretically desk check this algorithm using pencil and paper - although it might take billions of years just to check a half second of consciousness (and an inordinate amount of paper).
The Chinese Room Argument. I devised my own version once before I knew who Searle was. I don't think I could do the rebuttals justice. See Replies to the Chinese Room Argument.

I like Pinker's response best, as it most closely resembles my possible solution: fallible human intuition and speed.

4.5 The Intuition Reply said:
Steven Pinker (1997) also holds that Searle relies on untutored intuitions. Pinker endorses the Churchlands' (1990) counterexample of an analogous thought experiment of waving a magnet and not generating light, noting that this outcome would not disprove Maxwell's theory that light consists of electromagnetic waves. Pinker holds that the key issue is speed: “The thought experiment slows down the waves to a range to which we humans no longer see them as light. By trusting our intuitions in the thought experiment, we falsely conclude that rapid waves cannot be light either. Similarly, Searle has slowed down the mental computations to a range in which we humans no longer think of it as understanding (since understanding is ordinarily much faster)” (94–95).
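To put rough numbers on Pinker's analogy (my arithmetic, not part of the quote): a hand waving a magnet at around f ≈ 2 Hz generates an electromagnetic wave of wavelength

λ = c / f ≈ (3 × 10^8 m/s) / (2 Hz) ≈ 1.5 × 10^8 m

whereas visible light sits at roughly 4 to 7.5 × 10^14 Hz (λ ≈ 400-750 nm). The physics is the same, but the frequency is some fourteen orders of magnitude too low for us to recognise the result as light - which is precisely the trap Pinker says our intuitions fall into when computation is slowed down.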

That said, we may very well be missing something.
 
Well no, I have my doubts about that too, but it is nothing religious.

I can put into words my doubts that a computer running an algorithmic simulation of a brain would be conscious. Consider a massive computer that is running a program that simulates brain function and produces an emulation of human-like behaviour.

Now can I consider that the computer is conscious? If it is running an algorithm then you could theoretically desk check this algorithm using pencil and paper - although it might take billions of years just to check a half second of consciousness (and an inordinate amount of paper).

But we would not consider that there was a brief half second of consciousness that occurred stretched out over billions of years (or I wouldn't, anyway).

So the question is, why would running this process faster using silicon instead of pencil and paper make the difference?

So at this point I also wonder why replacing a set of fast instruction-processing CPUs with a set of electronic components connected up like neurons in a brain would make a difference.

Then I wonder why replacing these transistors with meat neurons helps the process along.

So, no, I don't have a knock-down argument - but I think there are at least grounds for thinking it is possible that we are missing something.

As Paul says, time will tell. If very complex brain-like computers started producing consciousness-like behaviour, then I would move closer to functionalism.

If a computer simulating brain-like function could pass a simple primary-school comprehension test, then I think it would be close to game, set and match (and raise some interesting ethical problems).

But if sufficiently complex electronic simulations simply refused to produce consciousness-like behaviour, then we would at least have a framework for investigating why neurons behave one way in real life and another way in a simulation.

Exactly.

When we define consciousness as the ideal result of a material process, then we end up with a material world defined ideally.

When we define consciousness as the material result of an ideal, then we end up with an ideal world defined materially.

We need to outgrow the limitations of our language.
 
Now can I consider that the computer is conscious? If it is running an algorithm then you could theoretically desk check this algorithm using pencil and paper - although it might take billions of years just to check a half second of consciousness (and an inordinate amount of paper).

But we would not consider that there was a brief half second of consciousness that occurred stretched out over billions of years (or I wouldn't, anyway).
I would. Definitely.

As Paul says, time will tell. If very complex brain-like computers started producing consciousness-like behaviour, then I would move closer to functionalism.

If a computer simulating brain-like function could pass a simple primary-school comprehension test, then I think it would be close to game, set and match (and raise some interesting ethical problems).

But if sufficiently complex electronic simulations simply refused to produce consciousness-like behaviour, then we would at least have a framework for investigating why neurons behave one way in real life and another way in a simulation.
My point here is that simple systems already display simple self-awareness. Why would there be anything to prevent complex systems from exhibiting complex self-awareness?
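By simple self-awareness I mean nothing fancier than a system that reads and acts on its own internal state. A minimal toy sketch (purely illustrative, my own example - not a claim about how brains do it):

Code:
# A thermostat that inspects its own state - a toy example of
# "simple self-awareness": the system senses itself, not just the world.

class Thermostat:
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint
        self.temperature = 15.0
        self.heater_on = False

    def sense_self(self):
        """Report the system's own internal state."""
        return {"temp": self.temperature, "heater": self.heater_on}

    def step(self):
        state = self.sense_self()  # consult its own state
        self.heater_on = state["temp"] < self.setpoint
        self.temperature += 0.5 if self.heater_on else -0.2

t = Thermostat()
for _ in range(3):
    t.step()
    print(t.sense_self())

Scale that reflexive loop up far enough and I see no principled barrier to complex self-awareness.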
 
When we define consciousness as the ideal result of a material process, then we end up with a material world defined ideally.
But we don't do that.

When we define consciousness as the material result of an ideal, then we end up with an ideal world defined materially.
We don't do that either.

Consciousness is a material result of a material process.

We need to outgrow the limitations of our language.
No, we just need to make sure our definitions actually mean something.
 
Why would there be anything to prevent complex systems from exhibiting complex self-awareness?

The point for me is not just the experience of self-awareness, but the ability to express this self-awareness so that other self-aware creatures find it meaningful.

Robin does not find half a second of consciousness spread over billions of years meaningful; you do.

So the question really is, what is meaning?
 
