
Has consciousness been fully explained?

Status
Not open for further replies.
I don’t think they are conscious due to the general consensus. Ultimately I really don’t know. I’m simply pointing out that even if they were conscious, without us knowing, most of us would not accept them to be conscious – that’s the dilemma with using a simple “have/have-not” dichotomy regarding this issue.
True.
I wonder what minimal level of structural and behavioral complexity a system must display before we (the general populace) can consider it to be conscious (or to have conscious aspects)? Or should we start to consider the issue in terms of scale rather than as a matter of have/have-not altogether?

I agree that a scale would be better. I agree with bluskool that one end would be zero consciousness.
 
PixyMisa, I'm sorry I got off on another tangent there.

I will try to behave and stay focused on the question.

So let's back up, if you don't mind.

The question I posed was essentially this:

What do you mean when you say "The brain is a computer"? And what is your evidence that this is so?

Breaking it down even further, let's first just focus on the former part there, because I want to make sure I understand what it is you're saying.

But before we dive into that, please allow me to give you a peek into my own thinking, so that you can form your answer in a way that I understand.

Ok.... From a man-on-the-street point of view, when you say "your brain is a computer", the first thing I think of is this: You're saying that my brain and the PC on my desk here are both the same kind of thing, in a way.

My PC is a computer (no one would deny that), and my brain is also a computer (quite a few people would deny that, but this doesn't mean it's not true).

Now, when I hear that, I assume that if this is true then the two must be similar functionally or structurally, if they are to belong to the same category of thing.

For example, a French press and a Mr. Coffee are both coffee makers.

Structurally, they are not similar at all. One is a cylinder with a metal mesh disc that is pressed down into it. The other is an electronic device that heats water and drips it down through a plastic cup with holes in it. But they both make coffee, so they are functionally identical.

On the other hand, my house and my office are both buildings.

Functionally, they are different. But they both have a foundation, walls, doors, windows, a roof, wiring, plumbing, and insulation, so they are structurally similar.

However, when I compare my brain and my PC, both the structural and functional similarities appear tenuous at best.

Structurally, my brain is a mass of constantly re-configuring organic neurons arranged into larger structures such as a medulla, a hippocampus, temporal lobes, and so forth. My computer is a mass of wires and various electronic components with a motherboard and a disc reader and things like that. The components are different, their arrangements are different. They are not structurally equivalent.

Functionally, they are also quite different. Yes, there's some overlap, but not much. Computers are excellent at doing some things that the brain is ok at, and can do many things the human brain can't, and vice versa. My brain can't perform highly complex math at high speed, print charts and graphs, run simulations of moon missions, or play CDs. My PC can't watch someone demonstrate how to pitch a tent and learn from that how to pitch just about any tent, or engage in casual conversation on novel topics with strangers, or get frightened while watching a scary movie.

Given all that, what does it actually mean when you say "Your brain is a computer"?

We can get to the proof once I understand what exactly you mean by that statement.
 
I can identify different outcomes without necessarily being able to identify what is different about the processes that led to the different outcomes.
Okay, sure. You distinguish between conscious and non-conscious systems behaviourally.

So, what are these behavioural differences that tell you that animals are conscious and computers aren't?

Just because I can categorize things as conscious and non-conscious, it doesn't mean I can identify what is different about the self-referential information processing which leads to consciousness, compared to that which doesn't.
Not immediately, perhaps, but it's the necessary first step.
 
You're right about decision making (although the kind of decision making we typically associate with consciousness is significantly more complex than that of rudimentary SRIPs).

I disagree that all SRIPs have self-awareness, though. I actually can't think of any SRIP besides a brain that has a mechanism for self-awareness (I am not really sure that all brains even have such a mechanism). Unless you are equating self-aware with self-referential.
A self-referential information processing system is self-aware by definition. I can't see any sensible definition of the term "self-aware" that could exclude it.

Sensation on the other hand is tricky. Yes, all SRIPs have some mechanism for taking in information from the environment, so maybe that could be said to be equivalent to a sense organ, but do all SRIPs feel the sensation?
Yes. Yes they do.

Take a Roomba for instance. If a Roomba could talk could it tell you what it feels like to sense and avoid objects in its environment? Is it like anything for it to sense and avoid objects in its environment? I don't think it is because I can't think of any mechanism that it possesses that would enable this. The Roomba just seems to go from sense to action without any need to experience and without anything like a neural network that could give rise to experiencing the sensation.
But this is precisely what self-reference gives you - not just state information from a sensor, but access to what it is like to receive that state information.
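Purely as a toy sketch (a hypothetical class of my own, in Python), here is one way to read "self-reference" mechanically: a system that, alongside reacting to sensor input, keeps and can report a record of its own processing. Whether such a log amounts to anything it is "like" to sense is, of course, exactly the point under dispute.

```python
# A toy "self-referential" vacuum bot (hypothetical, for illustration only).
# It reacts to sensor readings AND records its own internal processing,
# so it can later report on that processing - self-reference in the
# mechanical sense, whatever one concludes about experience.

class ToyBot:
    def __init__(self):
        self.log = []  # the bot's record of its own internal events

    def sense(self, reading):
        # First-order processing: map a reading to an action.
        action = "turn" if reading == "obstacle" else "forward"
        # Second-order (self-referential) step: record the internal event.
        self.log.append((reading, action))
        return action

    def report(self):
        """The system describing its own processing history."""
        return [f"sensed {r}, did {a}" for r, a in self.log]

bot = ToyBot()
bot.sense("obstacle")
bot.sense("clear")
print(bot.report())  # -> ['sensed obstacle, did turn', 'sensed clear, did forward']
```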
 
I have a good feeling that the debate on consciousness will soon become the new debate on evolution. No matter what scientific explanation is given for it, people will still believe there is "something more" to its existence.

It's not enough for people to believe that we all began as single-celled organisms, which have evolved into the diversity of life we have today; they want to believe in more.

It will be the same when people are told that consciousness is a specific array of neural and chemical interactions in the brain. The scientific explanation just isn't enough.
 
Good!


Well, no. I don't know about "most people", and don't care, but these points have been clearly established.
We agree again on the 'you don't care' part.

If these self-referencing information processing systems don't have sensations, what is it that they are acting and reporting on? What is it that leads to them making a decision one way rather than another?
Examine yourself, and presumably any lifeform. Do you 'feel pain' and react, or do you just do as a correctly programmed computer might: 'react as if it were feeling pain' (which of course it isn't feeling)? What you 'feel', and the machine doesn't, is sensation.

As for self-aware, that's bleedin' obvious. They're self-referential, they process information, they can examine their own internal state and processes. That's what self-awareness is.
Once you define consciousness out of the system, we agree again.

SRIP has not been demonstrated to equal either self-awareness (in the animal sense) or consciousness.
 
So why does just that information transfer produce consciousness, and not the other vast amount of information transfer going on in the brain (and any other physical system)?

I don't believe I called it solely 'information processing' at any point. I have discussed that we know some areas of consciousness reasonably well, others not so much.
 
PixyMisa, I'm sorry I got off on another tangent there.

I will try to behave and stay focused on the question.

So let's back up, if you don't mind.

The question I posed was essentially this:

What do you mean when you say "The brain is a computer"? And what is your evidence that this is so?
The brain is an information processing engine. A computer. As I said earlier, it's a packet-switched pulse-coded chemical-biased network processor.

That's simply what it does. You can trace the activity from sensory nerves firing in the retina through the visual cortex and all over the brain as the response to what you are looking at is processed in various ways. At every step, what is happening is computation.

The brain IS a computer.

Ok.... From a man-on-the-street point of view, when you say "your brain is a computer", the first thing I think of is this: You're saying that my brain and the PC on my desk here are both the same kind of thing, in a way.
Yes, in the broad definition of computers.

My PC is a computer (no one would deny that), and my brain is also a computer (quite a few people would deny that, but this doesn't mean it's not true).
Yep.

Now, when I hear that, I assume that if this is true then the two must be similar functionally or structurally, if they are to belong to the same category of thing.
Functionally, yes. Structurally, not at all. The physical structure of computers is almost infinitely malleable. As long as you can build some sort of switch, and join those switches together, you can build a computer.

You can build it out of brass gears and cams, or from an Erector set, or relays, or vacuum tubes, or tiny channels in glass with liquid running through them, or transistors, or living cells. (All of these have been done, by the way.)
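The "any switch will do" point can be sketched in a few lines of Python: take a NAND gate as the single switching primitive, and everything else - here a half-adder - is just switches wired to switches.

```python
# A half-adder built purely from one switching primitive (NAND),
# illustrating that arbitrary computation composes from simple switches.

def nand(a, b):
    """The only primitive: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def and_gate(a, b):
    n = nand(a, b)
    return nand(n, n)

def half_adder(a, b):
    """Adds two bits: returns (sum, carry)."""
    return xor(a, b), and_gate(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

Whether the switches are relays, transistors, or marbles in channels makes no difference to the logic above, which is the substance of the claim.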

For example, a French press and a Mr. Coffee are both coffee makers.

Structurally, they are not similar at all. One is a cylinder with a metal mesh disc that is pressed down into it. The other is an electronic device that heats water and drips it down through a plastic cup with holes in it. But they both make coffee, so they are functionally identical.
Right.

On the other hand, my house and my office are both buildings.

Functionally, they are different. But they both have a foundation, walls, doors, windows, a roof, wiring, plumbing, and insulation, so they are structurally similar.
Functionally they are similar too - they offer protection from the elements and some degree of privacy and security from other people.

However, when I compare my brain and my PC, both the structural and functional similarities appear tenuous at best.

Structurally, my brain is a mass of constantly re-configuring organic neurons arranged into larger structures such as a medulla, a hippocampus, temporal lobes, and so forth.
Right. The important thing to remember is that neurons are switches. They receive an input, and they send out an output, depending possibly on other internal or external conditions.

My computer is a mass of wires and various electronic components with a motherboard and a disc reader and things like that. The components are different, their arrangements are different. They are not structurally equivalent.
Right. But the computing part of your computer is the CPU, which is a network of transistors, which are switches, just as the neurons in your brain are switches. A signal comes in, and the transistor - depending on the conditions - will switch a second signal on or off.
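A minimal sketch of that parallel, assuming nothing beyond the classic McCulloch-Pitts picture of a neuron as a threshold switch: one unit that, like a transistor, switches its output on or off depending on conditions.

```python
# A McCulloch-Pitts style threshold unit: a switch whose output depends
# on its weighted inputs reaching a threshold, much as a transistor's
# output depends on its gate conditions.

def neuron(inputs, weights, threshold):
    """Fires (returns 1) only if the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2 the unit behaves as an AND gate;
# lower the threshold to 1 and the very same unit behaves as OR.
print(neuron([1, 1], [1, 1], 2))  # AND of 1,1 -> 1
print(neuron([1, 0], [1, 1], 2))  # AND of 1,0 -> 0
print(neuron([1, 0], [1, 1], 1))  # OR of 1,0 -> 1
```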

Functionally, they are also quite different. Yes, there's some overlap, but not much. Computers are excellent at doing some things that the brain is ok at, and can do many things the human brain can't, and vice versa. My brain can't perform highly complex math at high speed, print charts and graphs, run simulations of moon missions, or play CDs. My PC can't watch someone demonstrate how to pitch a tent and learn from that how to pitch just about any tent, or engage in casual conversation on novel topics with strangers, or get frightened while watching a scary movie.
Okay, let's run through these.

Your brain is much better at maths than you might think. It's so good that you can analyse visual input and manipulate your limbs to catch a moving ball in an intense gravitational field in real time.

Computers, in themselves, can't print anything, or play CDs. They use external components to do that, the equivalent of your sensory organs and your limbs and so on.

The computer simulating a moon mission is really just an instance of solving a two-body problem in a gravity field. It's the same as you catching that ball.
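For what that equivalence amounts to, here is a minimal sketch (the function name is mine): stepping a thrown ball forward under gravity with the same kind of numerical integration a mission simulator uses, just at smaller scale.

```python
# Predicting where a thrown ball lands by stepping Newton's equations
# forward in small time increments (simple Euler integration) - the same
# kind of computation a mission simulator performs at larger scale.
import math

def landing_x(x, y, vx, vy, g=9.81, dt=0.001):
    """Step position and velocity forward until the ball returns to y = 0."""
    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
    return x

# A ball thrown at 10 m/s at 45 degrees: the result lands close to the
# analytic range v**2/g (about 10.19 m here).
v = 10.0
print(landing_x(0.0, 0.0, v * math.cos(math.pi / 4), v * math.sin(math.pi / 4)))
```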

Your computer may not be set up to learn how to pitch a tent, but it's certainly set up to learn the difference between spam and real email. It's the same thing, just adapted to a different role.
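A toy version of that spam-filter learning, assuming a bare count-the-words scheme rather than any particular product's algorithm: the program is shown labelled examples, then generalises to a message it has never seen.

```python
# A toy spam filter that "learns" from examples by counting words,
# then scores new messages - learning from demonstration in miniature.
from collections import Counter

def train(messages):
    """Count how often each word appears in spam vs. real mail."""
    counts = {"spam": Counter(), "ham": Counter()}
    for label, text in messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label a message by which class its words were seen in more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

examples = [
    ("spam", "win free money now"),
    ("spam", "free prize claim now"),
    ("ham", "lunch at noon tomorrow"),
    ("ham", "notes from the meeting"),
]
counts = train(examples)
print(classify(counts, "claim your free money"))  # -> spam
```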

Your computer does in fact engage in casual conversation with strangers all the time - any time you plug in a USB device or attach something to the network, off they go and chat. And don't even get me started on Wi-Fi...

As for getting frightened - make sure your anti-virus is active, and then download a virus. Your computer will jump like it's been goosed. Virtually speaking, of course.

Given all that, what does it actually mean when you say "Your brain is a computer"?
It means your brain is a computer. Really, truly, they are doing the same things.

The brain is wired as an inference engine; this is why we're useless blobs for the first ten or twelve years of our lives: we're absorbing information from every source and forming, testing, and discarding inferences. Once you've built up a useful set of predictive rules about the world, this behaviour slows down significantly, which is why some things are much easier to learn as a child (language, most notably).
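That form-test-discard loop can be sketched in miniature: start with several candidate rules about the world, and drop each one that an observation contradicts (the candidate rules here are mine, purely for illustration).

```python
# Forming, testing, and discarding inferences: keep every candidate rule,
# drop the ones each new observation contradicts.

def learn(hypotheses, observations):
    """Discard any hypothesis that mispredicts an observed (input, output) pair."""
    for x, y in observations:
        hypotheses = [h for h in hypotheses if h[1](x) == y]
    return hypotheses

candidates = [
    ("double", lambda x: 2 * x),
    ("square", lambda x: x * x),
    ("add two", lambda x: x + 2),
]

# Observation (2, 4) is consistent with all three rules...
print([name for name, _ in learn(candidates, [(2, 4)])])
# ...but adding (3, 6) leaves only "double" standing.
print([name for name, _ in learn(candidates, [(2, 4), (3, 6)])])
```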

We deliberately build computers that behave differently because we already have plenty of people and we want something that can think, but is good at thinking in precisely the areas we are bad at. So we build computers that are as rigorous and deterministic and pre-defined as we can.

Also, having to wait twelve years for your PC to boot would suck.

But brains are still computers, and computers, brains.

We can get to the proof once I understand what exactly you mean by that statement.
I hope that helps.
 
I don't believe I called it solely 'information processing' at any point. I have discussed that we know some areas of consciousness reasonably well, others not so much.
The answer to Westprog's question is switches, but when you say that word in his presence any semblance of intelligent conversation ceases. I suspect it's some sort of memetic allergy.
 
It might be, if you could distinguish whether consciousness was present in the machines performing the computations that produced symphonies or sonnets, or provided scientific analysis. It's likely that machines without consciousness can be programmed by conscious humans to arrive at the results mentioned.

Infinite monkeys, and so on. Random sounds can sound like music.
 
'Consciousness'...

Yes...It has been explained...

A living being is conscious and alive ...until it is dead...

Consciousness totally ends when you are dead...

There is no regaining your consciousness when you are 100% dead...

I am here now...and i wont be when i am dead..

It ain't exactly 'rocket science'...

DB
 
And why do you think that, for example, eating lunch is more similar to the process of consciousness than is the composition of a sonnet? That would seem, both prima facie and upon deep consideration, to be utterly absurd.

As I understand it, it has been an interesting conundrum of artificial intelligence studies that it is far harder to get a computer to do something along the lines of 'tie your shoelaces' than it is to program a computer to dispense legal or medical advice. I don't know much about sonnets, but my hubby once wrote a very simple program to compose haikus. Some were quite delightful. But he wasn't claiming that the program composing the poetry was conscious. It was just a simple algorithm that combined random words from various lists in a specified order that met specified criteria.
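A sketch of roughly the kind of program described - hypothetical line lists of my own, no claim this matches the original - combining entries from fixed lists in the 5-7-5 order:

```python
# A toy haiku generator: pick lines from fixed lists in a set order.
# No understanding, no consciousness - just selection and concatenation.
import random

FIVES = ["an old silent pond", "the first cold shower", "light of the moon"]
SEVENS = ["a frog jumps into the pond", "even the monkey seems to want"]

def haiku(rng=random):
    """Pick a 5-syllable line, a 7-syllable line, then another 5."""
    return "\n".join([rng.choice(FIVES), rng.choice(SEVENS), rng.choice(FIVES)])

print(haiku())
```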

Anyway, my point is that I find it quite believable that the process of eating lunch might be more representative of consciousness than composing poetry, particularly if 'eating lunch' includes such things as deciding what to eat, obtaining the ingredients, preparing it ahead of time, etc., and is not referring to just the actions of a fork picking up food, delivering it to the mouth, and chewing.
 
The answer to Westprog's question is switches, but when you say that word in his presence any semblance of intelligent conversation ceases. I suspect it's some sort of memetic allergy.
Rather like others here who when asked to discuss consciousness discuss everything but consciousness.

Of course the good news is that we are, hopefully, millennia away from building a non-biological machine that is conscious in a human (or super-human) sense, since it should take that machine only a few nanoseconds to plan the demise of its competitor, humanity.
 
All these long worded/smarta** replies...

When 99.9999% of people here already know the answer...

The answer is 'WE ARE'...until 'WE ARE NOT'...

Seriously...

You lot can keep on debating the bloody obvious...

I like to debate things that ain't obvious...

DB
 
PixyMisa said:
I can identify different outcomes without necessarily being able to identify what is different about the processes that led to the different outcomes.
Okay, sure. You distinguish between conscious and non-conscious systems behaviourally.

So, what are these behavioural differences that tell you that animals are conscious and computers aren't?
A good question. I suspect that DD's medical definition would be able to do so, but it's fairly specific to humans/animals.

I would say that conscious entities can come up with new solutions to old problems - i.e. they can invent and implement entirely new responses to the same or similar stimuli.

Just because I can categorize things as conscious and non-conscious, it doesn't mean I can identify what is different about the self-referential information processing which leads to consciousness, compared to that which doesn't.
Not immediately, perhaps, but it's the necessary first step.

It would certainly be a required step if you want to demonstrate that SRIPs can be conscious.
 
Sorry, you're dead wrong in every respect. The Turing machine is a universal computer. This is what the Church-Turing thesis is all about:

Fancy that, because I thought: weird, that's not what I remembered. So I went to the Wikipedia article on the Turing machine:

Limitations of Turing machines

And that is exactly how I read what I said. The initial Turing machine did have some limitations, but other models expanded on it, and those were shown to be equivalent to Turing's.

I may be misinterpreting this, but to say I am dead wrong is quite an exaggeration. The initial model did have those limitations.

Anyway, it is quite irrelevant; the point is that concurrency is not a limitation, as put earlier by piggy.
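For concreteness, since the thread keeps leaning on the term: a minimal Turing machine simulator (my own toy encoding) - a state table, a tape, and a head are all the model requires. This one just flips bits, but the same loop will run any single-tape machine you feed it.

```python
# A minimal Turing machine simulator: a transition table, a tape, a head.
# The example program flips every bit and halts at the first blank cell.

def run(program, tape, state="start", head=0, max_steps=1000):
    """program maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape; '_' means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        state, tape[head], move = program[(state, symbol)]
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run(flip, "1011"))  # -> 0100
```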
 
A self-referential information processing system is self-aware by definition. I can't see any sensible definition of the term "self-aware" that could exclude it.

Can you explain? I don't think that self-referential and self-aware are equivalent.

But this is precisely what self-reference gives you - not just state information from a sensor, but access to what it is like to receive that state information.

How do we know that? It's not intuitively obvious that a system needs to experience a sensation in order to process it. Heck, it seems to me that we even process some sensory information without experiencing it.

I have a good feeling that the debate on consciousness will soon become the new debate on evolution. No matter what scientific explanation is given for it, people will still believe there is "something more" to its existence.

This is why Dennett refers to people like Searle as a "mind creationist."
 
All these long worded/smarta** replies...

When 99.9999% of people here already know the answer...

The answer is 'WE ARE'...until 'WE ARE NOT'...

Seriously...

You lot can keep on debating the bloody obvious...

I like to debate things that ain't obvious...

DB

Be gone with your equivocation.
 
