
Has consciousness been fully explained?

I call BS. Show me the physical definition of "information".

[ETA: Not that there isn't one; but I want to know how you can include "heat" as "information".]



I am not--and I think I'm not alone here--arguing that a computer has the same understanding of what's meaningful information as a human. Heck, two people quite often have different understanding about what's meaningful. What's your point?



What position are you arguing exactly? Sounds like you're ascribing subjective experience to the computer.

I think it is an idiomatic definition used only by Westprog; the common usages are:

1. Knowledge derived from study, experience, or instruction.
2. Knowledge of specific events or situations that has been gathered or received by communication; intelligence or news. See Synonyms at knowledge.
3. A collection of facts or data: statistical information.
4. The act of informing or the condition of being informed; communication of knowledge: Safety instructions are provided for the information of our passengers.
5. Computer Science: Processed, stored, or transmitted data.
6. A numerical measure of the uncertainty of an experimental outcome.
7. Law: A formal accusation of a crime made by a public officer rather than by grand jury indictment.
 
Well, except there is a known mechanism, and we know that machines have subjective experiences, both theoretically and behaviourally, and because we can look inside them and watch it happening.

I'm really not sure what more you want.

What you said would do just fine if it were true.

Does subjective experience just happen when a machine processes information? If that is your position, it sounds like a form of epiphenomenalism.
 
I think it is an idiomatic definition used only by Westprog; the common usages are:

1. Knowledge derived from study, experience, or instruction.
2. Knowledge of specific events or situations that has been gathered or received by communication; intelligence or news. See Synonyms at knowledge.
3. A collection of facts or data: statistical information.
4. The act of informing or the condition of being informed; communication of knowledge: Safety instructions are provided for the information of our passengers.
5. Computer Science: Processed, stored, or transmitted data.
6. A numerical measure of the uncertainty of an experimental outcome.
7. Law: A formal accusation of a crime made by a public officer rather than by grand jury indictment.

I did provide a link, you know. And it didn't mention grand juries.
 
It's always been claimed by the Strong AI advocates that it's the programs running in the computers that produce the consciousness, not the totality of the physical processes.
You misunderstand, or are splitting hairs too many ways.

Strong AI advocates might claim that the program-running-on-a-computer might be conscious/produce consciousness, but it's absurd (from any point of view) to assert that a program-not-running-on-a-computer would do the same.

In this case, the sum total of all the physical processes (including the power supply generating power, the transistors switching, the capacitors capacitating [sic]) all allow for the program to run, and therefore for consciousness to obtain.

It sounds like you might be misinterpreting the notion of substrate independence.

Do you agree that IF consciousness could arise in a non-human computer, the computer hardware and physical processes involved in its functioning are what allow it to happen?

It's never been claimed that arbitrary physical interactions produce consciousness, or can produce consciousness. I'm not constructing a strawman here.

Huh? Who said anything about arbitrary physical interactions?

More later. Gotta go for now.
 
But the things that a brain can do in principle are identical to the things that a computer can do in principle.

So if you accept that consciousness is produced by the brain, it necessarily follows that you accept that computers can be conscious.

I can accept that computers can be conscious. My addition to this argument is that computers cannot be conscious in the same way that humans are conscious because of the relationship between structure and function.

Computers are structured differently and so they function differently. This doesn't mean that they can't potentially function very similarly to a human, but the things a brain can do and the things a computer can do can't be identical. They can only be closely analogous.

I'm splitting hairs.

Technically, what a feline eye can do and what a human eye can do are not identical. Both are sensitive to light and translate luminous signal patterns into representations of the world, but neither of them does it in exactly the same way.

If a human were able to have cat eyes surgically implanted, would it radically change her? To be sure, it would change her, but I imagine she'd function in approximately the same way.

If a human were able to have computer eyes surgically implanted, would it radically change her? I bet the change would be about the same as with the cat eyes, but now imagine that the computer eyes don't just emulate human vision but also see infrared. It's such a little change, but it has a LOT of ramifications. Just think about how having infrared vision would affect your dating experiences: you'd instantly be able to tell if a person was turned on or not. :)

Now think about all the little changes between a human and an SAI android built from a mix of organic and inorganic parts. That both a human and a theoretical android can play chess or write a novel or understand a joke doesn't mean they're identical. Chimpanzees and human children can learn sign language, but that doesn't mean they're identical. All are conscious, but none are conscious in the same way, nor is their consciousness generated in the same way.

That's the key: if the consciousness is generated in a different way, it's essentially different, regardless of whether it can perform the same tasks.

Why discern between consciousnesses regardless of whether they can do the same things? Because to really understand something you have to understand it on its own terms.

When you treat a child like an adult, because they're functionally so similar to adults, you often run into problems. But even within the same species, at different stages of development you have different brain structures and therefore different functionality. The differences in functionality may be unnoticeable, or seem meaningless, until you run up against them, such as in the case of trying to teach abstract ideas to pre-pubescents with immature frontal cortices.

My girlfriend teaches 12- and 13-year-olds, and she's always running up against this problem and is forced to remember that they're not the same as adults.
 
You misunderstand, or are splitting hairs too many ways.

Strong AI advocates might claim that the program-running-on-a-computer might be conscious/produce consciousness, but it's absurd (from any point of view) to assert that a program-not-running-on-a-computer would do the same.

In this case, the sum total of all the physical processes (including the power supply generating power, the transistors switching, the capacitors capacitating [sic]) all allow for the program to run, and therefore for consciousness to obtain.

It sounds like you might be misinterpreting the notion of substrate independence.

Do you agree that IF consciousness could arise in a non-human computer, the computer hardware and physical processes involved in its functioning are what allow it to happen?

Yes - but the claim is that the experience produced by the program is exactly the same regardless of the substrate. The only effect the physical interactions of the computer have is to allow the program to run. (Incidentally, there are many other physical interactions taking place on the computer which do not enable the program to run. They are independent of its operation).

Since we can switch on the computer and run trivial programs without generating consciousness - even though the physical interactions are almost identical - and since we can generate identical consciousness by running the same program on an entirely different physical framework, it seems that the physical operations of the system are largely divorced from the generation of consciousness.

If Strong AI is a physical theory, then it needs to show the physical differences between the disposable operations and the critical operations.

Huh? Who said anything about arbitrary physical interactions?

More later. Gotta go for now.

It's a claim of Strong AI that any physical interactions will serve to implement the software which becomes conscious. Marking out rows of pebbles on a beach, or putting cards in slots, will produce the exact same conscious experience as running the program on a computer.
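
To make the claim concrete, here is a toy Python sketch of my own (the countdown "program" and the pebble encoding are invented for illustration, not taken from anyone's post): the program is defined as an abstract rule, and any substrate that can encode its states and follow the rule traverses the very same abstract sequence.

# The "program": an abstract transition rule, defined with no reference
# to what physically realizes a state. (Toy example, assumed.)
def step(n):
    return n - 1 if n > 0 else 0

# Substrate 1: an integer held in RAM.
state = 3
history1 = [state]
while state:
    state = step(state)
    history1.append(state)

# Substrate 2: a row of pebbles, modelled here as a list of tokens.
pebbles = ["pebble"] * 3
history2 = [len(pebbles)]
while pebbles:
    pebbles = ["pebble"] * step(len(pebbles))  # take one pebble away
    history2.append(len(pebbles))

print(history1 == history2)  # True: both run the abstract sequence 3, 2, 1, 0

Whether traversing that sequence amounts to the same experience is, of course, exactly what's in dispute.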
 
Here was my stab at definitions before when the question was asked as to the physical definition of a computation:

physical process: a deterministic or a random process where each state's measurement does not necessarily have a precise symbolic representation

computation: a deterministic process where each state's measurement can have a precise symbolic representation.
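
Here is a toy Python sketch of my own showing how those two definitions come apart (the state names and transition rules are invented for the example):

import random

# A computation: deterministic, and every state has an exact symbol.
TRANSITIONS = {"start": "fetch", "fetch": "add", "add": "halt"}

def run_computation(state="start"):
    trace = [state]
    while state != "halt":
        state = TRANSITIONS[state]  # exact, repeatable, symbolic
        trace.append(state)
    return trace

print(run_computation())  # ['start', 'fetch', 'add', 'halt'] on every run

# A physical process: its measured states need not have a precise
# symbolic representation; any digital reading truncates a continuous,
# noisy value.
def measure_voltage():
    return 1.5 + random.gauss(0, 0.01)

print(measure_voltage())  # a different real number on each run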
 
Your strawman is turning into a pile of straw-powder. No one is claiming that consciousness is information. Everyone agrees that consciousness is a process.

Ok, great. So maybe you can answer the question I put to PixyMisa: If the output of IP is information, and consciousness is not information, then how is it possible for IP alone to generate consciousness?
 
Ok, great. So maybe you can answer the question I put to PixyMisa: If the output of IP is information, and consciousness is not information, then how is it possible for IP alone to generate consciousness?

From what I understand, he is saying that consciousness is a process. What process? The one you are doing when you process information.

You are conscious when information is being processed, you are not when it is not.

I think consciousness is supposed to be a verb from his (and my) view.
 
You are conscious when information is being processed, you are not when it is not.

This is not accurate, though. Information is being processed all the time. We are conscious of some of it, not conscious of the rest.
 
I'll ask again: what is information processing an abstraction OF?

Btw, if you provide an example of IP, then the question should be easy to answer.

To tease out the examples I gave of other types of abstractions....

"My cousin marrying his girlfriend" is an abstraction of the entire set of phsyical motions involved in the ceremony. Or, you could say it's an abstraction only of the actual rites.

The Atlanta Braves winning a baseball game is an abstraction of the entire set of physical motions making up the game.

Let's take a situation in which you ask me "What's two plus two?" and I answer "Four."

"Adding two and two to make four" or simply "addition" is an abstraction of the chain of neural events by which sound waves are converted to neural signals which set off a chain reaction based on associations built up over time in my brain, strengthening certain patterns and weakening others, resulting in a cluster of interrelated patterns -- encompassing everything from (also abstractions, here) a desire to provide a correct answer to addition tables -- that ends up with impulses moving my speech organs.

We don't need to invoke "symbols" to describe how this works -- symbol-talk is just the only way to really deal with it if we're going to talk about it -- but it's an idealization, not a description of what the brain is actually doing physically, and there's no actual addition happening in OPR.

For 2+2=4 to occur nonabstractly in OPR, I'd have to take two rocks, for instance, and put them in a bucket, then put two more rocks in. That would be a true instance of real-world addition, even if nobody bothered to think, "Hey, I just added two and two to get four".
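
For what it's worth, the two levels can be put side by side in a toy Python sketch of my own (the bucket and rocks just restate the example above; nothing here is from anyone else's post):

# Concrete level: individual rocks going into a bucket, one act at a time.
bucket = []
bucket.append("rock")
bucket.append("rock")  # two rocks in
bucket.append("rock")
bucket.append("rock")  # two more rocks in

# Abstract level: "addition" is a description we lay over that history.
# Nothing labelled '+' occurred in the bucket; counting is our summary.
print(len(bucket))  # 4
print(2 + 2)        # 4 -- the symbolic idealization of the same outcome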
 
I can accept that computers can be conscious. My addition to this argument is that computers cannot be conscious in the same way that humans are conscious because of the relationship between structure and function.
Except that your addition is contradicted by the Church-Turing thesis.

That's the key: if the consciousness is generated in a different way, it's essentially different, regardless of whether it can perform the same tasks.
Again, it can be different, but it's mathematically proven that it can be the same.

My girlfriend teaches 12 and 13 year olds and she's always running up against this problem and is forced to remember that they're not the same as adults.
Hmm. By the age of 12-13, children are pretty much adults in the way they think; the last major change comes at the age of 10 to 12. They're just very young and inexperienced adults. Well, as far as I am aware of the literature; I'm just an interested layman here, not a neurologist or psychologist.
 
What you said would do just fine if it were true.

Does subjective experience just happen when a machine processes information?
Not the way I define it. As I said, experience requires self-reference, that is, you not only have to process the data, you have to process the processing of the data.
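
A toy Python sketch of my own of that two-level structure (the doubling operation is an arbitrary stand-in for "processing"):

def base_process(data):
    trace = []
    for x in data:
        y = x * 2             # first-order: processing the data
        trace.append((x, y))  # a record of that processing
    return trace

def meta_process(trace):
    # second-order: the input is the system's own prior activity
    return "doubled %d items; largest result %d" % (
        len(trace), max(y for _, y in trace))

print(meta_process(base_process([1, 4, 2])))
# Prints a report about the processing itself, not about the original data.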

If that is your position, it sounds like a form of epiphenomenalism.
No; experience is additional information, which clearly can have a causal effect - though that causal effect is not necessarily what it appears to be (as in Libet's experiments).

Pure epiphenomenalism makes no sense anyway.
 
Now, is entropy a form of information processing?

Interestingly enough, the mathematical equations describing entropy are the same as the mathematical equations describing information. A high entropy state (massive disorder) corresponds to high information content, while a low entropy state (everything very orderly and symmetrical) corresponds to low information content.
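
For reference, the shared form is easy to display (these are the standard textbook formulas, not something derived in this thread). In LaTeX:

H = -\sum_i p_i \log_2 p_i \quad \text{(Shannon entropy: information, in bits)}

S = -k_B \sum_i p_i \ln p_i \quad \text{(Gibbs entropy: thermodynamics)}

The functional form is identical; the two differ only by Boltzmann's constant k_B and the base of the logarithm.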

You are conscious when information is being processed, you are not when it is not.

I think consciousness is supposed to be a verb from his (and my) view.

Even if someone agrees with the idea that consciousness equates to SRIP (self-referential information processing), it seems fairly clear that not all SRIP produces consciousness. We are still left with the mystery of why brains produce consciousness and computers apparently don't. Claiming that it is the size/complexity of the brain compared to our current computers is one possible answer, but I don't think the matter can be considered solved at this point.
 
But Ramachandran is explaining self-awareness with mirror neurons.
No, he's explaining self-awareness with mirror neurons acting self-referentially.

The prime function of mirror neurons is, broadly, empathy - the whole point is that they fire when you engage in a particular behaviour, and also when you observe someone else engaging in a particular behaviour.

Ramachandran's point is that when mirror neurons are trained on your own brain, they produce the exact sort of self-reference we're looking for.

As he says:

Ramachandran said:
I claim no great originality for these ideas; they are part of the current zeitgeist. Any novelty derives from the manner in which I shall marshall the evidence from physiology and from our own work in neurology.

What he's saying is: we already know brains produce consciousness; it's quite clear that to do this the neural network has to loop back and examine its own activity. And these are the structures and neurons that I think are the proximate physical engine of this activity.
That is the sort of physical mechanism I am looking for. You are just saying that self-awareness is a property of SRIPs, and I am saying that only SRIPs with a mechanism akin to mirror neurons are self-aware.
And Ramachandran is saying the opposite - that it's self-reference that's the key, and the mirror neurons are the mechanism for the self-reference.
 
Even if someone agrees with the idea that consciousness equates to SRIP, it seems fairly clear that not all SRIP produces consciousness. We are still left with the mystery of why brains produce consciousness and computers apparently don't.
Not much of a mystery. They do.
 
While I think he gives too much credence to the notion of qualia (indeed, I think any credence at all is too much), this article highlights the fact that failure modes are often the key to finding out how systems work. And that the particular (and peculiar) failure modes of consciousness and self are slam-dunk proof that it's a product of brain activity.
 