
Explain consciousness to the layman.

Why don't we just skip all that and go to an even simpler case: building a human.

Suppose we have an awesome machine that allows us to build humans from the ground up.
Starting with a very detailed blueprint ( perhaps DNA ? ) we just figure out where to place molecules and eventually we have a human, or at least a set of molecules that perfectly matches the human specified in the blueprint.

The human has been constructed by someone else ( us, the machine, and the guy who wrote the blueprint ).

Is this human somehow different from you and me? If so, why?




Conflating Science FICTION with reality again.....

No one built humans or animals.... no one will ever do so..... if we build something that for all appearances looks like an animal.... it won't be an animal.... it would be an EMULATION of an animal.

Animals were not built.... they EVOLVED over billions of years of physical material interaction with aeons of mutation curtailed by environmental constraints and millions of years of feedback loops of actions and reactions that were not directed by any engineer or CONSCIOUS DESIGNER with any plan or intent or goals.
 
Well, you'll have to explain that one.

Why isn't it, and if it isn't, then what's your definition for what is?

I am only aware of two people who hold the opinion that "information processing" is constantly going on in rocks: you and westprog.

Frankly that isn't a big enough crowd to bother with, especially given what I know about you two. It would be an utterly pointless exercise to try and explain it further.

The rest of the discussion participants know full well what is meant by "information processing" since it is obvious to most people how things like neurons and transistors -- stuff that makes up systems that process information -- are so different from everything else in the universe.

To say that there is nothing in common between integrating devices like neurons and transistors that isn't also shared by everything else is an absurd proposition, since we can't make computers out of anything besides integrating devices like transistors and all brains are made of integrating devices like neurons.

I understand you claim to want a definition, but I suspect any definition given is just going to be discounted because of your ulterior motives, and that what you really want is just another long drawn out post war where you get to rant about how stupid and incomprehensible the computationalist position is. So what is the point? Everyone else understands what is being talked about here.
 
Conflating Science FICTION with reality again.....

No one built humans or animals.... no one will ever do so..... if we build something that for all appearances looks like an animal.... it won't be an animal.... it would be an EMULATION of an animal.

Animals were not built.... they EVOLVED over billions of years of physical material interaction with millions of years of mutation curtailed by environmental constraints and millions of years of feedback loops of actions and reactions that were not directed by any engineer or CONSCIOUS DESIGNER with any plan or intent or goals.

Well if you would rather, I can just ask "how does something that starts as a single cell end up being conscious?"

If you want nature + growth to be the builder then fine -- but the result is the same.

You have a set of molecules that was constructed little by little according to a blueprint and it ends up being conscious.

I fail to see why adding to the set "little by little" is so fundamentally different than, say, making each half separately and just welding them together.
 
Weight, of course, depends on mass and gravity.

But what you seem to be saying here is that the addition of more molecules of ice to the mass causes a change in the interaction between the surface particles of the ice and the branch alone, which causes the particles in another part of the tree to sever their bonds.

No, not just those particles alone. I only focused on that set of particles because it is the most obvious -- if the surface of the ice bonded to the branch isn't exerting more force, the branch wouldn't break.

Obviously if a single extra ice molecule lands on the end of the branch, every particle interaction in the entire system is going to be slightly disturbed.

The fact is, though, you can model the system that's responsible for the breaking branch without any reference to the activity of any specific particles.

Correct.

You can also be 100% certain that if your model includes an accurate model of every single particle and all their interactions, you are modeling the system that is responsible for breaking the branch.

Do you disagree?
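A toy sketch of what I mean, if it helps. This is not real physics -- the breaking threshold and the particle masses below are made up purely for illustration -- but it shows the sense in which a macro-level model (total ice mass) and an exhaustive per-particle model give the same prediction about the branch:

```python
# Toy illustration only: the same "branch breaks" prediction can be reached
# from a macro-level description or by summing over every individual particle.
# All numbers here are invented for the example.

G = 9.81                   # gravitational acceleration, m/s^2
BREAKING_FORCE = 150.0     # hypothetical load (N) at which the branch snaps

def branch_breaks_macro(total_ice_mass_kg: float) -> bool:
    """Macro model: only the aggregate mass of the ice matters."""
    return total_ice_mass_kg * G > BREAKING_FORCE

def branch_breaks_micro(particle_masses_kg: list[float]) -> bool:
    """Micro model: sum the contribution of every single particle."""
    return sum(m * G for m in particle_masses_kg) > BREAKING_FORCE

particles = [0.0016] * 10_000                # ten thousand identical bits of ice
print(branch_breaks_macro(sum(particles)))   # True
print(branch_breaks_micro(particles))        # True -- same prediction
```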
 
Are you sure there is absolutely no self-referential activity going on in that rock?

Do not the particles in the rock get feedback from other particles which are getting feedback from them?

Is that not self-referential?

Please repost the definition of self-referential when it finally appears. I've been waiting a long time for it.

I'd like an example of any physical object that doesn't get physical feedback from its environment that defines itself.
 
Yeah, how long now have we been waiting for this Goldilocks definition of "information", which never seems to appear?

I've counted about seven undefined terms embedded in SRIP - and it's the precise definition of the terms which provides the explanation.
 
As has been explained, transistors and neurons are similar in certain ways, but extremely different in other ways (which is why we have different names for them) and we don't yet know which features of the brain are involved in the performance of experience.

If "everything going on in your brain when you're not dead" is "information", then we're back to stars, oceans, and piles of grass also being information processors.

So in that case "information" is not a particularly useful term, and certainly not one that draws any special relationship between conscious brains and the machines we call computers.

In certain ways, neurons and transistors are similar. In others, they are massively different. Neurons, for example, are made up of living cells, much like the pile of grass. Transistors are made up of contaminated silicon, much like the sand that the ocean washes up against. It's certainly not the case that they are more like each other than they are like anything else in the universe.
 
It's useless for studying consciousness, and I have a hard time picturing what other uses a definition of consciousness might be put to.

Helping support a particular philosophical agenda, for example.
 
There's something about this that is off. First, doing a thing and being able to do a thing are not the same thing, but you're confusing the two in the above paragraph. Second, we quite often perceive things without knowing that we perceive them.

We often describe perception that takes place without knowing that we perceive it as unconscious.
 
How does Dolly fit into this?

Dolly was made from living genetic material, much as you or I. The location where the mixing took place was slightly different.

I don't necessarily agree that artificial life would be in principle impossible. It's just that it's so much easier to start with living cells and change 'em around than to build one from scratch, so why bother?
 
Likewise, if one had the knowledge, but lacked faith in their knowledge, they might say to themselves "math says that for sure you can reach 92983928493948234829834983492384 by adding 1 to a number repeatedly, but that is such a big number -- maybe math is wrong when it comes to such big numbers?"
Coincidentally, I just watched a Horizon program about infinity that I had recorded, and there was a mathematician who was basically suggesting exactly that: that there is some largest number which rolls over to 0 when you add one to it. He said this number would be far larger than Graham's Number, but it appears that he decided that it exists based on his dislike ('repulsion') of infinity... (unless the program producers edited out some more mathematical explanation).
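For what it's worth, the "rolls over to 0" idea is exactly how fixed-width (modular) machine arithmetic behaves. A tiny sketch, purely as an analogy -- the 8-bit width is an arbitrary choice, and this is not a claim about what that mathematician actually had in mind:

```python
# Fixed-width (modular) arithmetic genuinely has a "largest number" that
# wraps back to zero when you add one -- here with 8-bit unsigned values.
WIDTH = 8
MODULUS = 2 ** WIDTH      # 256 distinct values: 0 .. 255
largest = MODULUS - 1     # 255, the biggest value that fits in 8 bits

print((largest + 1) % MODULUS)   # 0 -- adding one "rolls over" to zero
```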
 
Why don't we just skip all that and go to an even simpler case: building a human.

Suppose we have an awesome machine that allows us to build humans from the ground up.

Starting with a very detailed blueprint ( perhaps DNA ? ) we just figure out where to place molecules and eventually we have a human, or at least a set of molecules that perfectly matches the human specified in the blueprint.

The human has been constructed by someone else ( us, the machine, and the guy who wrote the blueprint ).

Is this human somehow different from you and me? If so, why?

I don't see any difference, no. So consciousness can be a property of something that gets constructed from a blueprint. I get that, and I get that that is essentially how we are in fact made, the designer being several hundred million years of evolution. So the fact that I had to build the toaster before it acquired its intelligence is no big deal. I agree with you on that.

What about the guy inside the sealed room with input and output slots for Chinese-only symbols? Or rather, what about the whole system, which happens to have a guy inside it but could actually have anything capable of receiving and passing out symbols according to rules? It would not be conscious in our thought experiment, but it could be indistinguishable to an outside observer from something that was conscious (like a Chinese-speaking human being).

Maybe the consciousness is in the programmer? Strike that for now. Just have a crack at the Chinese symbol-producing machine.
 
I am only aware of two people who hold the opinion that "information processing" is constantly going on in rocks: you and westprog.

Frankly that isn't a big enough crowd to bother with, especially given what I know about you two. It would be an utterly pointless exercise to try and explain it further.

The rest of the discussion participants know full well what is meant by "information processing" since it is obvious to most people how things like neurons and transistors -- stuff that makes up systems that process information -- are so different from everything else in the universe.

"It's obvious to most people" that God exists.

Do you really want to go there?

To say that there is nothing in common between integrating devices like neurons and transistors that isn't also shared by everything else is an absurd proposition, since we can't make computers out of anything besides integrating devices like transistors and all brains are made of integrating devices like neurons.

Didn't say that.

What I said was, there are differences, they are not the same.

If you want to invoke "information processing" as a link between brains and computers which allows us to draw conclusions from that link, you're going to have to say what it is.

The observation that we can't make computers without transistors, and transistors are in some ways similar to neurons, is not evidence that computers can be conscious.

As far as I can tell, you can either define information processing as the kind of thing computers (and abacuses, and some other objects) do for us, which is a symbolic process requiring both a machine and a programmer/reader, or you can define it as something which happens in every physical interaction in the universe.

If you have another definition which actually works, let's hear it.

I understand you claim to want a definition, but I suspect any definition given is just going to be discounted because of your ulterior motives, and that what you really want is just another long drawn out post war where you get to rant about how stupid and incomprehensible the computationalist position is. So what is the point? Everyone else understands what is being talked about here.

If everyone understands it, why can't it be articulated?

I also suspect that any definition you give will fail, but that is not because of any motive on my part (I have no dog in the race... computational literalism would be way cool if true!) but because your premises are flawed.

So you can justify your dodge any way you like, but we still haven't seen any coherent definition of "information processing" which makes the SRIP definition of consciousness parse out.
 
What about the guy inside the sealed room with input and output slots for Chinese-only symbols? Or rather, what about the whole system, which happens to have a guy inside it but could actually have anything capable of receiving and passing out symbols according to rules? It would not be conscious in our thought experiment, but it could be indistinguishable to an outside observer from something that was conscious (like a Chinese-speaking human being).

Maybe the consciousness is in the programmer? Strike that for now. Just have a crack at the Chinese symbol-producing machine.

No, the chinese room is not conscious and yes, from the outside it would be impossible to tell.

However this isn't relevant to anyone except philosophy professors. We know for a fact that programming a chinese room would take exponentially more resources than simply programming a machine that can learn meaning by itself. Furthermore, the programmers themselves certainly know the approach they took.

So if I program a robot to learn, and it comes back a year later and has a conversation in chinese with me, I can safely assume that it is not a chinese room. I can safely assume that it genuinely understands chinese because that is by far the lowest cost approach to having a turing equivalent conversation in chinese, especially since I didn't program it to be a chinese room.

Not only that, but if it talks about stuff like meeting this woman and falling in love with her, or how pretty the sunset was in some part of China on this one night, or being afraid that the Chinese government would put it in jail for speaking out about human rights abuses, or even that it almost committed suicide with the rest of the workers at foxconn, I think it is a safe assumption that it is conscious. How else would it be aware of such deep meaning behind the conversation? A lookup table? Nah.

This is the same kind of argument that people apply to evolution to explain how our consciousness arose in the first place. It is simply too expensive to have a gigantic lookup table, or some other type of super-organized structure, dictating behavior. It is far cheaper, and thus far more useful for an organism trying to survive in a dynamic world, to be able to learn-->understand-->react. If evolution had led to us being chinese rooms it would fly in the face of everything we know.
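A quick back-of-the-envelope sketch of that cost difference. The vocabulary size, utterance length, and number of turns below are made-up assumptions; the point is only how fast the exponent grows:

```python
# Rough sketch of why a pure lookup-table "chinese room" is astronomically
# expensive: it needs one canned reply per possible conversation history.
# All the numbers below are invented assumptions for illustration.

VOCABULARY = 3_000     # assumed working vocabulary of symbols
TURN_LENGTH = 20       # assumed symbols per utterance
TURNS = 10             # a short ten-turn conversation

histories = VOCABULARY ** (TURN_LENGTH * TURNS)   # distinct histories to cover
print(f"~10^{len(str(histories)) - 1} table entries")   # ~10^695
```

Even with far more modest assumptions the table dwarfs anything that could ever be built, which is the sense in which the learning approach is by far the cheaper one.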

Does that make sense?
 
"It's obvious to most people" that God exists.

Not the majority of people participating in this thread.

As far as I can tell, you can either define information processing as the kind of thing computers (and abacuses, and some other objects) do for us, which is a symbolic process requiring both a machine and a programmer/reader, or you can define it as something which happens in every physical interaction in the universe.

I think yy2bggggs is doing a good job of arguing with your notions in this context, so I don't need to add my 2 cents; it would just confuse the discussion.
 
You can also be 100% certain that if your model includes an accurate model of every single particle and all their interactions, you are modeling the system that is responsible for breaking the branch.

Do you disagree?

Well, you're certainly modeling it. The accuracy of the model is a different question.

What you want me to imagine is a system in which, simultaneously, all the particles move to new locations, and yet each particle behaves exactly as if it had not been moved.

It's very difficult (perhaps impossible) to imagine -- even given a magic machine -- what it would mean for those particles to be in another location if they are behaving in every respect as if they were not.

The thing is, if you press "pause", scatter the particles in the system -- let's stick with the ice on the branch -- keep them magically connected, and press "start" again, then since this is a physical and not a logical system there's no guarantee at all that it will behave the same.

How do you preserve macro-level phenomena such as weight, pressure, or spark?

I mean, with Einstein's elevator or Sagan's dragon, it's clear up front what it's about and where it's going.

Can't we devise a different thought experiment here, b/c I don't see where this one's headed, and I obviously don't agree with you about what the results would be anyway.
 
I think yy2bggggs is doing a good job of arguing with your notions in this context, so I don't need to add my 2 cents; it would just confuse the discussion.

Well, since yy2 defines a brain as a "symbol system" I'm afraid that's a dead end there.
 
If you have another definition which actually works, let's hear it.

If everyone understands it, why can't it be articulated?

I also suspect that any definition you give will fail, but that is not because of any motive on my part (I have no dog in the race... computational literalism would be way cool if true!) but because your premises are flawed.

So you can justify your dodge any way you like, but we still haven't seen any coherent definition of "information processing" which makes the SRIP definition of consciousness parse out.

I think that more time has been spent explaining that there's no point in giving the satisfactory definitions because of the motives of the people objecting to them than in just giving the definitions.

Even though some people might have a sinister agenda opposed to computer consciousness for some reason (probably something to do with God), there will be people reading this who have open minds, and presented with a precise definition of what, in SRIP, is meant by self, and referential, and self-referential, and information, and processing, and information processing, will realise that the computational analysis is obviously correct.
 
Can't we devise a different thought experiment here, b/c I don't see where this one's headed, and I obviously don't agree with you about what the results would be anyway.

No, I am done with the whole exercise.

I only wanted to point out your incorrect assertion that there are attributes of systems which don't reduce to the way the particles behave. There are no such attributes.
 