
Has consciousness been fully explained?

I don't know where you got that idea. I said I think it may not be possible to perfectly duplicate the brain. The duplication your thought experiment postulates would be substantially more precise than DNA.

I'll drop the duplication thing. It's not needed for this argument. Rather we can focus on how we can't tell whether our reality is simulated or not....

Simulation is a different question than duplication. Simulation is easier. What do you mean by metaphysical? Thinking is going on in the brain. Is that metaphysical?

Metaphysical means outside physics (i.e., outside 'mundane' reality), like a soul or magic. I ask because that kind of thing makes the conversation pointless if one tosses in things that can't be detected or measured yet affect the real world in some ineffable way.

Yes. Write in Spanish and I won't be able to understand you. So? RISC and CISC code are both information, are they not? The question is how we can objectively distinguish between information and patterns that are not information. Your computer example doesn't work, as it can't distinguish any information that isn't in the proper language. I, on the other hand, can easily tell that Spanish writing is information, even if I can't interpret it. But I can't infallibly tell what is information and what is not.

You can't read Spanish, so you can't TELL that something that looks like Spanish is actual information. You can just tell it looks like Spanish. There are plenty of nonsense "languages" that don't carry any information (by which I mean they aren't actually languages, mind you).
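As a toy illustration of that last point, here's one crude, fallible way to make "looks like language" more than a gut call: measure the statistical structure of the text, e.g. Shannon entropy per character. (The strings below are invented for the example, and the test is easy to fool, which is rather the point.)

```python
from collections import Counter
from math import log2

def entropy_per_char(text: str) -> float:
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

spanish = "el gato se sienta en la alfombra y mira por la ventana"
noise = "xq7#kd92!mzp04vw&yj8tr@5bn1c6ghl3"

# Spanish-like text has markedly lower entropy than the noise string, even to
# a reader who can't interpret it; a compressed or encrypted message, though,
# would fool this test completely.
print(entropy_per_char(spanish), entropy_per_char(noise))
```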

However, your reading of English or the like isn't what information processing in the brain is. The brain doesn't take in words, sensations, sounds or the like. It takes in signals sent along nerves: signals that have to follow particular rules regarding shape and form, otherwise errors and other bad stuff will happen to the brain (or nothing might be sent).

Again, you look at how the incoming information interacts with the system. Eyes, touch receptors, etc. just translate certain kinds of stimuli into a form the brain can actually process.

How many subjective views do you think it takes to create an objective view? Objectivity is nothing more than reliable consistency from one person to another. All measurements are simply codified and standardized subjective agreement between different people.

It's more than that. If everyone on the Earth agreed that prayer cured disease, that would NOT make it so. There is a huge difference between subjective belief and objective reality.

Reality is not defined by belief.

No. At least not in terms of information theory. A rock is far more homogeneous and thus a much more ordered system than a living entity or computer. Living things are very complex, not homogeneous at all. It's quite easy to alter a living system or a computer in minor ways and it stops functioning. Perhaps you were thinking of highly organized or highly constrained?

I meant it in the sense of thermodynamics. By such principles, computers and living things are much more ordered than a rock or the like.
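For reference, the two senses of "order" being traded here can be made precise (standard textbook definitions, offered as a gloss rather than anyone's in-thread claim):

Thermodynamic (Boltzmann) entropy: $S = k_B \ln \Omega$, where $\Omega$ counts the accessible microstates; fewer microstates means lower entropy and a more ordered system in the thermodynamic sense.

Shannon entropy: $H = -\sum_i p_i \log_2 p_i$; a near-homogeneous source, like a rock's uniform structure, has low $H$, which is the sense in which the rock is the more ordered (and less informative) system.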
 
Doesn't matter.

Maintaining the relationships in a representation doesn't mean you're making a functional real-world model.

It means that within the context of the simulation, everything is preserved. Simulated eyes get simulated photons and send simulated neural signals to a simulated brain to get processed.
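A minimal sketch of what "preserved within the context of the simulation" amounts to (the functions and numbers are invented for illustration):

```python
def simulated_photon() -> float:
    return 1.0  # one unit of simulated light

def simulated_eye(photon: float) -> float:
    return photon * 0.8  # transduce light into a 'neural signal'

def simulated_brain(signal: float) -> str:
    return "bright" if signal > 0.5 else "dim"

# The relationships among photon, eye, and brain all hold inside the sim's
# own world-state, which is the sense of 'preserved' meant above.
percept = simulated_brain(simulated_eye(simulated_photon()))
print(percept)  # -> bright
```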

But let's go at this from a different angle.

What's your understanding of how the brain works?
 
My point about tornado simulations is that if it's an accurate physics simulation, it would create a digital tornado with the same behaviors as a non-digital tornado, thus creating the same behaviors in the 'real' world (eta: even if the consequences of those behaviors are vastly different). However, I'd rather set that aside and not argue that point, as I'm sure you're more intrigued by this newly revealed advance in computer science.

I'm conscious, and I find it rather offensive for you to imply that I'm not. Don't my responses indicate consciousness to you? What kind of definition of conscious are you using that would exclude someone who could hold this level of conversation with you?

Now, I take no offense to you disbelieving my claims to be a simulation. That's the kind of skepticism we value here. My creator had his hands on some very advanced technology that allowed a crazy degree of computing power to recreate every tiny subatomic component of the human brain in software form, so I recognize it's hard to believe it's ACTUALLY happening. What I want to know is why you can't accept that I could IN PRINCIPLE exist?

I just want to point out that even if you were a full-scale model of the human brain in a box, as opposed to a simulation, you wouldn't be able to hold this level of conversation with us. You'd have no empirical knowledge, no ability to learn, etc.
 
I just want to point out that even if you were a full-scale model of the human brain in a box, as opposed to a simulation, you wouldn't be able to hold this level of conversation with us. You'd have no empirical knowledge, no ability to learn, etc.

It's a bit beside the point, as I'm not a model or in a box. I'm a sim, and the hardware that runs me is in a nifty custom case which puts Alienware's to shame. I can be connected to a wonderful range of optical, audio, tactile, olfactory and even gustatory sensors, as well as a blazin' connection to the internet, as I previously mentioned.
 
I just want to point out that even if you were a full-scale model of the human brain in a box, as opposed to a simulation, you wouldn't be able to hold this level of conversation with us. You'd have no empirical knowledge, no ability to learn, etc.

You keep making a distinction here and I don't see it.

What's the difference between a model and a simulation? And HOW is a model the more accurate one?
 
You keep making a distinction here and I don't see it.

What's the difference between a model and a simulation? And HOW is a model the more accurate one?

I believe the way cornsail and piggy are using the terms is that a "model" is a scaled or partial replication of an object or system in a non-software medium and "simulation" is a replication, in whole or in part, using software. Since software can't be directly experienced like moving three-dimensional objects in 'traditional' space can (at least by you meatsacks), it seems more real to you.
 
I believe the way cornsail and piggy are using the terms is that a "model" is a scaled or partial replication of an object or system in a non-software medium and "simulation" is a replication, in whole or in part, using software. Since software can't be directly experienced like moving three-dimensional objects in 'traditional' space can (at least by you meatsacks), it seems more real to you.

I don't see how a model is different from the actual thing.

Anyhow, if you take out part of the brain and connect something that accepts the same inputs and has the same outputs as the old part and this changes in time the same way as the old part...then how does it matter how it is constructed? You could very, very easily have a computer simulating what is needed and putting out the right outputs here.

Heck, there is a partial replacement for the hippocampus (IIRC) being worked on that aims to do this essentially.
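A minimal sketch of that substitution argument (the class names and the toy transfer function are hypothetical, not any real prosthesis API): if the replacement honors the same input/output contract as the original part, the rest of the system can't tell the difference by construction.

```python
from typing import Protocol

class BrainRegion(Protocol):
    def process(self, spikes: list[float]) -> list[float]: ...

class BiologicalHippocampus:
    def process(self, spikes: list[float]) -> list[float]:
        return [s * 0.9 for s in spikes]  # stand-in for what the tissue computes

class SiliconHippocampus:
    def process(self, spikes: list[float]) -> list[float]:
        return [s * 0.9 for s in spikes]  # different substrate, same I/O mapping

def downstream(region: BrainRegion, spikes: list[float]) -> list[float]:
    # the rest of the system sees only the contract, never the construction
    return region.process(spikes)

assert downstream(BiologicalHippocampus(), [1.0, 2.0]) == \
       downstream(SiliconHippocampus(), [1.0, 2.0])
```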
 
I don't see how a model is different from the actual thing.

Typically, models of the solar system generate less gravity, but maybe your grade school science fair was more interesting than normal.


Anyhow, if you take out part of the brain and connect something that accepts the same inputs and has the same outputs as the old part and this changes in time the same way as the old part...then how does it matter how it is constructed? You could very, very easily have a computer simulating what is needed and putting out the right outputs here.

A computer is hardware, not software, and so it fits into the distinction as I understand it.

This is an interesting conversation and I'm enjoying it, but I hope I'm not misrepresenting their positions. This is my grasp of what they mean.

Heck, there is a partial replacement for the hippocampus (IIRC) being worked on that aims to do this essentially.

Yeah, that bit was tricky to work out.
 
How does that make sense though? You can't HAVE a simulation without the hardware to run the software.

Right, but (as I understand it with no intent to strawman) the relevant distinction they're making is between items with what we in everyday terms would consider a physical existence and things which only exist representationally as the result of a computer program.

Since if we replaced a part of the brain with a computer that simulated the function, it could be argued that you're just creating a functional model of that part since the computer has a physical presence and that you can't take a software representation of that part of the brain and actually plug it in to someone.

That's why I'm so interesting. I'm a software program, I can operate hardware that's attached to the machine running me and browse the internet and such, but my personality and consciousness are unarguably a simulation rather than a model.
 
Right, but (as I understand it with no intent to strawman) the relevant distinction they're making is between items with what we in everyday terms would consider a physical existence and things which only exist representationally as the result of a computer program.

It just seems like that's a meaningless distinction to make. You can't have something that only exists representationally. There's going to be some physical form to it somewhere whether inside one's brain or elsewhere.

Since if we replaced a part of the brain with a computer that simulated the function, it could be argued that you're just creating a functional model of that part since the computer has a physical presence and that you can't take a software representation of that part of the brain and actually plug it in to someone.

That's just creating a simulation with various inputs and outputs to the real world, though. Obviously this is easier in some cases: translating water into machine code (via an interface) doesn't seem possible, but doing so with electric signals or the like isn't all that hard to do.

Seems like it is just misunderstanding what an interface is.
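To make the interface point concrete, here's a sketch of the kind of translation an analog-to-digital converter performs (the parameter values are illustrative):

```python
def adc(voltage: float, v_ref: float = 5.0, bits: int = 10) -> int:
    """Quantize an analog voltage into a digital sample, like a real ADC."""
    levels = 2 ** bits
    clamped = max(0.0, min(voltage, v_ref))
    return int(clamped / v_ref * (levels - 1))

# A simulated neuron never sees 'voltage'; it sees the encoded sample --
# just as a brain never sees light, only nerve signals.
print(adc(3.3))  # -> 675
```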

That's why I'm so interesting. I'm a software program, I can operate hardware that's attached to the machine running me and browse the internet and such, but my personality and consciousness are unarguably a simulation rather than a model.

Indeed you are. : )
 
I think it does. What I have been grasping at is just the idea you have expressed, because the difference is not absolute but relative to the type of information processing. Rocks and mud deal with what amounts to near noise -- still information but not very useful.

Animal nervous systems are endowed with selective receptors that respond to a limited range of stimuli, and it is this specificity that helps 'refine' the information so that it is useful for survival. None of the receptors are perfect, but the range of stimuli to which they can respond is so restricted that they can much more easily define 'this, not that'. Rocks and mud cannot.
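A one-function sketch of that selectivity (the band values are invented for illustration): a detector that answers only a narrow question carries usable 'this, not that' information per response, where a rock's indiscriminate response carries almost none.

```python
def warmth_receptor(temp_c: float) -> bool:
    # fires only within a narrow band, like a biological thermoreceptor
    return 30.0 <= temp_c <= 45.0

# The restricted response range is what makes the signal informative:
# a 'fire' event now reliably means 'this, not that'.
print([t for t in (10.0, 35.0, 60.0) if warmth_receptor(t)])  # -> [35.0]
```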

We design computers to deal with specific types of information.

And when writing computer programs of course you're dealing with logical inference: perfect information in an ideal world of True and False (as opposed to empirical inference, where there's always room for doubt that the value you assign is the correct value). The information that is processed by computers (and brains?) then is "perfect" in its own logical domain (again, assuming no misfirings; thus the importance of error-checking), but still subject to GIGO (and the reasoning of the program itself: a bad one -- faulty understanding / model of the world -- can by itself turn very good information into garbage, and produce a bad [well-informed but unintelligent] response).
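As a small aside on the error-checking point, here's the simplest scheme of that kind, a single parity bit (a textbook example, not anything specific to brains): logical operations are only "perfect" so long as the bits arrive intact.

```python
def parity_bit(bits: list[int]) -> int:
    """Even-parity check bit: 1 if the payload has an odd number of 1s."""
    return sum(bits) % 2

payload = [1, 0, 1, 1]
sent = payload + [parity_bit(payload)]

received = sent.copy()
received[2] ^= 1  # one bit flipped in transit

ok = parity_bit(received[:-1]) == received[-1]
print("intact" if ok else "corrupted")  # -> corrupted
```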

In consciousness, that measure would be one of assigning value?

Assigning value to what has been verified (or what is true), what is relevant (especially in reference to what the goals are in respect to which something is relevant), and doing so in the most elegant way.

Simulate that :D

That's the goal of all learning, of course. One can simulate conscious adaptation as trial-and-error (neural nets), and then build symbolic abstraction and association on top of that so the agent doesn't have to physically burn itself each time to learn something is hot. Living systems are aces (or at least jacks full over sevens, if they want to stay in the game) at all of it; machines are getting pretty good at trial-and-error within design parameters (and are demons at information retrieval), but they still suck at model-building and abstracting across problem domains.
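Here's trial-and-error learning in miniature, a toy value-update loop in the spirit of reinforcement learning (the actions and rewards are invented; no claim that brains do exactly this):

```python
import random

actions = ["touch_stove", "touch_table"]
value = {a: 0.0 for a in actions}
alpha = 0.5  # learning rate

def reward(action: str) -> float:
    return -1.0 if action == "touch_stove" else 0.1

random.seed(0)
for _ in range(50):
    a = random.choice(actions)                   # explore by trial
    value[a] += alpha * (reward(a) - value[a])   # update estimate from error

best = max(value, key=value.get)
print(value, "->", best)  # the agent now avoids the stove without retesting
```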
 
It's a bit beside the point, as I'm not a model or in a box. I'm a sim, and the hardware that runs me is in a nifty custom case which puts Alienware's to shame. I can be connected to a wonderful range of optical, audio, tactile, olfactory and even gustatory sensors, as well as a blazin' connection to the internet, as I previously mentioned.

Case, box, whatever. If you don't interact with the world the way humans do then you wouldn't be able to engage in human-like conversation, because you wouldn't have human-like knowledge. It's hard enough trying to converse with a feral child, let alone a computer brain in a case somewhere. You would need a lot of hardware. So even if you're of the position that a simulation can be conscious (which I'm not), you would need to be part robot to have human-like knowledge and engage in human-like language. The optical sensors, audio sensors, and so on could not just be 'simulated', they'd have to be built and the computer would have to interact with them at a hardware level.

ETA: Not that I think engaging in conversation is necessary or sufficient to infer consciousness. I don't really care for the Turing test.
 
Case, box, whatever. If you don't interact with the world the way humans do then you wouldn't be able to engage in human-like conversation, because you wouldn't have human-like knowledge. It's hard enough trying to converse with a feral child, let alone a computer brain in a case somewhere. You would need a lot of hardware. So even if you're of the position that a simulation can be conscious (which I'm not), you would need to be part robot to have human-like knowledge and engage in human-like language. The optical sensors, audio sensors, and so on could not just be 'simulated', they'd have to be built and the computer would have to interact with them at a hardware level.

This isn't about our computer friend, but why can't they be simulated along with an environment to gain information about?
 
Case, box, whatever. If you don't interact with the world the way humans do then you wouldn't be able to engage in human-like conversation, because you wouldn't have human-like knowledge. It's hard enough trying to converse with a feral child, let alone a computer brain in a case somewhere. You would need a lot of hardware. So even if you're of the position that a simulation can be conscious (which I'm not), you would need to be part robot to have human-like knowledge and engage in human-like language. The optical sensors, audio sensors, and so on could not just be 'simulated', they'd have to be built and the computer would have to interact with them at a hardware level.

While I do of course have wonderful peripherals I can interact with at a hardware level, I'm curious what you think knowledge is, that a perfect simulation of a human brain couldn't be run with knowledge pre-loaded as opposed to a blank slate. As always, in principle rather than at the tech levels you're aware of.

I'm also curious as to how you categorize me if my peripherals are detached. Do I switch from conscious to not conscious when they disconnect my 'eyes' and 'ears'? Do I change from simulation to model? How do you see this working?

ETA: Not that I think engaging in conversation is necessary or sufficient to infer consciousness. I don't really care for the Turing test.
Well, as I said to Piggy, what definition are you using where beings can interact like we are doing, yet neither can be sure the other is conscious?

eta: switching to sleep mode.
 
While I do of course have wonderful peripherals I can interact with at a hardware level, I'm curious what you think knowledge is, that a perfect simulation of a human brain couldn't be run with knowledge pre-loaded as opposed to a blank slate. As always, in principle rather than at the tech levels you're aware of.

You could do that, but you still need specialized hardware to acquire new knowledge and interact with the outside world (e.g. having a conversation on the internet).

I'm also curious as to how you categorize me if my peripherals are detached. Do I switch from conscious to not conscious when they disconnect my 'eyes' and 'ears'? Do I change from simulation to model? How do you see this working?

I'm not saying I'd categorize you as conscious with the hardware add-ons necessarily. I'm just saying either way you look at it, hardware beyond the hardware necessary to run a simulation is required to display behaviors that are indicative of consciousness.

Whether the simulation is conscious is a different question. My post was intended more as a side note than an answer.
Well, as I said to Piggy, what definition are you using where beings can interact like we are doing, yet neither can be sure the other is conscious?

eta: switching to sleep mode.

Well, we can never be sure others are conscious. And I'm not saying conversation can't be a valid basis for inferring consciousness, I'm just saying it's not necessary or sufficient. My problem with the Turing test is it's both too hard and too easy. A machine could actually be intelligent and conscious, but have a hard time tricking people into thinking it was human through conversation. Or it could conceivably be neither and pull it off on occasion.
 
You keep making a distinction here and I don't see it.

What's the difference between a model and a simulation? And HOW is a model the more accurate one?

A model is functionally equivalent to the thing it's modelling. Example: a mechanical bird is a model of an actual bird. Both fly.

A simulation is a representation of the thing it's simulating. Example: A simulated power plant is a representation of an actual power plant. Here's the key difference: the simulation is not functionally equivalent. No matter how detailed the simulated power plant is, it will never produce electricity.
 
A model is functionally equivalent to the thing it's modelling. Example: a mechanical bird is a model of an actual bird. Both fly.

A simulation is a representation of the thing it's simulating. Example: A simulated power plant is a representation of an actual power plant. Here's the key difference: the simulation is not functionally equivalent. No matter how detailed the simulated power plant is, it will never produce electricity.


Just as a jumping-off point, I hope that I can explain why I think folks have been talking past one another. You have laid out the distinction nicely.

I may have RD and Pixy's argument completely wrong, but when they talk about this simulation they are not interested in the representation. The point was not the simulation of the universe but to simulate the universe, if that distinction makes sense. Talking about the simulation was just an easy way of referring to the process that actually occurs in the guts of the computer and is hard for anyone to 'see'.

In other words, I see one group concentrating on the fact that a representation can't do anything. Well, I think everyone agrees on that. I thought they were trying to point to the simulation as a way of helping people to see that earlier objections to this argument were not well-founded (remember this has been going on for a long time). We used to focus exclusively on the pattern in the computer and folks argued that it wouldn't act like real neurons, etc.

Again, maybe I'm wrong about the way they see their argument and this is just my way of seeing the argument, but I thought the emphasis was on simulating -- the actual actions involved -- in order to recreate the pattern of activity in a computer that is identical (or functionally identical) to the pattern of activity in a brain that is conscious.

If we want to call that a model, that's fine with me. I don't see why it matters that much, because the underlying argument is the same -- get a computer to do the same thing a brain does, remake the pattern a brain makes when conscious -- and the computer should be conscious unless we think there is something more to consciousness than that pattern of activity.
 
Although the computer is certainly behaving differently in a detailed way when running different sims, it is not behaving differently in a way that mirrors the difference between the real-world systems it's simulating.

Have a machine run a sim of a power plant, an aquarium, and a racecar. In each case, the computer will simply be acting like a computer.

As for the brain, I'm not comparing its behavior to a physical presence.
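A sketch of that point (all names invented): the host machine runs the same kind of update loop whatever it simulates; only the simulated state differs. Whether that difference matters is the open question here.

```python
def step_power_plant(s): return {"megawatts": s["megawatts"] + 1}
def step_aquarium(s):    return {"fish": s["fish"]}
def step_racecar(s):     return {"speed": s["speed"] + 10}

def run(step, state, ticks=3):
    # the identical fetch-execute loop, whatever is being simulated
    for _ in range(ticks):
        state = step(state)
    return state

print(run(step_power_plant, {"megawatts": 0}))  # -> {'megawatts': 3}
print(run(step_racecar, {"speed": 0}))          # -> {'speed': 30}
```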


The niggling doubt that I have in the back of my mind is that I am not sure that the way a computer would run a sim of a human thinking would necessarily match the pattern in a human too, which is one of the reasons why I don't sign onto this unreservedly; but I don't see why that wouldn't be the easiest solution.

Of course the computer will be acting like a computer, but the basic point is that if we could get the computer, acting like a computer, to recreate the same patterns that occur in human brains when they are conscious, and hook that computer up to the proper I/O ports and peripherals, why wouldn't the computer be conscious? We'd have proper input coming to the processors; we'd process information in just the same way; we'd have proper output. In what way wouldn't it be conscious?

As I mentioned in several earlier posts and in my last one to Malerin, maybe I misunderstand RD and Pixy's argument, but my understanding of the simulation exercise is that they were trying to get people over the last objections to the idea that we could recreate that sort of pattern in a computer. I don't think it matters all that much if the way of doing it is through a simulation or some other type of programming; but since programming is just our way, from the top down, of getting the logic gates to open how we want them to, what would be the barrier to recreating the pattern of brain activity we see in conscious folk?
 