
Has consciousness been fully explained?

Status
Not open for further replies.
The niggling doubt that I have in the back of my mind is that I'm not sure the way a computer would run a sim of a human thinking would necessarily match the pattern in a human brain, which is one of the reasons I don't sign onto this unreservedly; but I don't see why that wouldn't be the easiest solution.

Of course the computer will be acting like a computer, but the basic point is that if we could get the computer, acting like a computer, to recreate the same patterns that occur in human brains when they are conscious, and hook that computer up to the proper I/O ports and peripherals, why wouldn't the computer be conscious? We'd have proper input coming to the processors; we'd process information in just the same way; we'd have proper output. In what way wouldn't it be conscious?

As I mentioned in several earlier posts and in my last one to Malerin, maybe I misunderstand RD and Pixy's argument, but my understanding of the simulation exercise is that they were trying to get people over the last objections to the idea that we could recreate that sort of pattern in a computer. I don't think it matters all that much if the way of doing it is through a simulation or some other type of programming; but since programming is just our way from the top-down to get the logic gates to open how we want them to, what would be the barrier to recreating the pattern of brain activity we see in conscious folk?

You have to go back to my thought experiment with Guy.

The computer can't take the place of the brain, because a computer physically does what a computer does, not what a brain does.

[ETA: See westprog's recent re-posting of the example of the power plant.]

The brain is a chunk of matter. You can replace it with a functional model, sure.

But that functional model has to be able to carry out all the physical actions of the brain in 4-D spacetime, just as a functional model of a leg has to be able to do the same.

You can't get that through programming.
 
Mathematics is the base language humans use to describe the world.

Logically, this implies that if there is no mathematical difference between two objects, a human cannot distinguish them.

Logic, piggy -- it is your friend. Well, maybe not your friend ...

Reason is my friend.

Maybe not yours.

And yes, I can indeed distinguish between an orange and a mathematical description of an orange.
 
With all the repetition, this thread is rapidly approaching zombie status, and clearly we've long ago abandoned any discussion of the brain for the tired old who-shot-John about hypothetical conscious machines, so let's cut to the chase on some important issues.

For instance, conservation of matter and energy....

Why is it that computationalism violates these accepted principles of physics?

Well, we know that consciousness uses up significant resources. Which means that the body is performing some physical process during conscious awareness that it's not performing otherwise.

So for example, when I can't get to sleep -- as has been happening for the last few days -- what's going on is that my brain refuses to stop "doing consciousness". There's a resource-intensive physical process going on which won't shut down, no matter how much I wish it would. (Without a physical process, no behavior, no use of resources.)

So if we want to build a conscious machine, we have to make it use resources to "do consciousness" just as we would have to make it use resources to have a pulse.

Which is not a problem.

We can say that the pulse is the result of the actions of cells -- that is, the parts of the organic machine. In the robot, the action of the machine parts also produces a pulse.

Which is to say, I have to put a physical (not merely logical) apparatus in place to get the behavior.

But let's say I was to tell you that I'd built a man-made machine that also has a pulse and you say, "Oh, how do you do that?"

"I program it to have a pulse," I say.

"Ok," you reply, "so the programming helps manage the pulse rate, but how do you actually make the pulse happen?"

"What do you mean?" I ask. "There's no physical mechanism for the pulse. All I need is enough physical resources to support running the program."

"Hold on," you say. "You're telling me that you only expend enough resources in that machine to run the program, and no more, but as a result you get to run the program and you get a pulse?"

"Sure," I say.

Is this credible?

Well, no. You can't get behavior for free.

If programming is involved in my machine-with-a-pulse or my machine-with-consciousness, I must be using sufficient resources to support the programming as well as everything else needed to make the behavior occur.

In other words my machine must have some sort of functionally-equivalent physical mechanism to perform the feat. Programming alone cannot produce the behavior.

You cannot program behavior. If programming is involved, fine, but programming alone can't make a machine do the equivalent of what my body is doing when it runs, jumps, pumps blood, engages in consciousness, or does any other bodily function.

If you only expend enough resources to run logic, then all you get is running logic. You cannot get running-logic plus some other sort of behavior.

And a pulse is an essential component of consciousness...how? First, at a basic level, the machine would only have to duplicate the essential characteristics of consciousness.

Second, you seem to not understand how a simulation works. A 100% accurate simulation will simulate every single aspect of something within the simulated environment. That means there's a pulse in that simulated environment and everything else. Granted, that pulse isn't made of real blood in the real world, but it is made of virtual blood that would act just like real blood in the simulated world.

Now, on what basis would you say such a being isn't conscious? If you give it a virtual copy of a book, it can read it. If you have the computer speak out a question or anything else, it can respond to it. The question here becomes, what do you define as "consciousness"? Is it thoughts? Is it self-reflection? What? Because thoughts and self-reflection are things this simulation WOULD have (which can be emphasized, if you wish, by giving it an interface to the non-simulated world).

If you disagree, then you have to explain how, given that it can respond to its simulated environment just like a person would respond to a non-simulated one, it isn't conscious. Why wouldn't it respond to a real environment just like a normal person if given a proper interface (e.g. it could, for instance, chat very easily just like a normal person, sharing ideas, learning, etc.)? If you don't think it can do this, then please explain how you think the simulation will fail.
 
Here's another little peek into reality, and how it differs from sims.

Let's say I want to teach my son about the interrelationships involved in seal populations in the wild.

We use a computer to simulate this.

We start out very simply. We just have numbers representing seal populations, the populations of various species of fish, the number of fishermen and their harvests, the temperature of the ocean, and so forth.

We set up the relationships, and we can see how the seal populations fluctuate depending on the fish populations, predation, environmental conditions, and such.

It's just numbers, but I can tell my son which numbers stand for what, and we can see how it all works.
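The "just numbers" stage described above can be sketched as a toy model. Everything here is illustrative: the update rules and all the rate constants are invented for the example, not taken from any real fisheries model.

```python
# Toy discrete-time model of the "just numbers" stage: seal and fish
# populations coupled by simple rates. All constants are illustrative.

def step(seals, fish, catch_per_boat=50, boats=10):
    """Advance both populations by one season."""
    eaten = 0.001 * seals * fish                    # fish eaten by seals
    harvested = min(fish, catch_per_boat * boats)   # fishermen's take
    fish = fish + 0.3 * fish - eaten - harvested    # growth minus losses
    seals = seals + 0.0005 * eaten - 0.05 * seals   # food gained, natural deaths
    return max(seals, 0.0), max(fish, 0.0)

seals, fish = 200.0, 10_000.0
for season in range(5):
    seals, fish = step(seals, fish)
    print(f"season {season}: seals={seals:.0f}, fish={fish:.0f}")
```

Ramping up the detail, as the post describes, just means adding more numbers and more relationships of this kind: ocean currents, disease rates, employment figures, and so on.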

My son gets enthusiastic about it, and over time we ramp up the details. We introduce very detailed information about ocean currents, disease in the fish, even how employment rates cause the number of fishermen to change, and even down to how shortages in raw materials cause some boats to be added to the mix or taken out.

We go on from there, adding graphics, making it holographic in fact, representing everything in the system right down to the cellular level. We add projectors and you can actually step into this world and it's as real as if you were there.

In this simulation, there are numbers representing the molecules in each component. It's mind-bogglingly complex. It's so accurate that you can use it to make predictions about the real world.

And yet, despite all this, the computer that's running the sim never catches a fish, never swims, never gets divorced when a collapse of the fish population puts it out of a job, never lobbies Congress for a change in regulations, never freezes in winter.

In other words, the behavior of the computer continues to be computery -- never fishy, never watery, never windy, never sealy, never boaty, never fishermany.

Why? Because we haven't changed the computer into a seal. Or anything else.

We get tired of this eventually, and decide to look into the human body.

We begin with numbers representing blood pressure, height, weight, heart rate, and all sorts of other things.

Over time, we get more and more detailed, until there's nothing we can't know about our simulated human.

But the mechanism of the computer never gets a blood pressure, never runs through a shopping mall to catch up with its dad, never does anything that a human actually does... including being conscious.

There simply is no magical miracle moment at which a sim becomes sufficiently detailed that the machine running the sim stops acting like a machine running a sim and starts acting like the things which we imagine when we look at the outputs of the sim.

And that's why you can't get a machine to be conscious unless you build it to do the same physical stuff that makes a human being conscious.

You can't program your way into it.

There's no reason why there can't be conscious machines. But there's no such thing as a conscious program.
 
No, they don't.

How not?

Which is an entirely different question, of course.

If you care to have a discussion about how the brain works, that's great. We can get into that.

Trouble is, nobody knows how the brain does consciousness. We can't yet explain it.

We have only the beginnings of an exploration.

Where should we start?

I think we should start at the basic mechanics of the brain. So how do you think the brain works?
 
And a pulse is an essential component of consciousness...how?

If this is where you start, you're headed straight for confusion.

You have utterly failed to understand what I'm saying.
 
You have to go back to my thought experiment with Guy.

The computer can't take the place of the brain, because a computer physically does what a computer does, not what a brain does.

The brain is a chunk of matter. You can replace it with a functional model, sure.

But that functional model has to be able to carry out all the physical actions of the brain in 4-D spacetime, just as a functional model of a leg has to be able to do the same.

You can't get that through programming.

Really? Through programming my computer can replace my Super Nintendo, a machine made of completely different parts.
 
I think we should start at the basic mechanics of the brain. So how do you think the brain works?

This is a meaningless question.

As Richard Feynman once said, "No one knows how dogs work".

Can you get a bit more specific, please?
 
If this is where you start, you're headed straight for confusion.

You have utterly failed to understand what I'm saying.

Then explain, because I don't see why unrelated items are necessary to simulate in a strictly minimalist sense. Hell, replacement hearts don't even PRODUCE a pulse. So why are you niggling over non-essential behavior?

This is at the HEART of the issue, because it is my argument that a computer could indeed produce all the essential behavior of the brain (with some interfaces to accept input and produce output for nerves).
 
Really? Through programming my computer can replace my Super Nintendo, a machine made of completely different parts.

I'm sorry, but your assertions are so muddled that they aren't even right or wrong. I have no idea how to respond to them.
 
That means there's a pulse in that simulated environment and everything else.

No, there's not.

At least, not in any way that matters.

When you look at the output of the sim, you may imagine a pulse. But that's as far as it goes.

There's no world there for anything to exist in.
 
This is a meaningless question.

As Richard Feynman once said, "No one knows how dogs work".

Can you get a bit more specific, please?

That's not a meaningless question. Let's try again. I'll reword it.

What are the major features of the human brain? What are the major cell types and what do they do inside the brain? What do we know about how these cells are organized and how they function and interact with each other?
 
I'm sorry, but your assertions are so muddled that they aren't even right or wrong. I have no idea how to respond to them.

How is that muddled? My computer can replace an entirely different machine. It doesn't need any of the special physical circuitry of that machine. All that physical stuff is modeled in software. It's your assertion that something modeled in software can never replace the physical thing it is modeling. ZSNES and tons of other software prove that assertion wrong.

If I am misunderstanding you, then please explain what you meant instead.
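The ZSNES point, software standing in for a machine built from completely different parts, can be illustrated with a toy interpreter. The instruction set below is invented for the example (it is not the SNES's); the point is only that a program can play the role of hardware that doesn't physically exist.

```python
# A toy interpreter: this Python program behaves as a tiny stack machine,
# even though no physical stack-machine circuitry exists anywhere.
# The instruction set (PUSH/ADD/MUL) is invented for the example.

def run(program):
    """Execute a list of (opcode, operand) pairs and return the final stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# Compute (2 + 3) * 4 on the emulated machine
result = run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
              ("PUSH", 4), ("MUL", None)])
print(result)  # [20]
```

The host computer never acquires the emulated machine's circuitry; it just reproduces its input/output behavior, which is exactly the property the argument above turns on.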
 
Durr.

Any "program" runs on a physical substrate.

So actually, nothing can be gotten by programming alone.

Please answer the question -- if a computer is running a simulation of a brain, and it is hooked up to I/O devices, and the whole construct acts conscious in the real world, is the simulation now a model?

Your distinction between a model and a simulation is arbitrary. You keep asking this stupid question "is simulated water wet?" Why don't you answer the other question -- is modeled water wet? What does that even mean?

A model of an aquarium may be wet. On the other hand, it may use other materials. As long as it meets the physical functionality demanded, it suffices.

Westprog has already explained to you why your question about a computer running a sim hooked up to I/O devices makes no sense, so I'll leave it at that.
 
But a neuron does not function like a leg, piggy.

That's right.

Nor does it need to for the point I was making, which is simply that you can only replace a neuron or a leg with a functional model, but not with a digital simulation.
 
Westprog, not a single person who is claiming a computer can be conscious in a vague non-human way would deny that a cockroach could also be conscious in some vague non-human way.

Please explain how a roach's nervous system is sufficiently robust to support conscious awareness.

We don't yet know what the brain is doing, but we can be pretty darn sure that a roach ain't Marvin.
 
Everyone knows darn well that the computational side is all about function.

You've got to be kidding me.

Not only has there never been any demonstration that a computer can carry out the functions, not only is there no theoretical basis for such a notion, but you talk about mathematics, relationships (see "entification"), and in-sim frames of reference, but never about functionality.
 
That's not a meaningless question. Let's try again. I'll reword it.

What are the major features of the human brain? What are the major cell types and what do they do inside the brain? What do we know about how these cells are organized and how they function and interact with each other?

Where in the world are you going with this?

Please just cut to the chase.
 
Where in the world are you going with this?

Please just cut to the chase.

This thread is about the brain and consciousness, right? You don't see how talking about the physical characteristics of the brain and how it functions is relevant?
 
How is that muddled? My computer can replace an entirely different machine. It doesn't need any of the special physical circuitry of that machine. All that physical stuff is modeled in software. It's your assertion that something modeled in software can never replace the physical thing it is modeling. ZSNES and tons of other software prove that assertion wrong.

If I am misunderstanding you, then please explain what you meant instead.

See westprog's discussion of why a simulation of a power plant control system can't be hooked up to the power station and actually run it. See also my discussion of Guy as opposed to rocketdodger's flawed thought experiment which attempts to assert the equivalent of the assertion that westprog is wrong.
 