
Has consciousness been fully explained?

So, for the computationalists, let's say that instead of running a sim of one person, you have your computer run a sim of several people.

Now, your computer is conscious of several different minds at once.

Oh noes! Your computer would go insane!

Would this also mean that the people in the sim could read each other's minds?

When I'm conscious, my locus of awareness is consistently in the area of my cranium. So for a computer that's simultaneously conscious of many different brains, those loci of awareness would surely all be centered on the same physical region of the computer's mechanism.

Unless the computer has a way of divvying them up into different regions. But how would that be accomplished?

You clearly understand nothing about the computational model of consciousness.
 
Would you care to describe this "simulated neuron" as well as the method you will use to graft it into the brain?

Well, for simplicity, let's say we just take an entire neuron and make a simulation that includes the behavior of the membrane -- everything else about a neuron just supports the membrane behavior; the membrane is what is important.

In fact I can write a program that does an approximation of that quite easily -- I doubt you have a concern with this aspect.

Then the interface could be some ion detectors that register the ions diffusing across intercellular space at synapses from other neurons, and some kind of ion-producing output devices that mimic that same ion diffusion at downstream synapses.

I don't see why it would be so magical to, for instance, simply control an ion dispenser with a computer rather than the natural biological mechanism that currently exists. If the ions flow, what is the difference?
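The "program that does an approximation" of membrane behavior really is easy to write. Here is a minimal leaky integrate-and-fire sketch; the parameter values are illustrative assumptions, not physiological measurements:

```python
# Minimal leaky integrate-and-fire sketch of membrane behavior.
# All parameter values are illustrative, not physiological measurements.

def simulate_membrane(input_current, dt=0.1, tau=10.0,
                      v_rest=-65.0, v_threshold=-50.0, v_reset=-65.0):
    """Integrate the membrane potential over time; record a spike when
    the threshold is crossed, then reset (mimicking an action potential)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential, pushed up by the input current.
        dv = (-(v - v_rest) + i_in) / tau
        v += dv * dt
        if v >= v_threshold:
            spikes.append(step)   # a downstream "ion release" would fire here
            v = v_reset
    return spikes

# A constant driving current produces a regular spike train.
print(simulate_membrane([30.0] * 1000))
```

The membrane model is the easy part, as the poster says; the ion detectors and dispensers at the interface are the engineering problem.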
 
What does "vague, non-human way" mean when it comes to consciousness? (Yes, I read that, too.)

And what's your design for the conscious toaster?

Do you think a squirrel is conscious?

That is pretty non-human, no?

What about a bird?

A fish?

there you go
 
Which is unrecognizable in terms of the actual study of consciousness.

What nonsense.

At the very least, we know that if an entity standing in front of a mirror raises its hand and runs it through its own hair, there is a very good chance it recognizes itself.

Why is self-reference such a mystery for so many people? The concept is just about as simple as you can get ..
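Self-reference really is as simple as claimed; the classic demonstration in code is a quine, a program whose output is its own source text:

```python
# A quine: a self-referential program that prints its own source code.
# The string s describes the program, including the description itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

No exotic machinery is needed, just a string that is interpolated into itself.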
 
Then you hold an indefensible position.

Consciousness is something the body does.

It doesn't matter if anyone is able to produce something which fools you into thinking it's conscious when it's not.

I mean, really... this is flawed thinking at the most basic level.

You might as well say that the test for whether or not someone's being honest with me is if they can convince me that they're being honest.

Or that the test for how fast an object is traveling is how fast it makes me think it's traveling.

You might as well say that the test for whether or not you are conscious is if you can convince yourself that you are conscious.

Oh ... wait a second ....
 
A random process, carried out long enough, will eventually replicate any set of rules. But that's beside the point.

What is the mechanism by which consciousness arises from people writing down 1's and 0's? And what becomes conscious? The paper?

You don't get it

The computational model is about causation.

If a random sequence of events is not observably different from a similar causal sequence, then it is impossible to say if the sequence was causal or random.

A sequence that is observably random is not causal. Thus by definition no random sequence of steps is equivalent to any algorithm.
 
That's some sort of joke, right? You think online poker is something that requires consciousness to play? Are you going to say the same about Chess too? Picking an example where interaction is minimized is a bit silly.

You're moving the goalposts. The original exchange was:

How does that follow? We can be fooled by something that appears to be conscious, but isn't.

Drachasor said:
Really? Name an example that holds together for a sustained duration.

I did. Poker bots certainly appear to be conscious: they're programmed that way. People won't play against bots, so the designers make the poker bots' actions as human as possible.

Now you're moving the goal posts with qualifiers like "sustained duration" and "level of interaction". What if the person who's being fooled by something that appears to be conscious isn't that bright? Consider the following exchange on Jabberwacky:
[Jabberwacky's responses marked "J:"]
Me: Why should I stay?
J: Why should you go?
Me: I don't know.
J: You're not that smart...
Me: **** you.
J: Don't be rude.
Me: I'm sorry.
J: You better be sorry.
Me: Excuse me?
J: Yes. You heard me.
Me: You're not very nice.
J: I'm sorry.
Me: You're forgiven.
J: Thank you.
Me: How's the weather?
J: Not a cloud in the sky.
Me: It's cold here.
J: I see. Where are you?


Apparently, I'm not that bright. So let's say I was fooled by this chat bot. The conversation was "extended duration". Does that mean Jabberwacky is conscious?

Levels of interaction and duration of being fooled have nothing to do with the fact that people are not infallible. We can be fooled by something that appears to be conscious. So then, how do you go from "appears to be conscious" to "is conscious"? You can't. Only you know if you're conscious or not.
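It takes surprisingly little machinery to appear conversational the way Jabberwacky does. A toy responder can be nothing more than keyword lookup over canned replies; the rules below are invented for illustration:

```python
import random

# Toy keyword-matching chat bot: no understanding, just canned responses.
# All rules here are invented for illustration.
RULES = [
    ("sorry",   ["You're forgiven.", "No need to apologize."]),
    ("weather", ["Not a cloud in the sky.", "It's cold here."]),
    ("you",     ["We were talking about you, not me."]),
]
DEFAULT = ["I see. Where are you?", "Why do you say that?"]

def respond(message, rng=random.Random(0)):
    """Return a canned reply keyed on the first matching keyword."""
    lowered = message.lower()
    for keyword, replies in RULES:
        if keyword in lowered:
            return rng.choice(replies)
    return rng.choice(DEFAULT)

print(respond("I'm sorry."))
print(respond("How's the weather?"))
```

A bot this shallow can still carry a short exchange, which is exactly why "it fooled me" is such a weak criterion.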
 
It will cause the simulated universe to stop working.

A simulation that is an accurate representation of the big bang will cause a simulated universe to start working.

A simulation that is an accurate representation of a mechanical cipher will encode information.

A simulation that is an accurate representation of a PS1 will run playstation games (that are on simulated CDs, of course...or real CDs if you have it translate information from your CD drive).

A simulation that is an accurate representation of a brain with input and output will take in stimulus along simulated nerves and output stimulus along simulated nerves. Feed it an education and Shakespeare via the inputs and it will simulate a changing and learning brain, and simulate emotions and feelings regarding the Bard. It would be able to simulate the creation of a term paper and simulate a real brain's reaction to emotional events. It could simulate falling in love. How would this not be conscious?
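The PS1 example is ordinary emulation, and the same point holds for something much smaller. A toy interpreter for an invented stack machine really does run programs written for that machine (the instruction set here is made up for illustration):

```python
# A tiny invented stack machine: an "accurate simulation" of this machine
# is just an interpreter, and it genuinely runs programs for the machine.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, computed by the simulated machine:
result = run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
print(result)
```

The interpreter doesn't merely depict arithmetic; it performs it on behalf of the machine it simulates.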

It could simulate -- to the imagining mind of one who knows how to read it -- all sorts of behaviors we associate w/ conscious beings.

That, however, has nothing to do with the question of manufacturing machines which themselves can be conscious.
 
The brain is a massively parallel information processing system. It uses neurons to create patterns by adding and deleting the connections between them. A neurobiologist could go into great detail about synaptic learning and how the axons in the neuron can vary the connection to create different kinds of patterns and signal flow.

However, the bottom line is that it is, in principle, an organic computer. The program (eta: well, one program) this organic computer is running either produces or is called consciousness, depending on viewpoint.

If you can simulate that on another kind of computer, if you emulate the organic computer and run the same program, it is only to be expected that the program once again either produces or is conscious.
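The synaptic-learning piece of that picture can be caricatured in a few lines. A minimal Hebbian update (neurons that fire together strengthen their connection) might look like this, with arbitrary values standing in for real plasticity:

```python
# Hebbian weight update: strengthen the connection between neurons
# that are active at the same time. Values are arbitrary illustrations.
def hebbian_step(weights, activity, rate=0.1):
    n = len(activity)
    return [[weights[i][j] + rate * activity[i] * activity[j]
             for j in range(n)] for i in range(n)]

# Three neurons; neurons 0 and 1 fire together, neuron 2 stays silent.
w = [[0.0] * 3 for _ in range(3)]
for _ in range(5):
    w = hebbian_step(w, [1.0, 1.0, 0.0])

print(w[0][1])  # co-active pair: connection strengthened
print(w[0][2])  # never co-active: connection unchanged
```

This is of course a cartoon of real synaptic learning, but it shows the kind of rule the "organic computer" description is gesturing at.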

The brain is a mass of physical stuff. It acts just like any other mass of physical stuff.

One of the things it does is perform a behavior we call consciousness.

If we want to build a machine that does the same thing, it will have to perform an equivalent set of actions in 4-D spacetime in order to achieve that result.

On the other hand, a machine that runs simulations is a machine that runs simulations. If that's what it's built to do, that's what it does, regardless of the real-world behaviors of the systems which it symbolically represents in simulations.

If you want a machine that does what the brain does, including varying the synaptic connections, then you have to build a machine that actually does that, not a machine which simulates a system which does that.

Your claim that the brain is an "organic computer" is unfounded.
 
I am saying it will behave like that physical system within the simulation.

Which makes no sense, because the machine does not behave like anything within the simulation, unless you're simulating a computer as a redundant system.

What happens symbolically within the frame of reference of the simulation -- which only exists in the imagination of the interpreter -- only happens symbolically within the frame of reference of the simulation, which only exists in the imagination of the interpreter. (Tautology, that.)

And that has nothing at all to say, and no influence upon, the mechanisms of the apparatus designed to support the simulation.

In order to produce a conscious machine, you have to produce an apparatus that mimics the physical activity of the brain, because that's what makes consciousness happen.

On the other hand, if you produce instead a machine that runs simulations -- which is a behavior that our brains don't perform -- then you get that rather than a machine which behaves like a brain.
 
No, my point is that for something like consciousness, "real within the simulation" isn't any different from "real." If you can't tell the difference in behavior when you hook it up to the outside world, interact with it, or whatever, then it is really conscious.

Uh... are you saying that human beings have screens which play videos?

Let's keep our frames of reference straight.

If behavior is your criterion, then you have to admit that a computer running a simulation does not behave at all like a conscious being.

But in any case, no, it's not true that we can assume that a thing actually does X if it can fool a person into thinking it does X.
 
What's the prevailing wisdom on whether human consciousness is somewhat (perhaps even completely) "learned" versus being "wired in" via genetics/evolution?

Consider a newly born baby. What kind of consciousness might it be experiencing? Presumably it has essentially no idea of the meaning of language, or what to make of the input arriving from eyes and ears except in a rudimentary way. Some input has been arriving while it was developing in the womb so those systems have been exercised to some degree but I'm not sure I can imagine what "meaning" it might consciously attribute to any particular inputs at that point.

I'd be interested in knowing more about the experiences of people who may have been completely blind or deaf "from conception onwards" (so far as that makes sense) and then regained those senses at some much later stage of life when they were able to communicate reasonably clearly what that experience was like for them.

If a human embryo developed in such a way that none of the usual senses were functioning, would we still expect consciousness to be present later (assuming the body as a whole still continued to grow and function "normally", insofar as that was possible)?
 
All evidence indicates this is the case.

Actually, there is no evidence to indicate that consciousness is information processing, for the same reason that there's no evidence to indicate that black holes are mathematics.

We can describe some of the behavior of the brain in terms of IP, and we can describe black holes in terms of mathematics, but that's as far as it goes.
 
Cause you keep asking if simulated water would be wet, and you think a "model" is somehow better than a "simulation." So I was wondering if "model" water can be wet.

Better?

I don't know what that means.

They're certainly different.
 
You clearly understand nothing about the computational model of consciousness.

There is no computational model of consciousness.

No one has offered any explanation of how it would work.

What I do understand, however, are the basics of physics and some of the more well known studies on the brain.
 
Well, for simplicity, let's say we just take an entire neuron and make a simulation that includes the behavior of the membrane -- everything else about a neuron just supports the membrane behavior; the membrane is what is important.

In fact I can write a program that does an approximation of that quite easily -- I doubt you have a concern with this aspect.

Then the interface could be some ion detectors that register the ions diffusing across intercellular space at synapses from other neurons, and some kind of ion-producing output devices that mimic that same ion diffusion at downstream synapses.

I don't see why it would be so magical to, for instance, simply control an ion dispenser with a computer rather than the natural biological mechanism that currently exists. If the ions flow, what is the difference?

If you're replacing the neuron with a physical apparatus which happens to incorporate a computer, that's not a simulation of a neuron, that's a functional model.

No one denies that a conscious machine can be created in theory, nor that such a machine could well incorporate a computer.

But that's a far cry from saying that a "conscious program" can exist, or that we can get consciousness from programming alone, i.e. by using only enough hardware to support running the logic.

In your example here, for instance, you include hardware to carry out the physical behavior of the neuron.

Similarly, your conscious machine will need to include hardware to carry out the physical behavior of the brain.
 
Do you think a squirrel is conscious?

That is pretty non-human, no?

What about a bird?

A fish?

there you go

A squirrel is probably conscious because its brain is likely doing the same physical stuff that makes us conscious, too.

So, what about your conscious toaster? How is that designed?
 
What nonsense.

At the very least, we know that if an entity standing in front of a mirror raises its hand and runs it through its own hair, there is a very good chance it recognizes itself.

Why is self-reference such a mystery for so many people? The concept is just about as simple as you can get ..

The problem here is, we don't yet have any way of knowing whether such behavior could result from brain activity that doesn't involve consciousness.
 
You might as well say that the test for whether or not you are conscious is if you can convince yourself that you are conscious.

Oh ... wait a second ....

We don't need tests for ourselves. We observe our own consciousness starting and stopping.

We don't need tests for other people, because we all have similar enough brains.

Unless you're Terri Schiavo, in which case we can say that she was not conscious -- despite some behavior which appeared superficially to be similar to conscious behavior -- because the regions of the brain necessary for doing consciousness were destroyed.
 
