
Has consciousness been fully explained?

In consciousness, would that measure be one of assigning value?

Assigning value to what has been verified (or what is true), what is relevant (especially in reference to what the goals are in respect to which something is relevant), and doing so in the most elegant way.

Simulate that :D


Yeah, I think that is one extremely important part of it where consciousness is concerned. That is why I have tried to get a conversation started about "what is feeling?", since values seem to arise in feelings. Valuation is obviously a much higher-level function (?) than simple receptor triggering, but the principle should still be the same.
 
And when writing computer programs of course you're dealing with logical inference: perfect information in an ideal world of True and False (as opposed to empirical inference, where there's always room for doubt that the value you assign is the correct value). The information that is processed by computers (and brains?) then is "perfect" in its own logical domain (again, assuming no misfirings; thus the importance of error-checking), but still subject to GIGO (and the reasoning of the program itself: a bad one -- faulty understanding / model of the world -- can by itself turn very good information into garbage, and produce a bad [well-informed but unintelligent] response).
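The GIGO point above can be sketched in a few lines (a purely illustrative example of my own, not anything from the thread; the sensor-reading scenario is invented):

```python
def mean(xs):
    """Logically 'perfect' in its own domain: exact arithmetic on whatever inputs it gets."""
    return sum(xs) / len(xs)

clean = [20.1, 19.8, 20.0]       # good sensor readings (deg C)
garbage = [20.1, 19.8, -9999.0]  # one corrupted reading slipped past error-checking

print(mean(clean))    # ~19.97, a sensible answer
print(mean(garbage))  # about -3319.7: the computation was flawless, the output is garbage
```

The inference itself never errs; only the inputs (and the model embodied in the program) can, which is exactly the GIGO point.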


Yes, exactly. And that is one of the problems we have discussed in the past, and I have no idea how to overcome it with programming when it comes to differences between how computers and people work. The information in computers is 'pristine' while everything we receive and do has a fuzzy tinge about it, for want of a better expression.
 
Non sequitur.

Yes, I must be there to see that action as addition.

But I don't have to be there in order for it to be a fact that two objects have indeed been aggregated with two other objects to create a group of four objects.



I think we are saying the same thing here.
 
You know, it's funny that we always get this onslaught from folks who want to think about consciousness in terms of conscious computers, which don't exist, haven't even been designed, and lack even a theoretical underpinning, when we don't yet understand how the brain generates conscious experience.

I suppose it's easier when you don't have to deal with the messy reality of things.

But the simple fact is, if we want to understand consciousness, we must first understand what the brain is doing. No way around it.

To believe we can understand it instead by discussing computers, which are not conscious, is laughable.

Yes, computers can help us in our efforts to understand the brain, but they can also help us understand weather, and no one thinks that you can actually create weather by programming.



I don't think we can understand it by discussing computers. My only interest is in exploring whether it is possible, in theory, with a computer. I think computers can be very helpful tools in understanding how it all works. This is a huge issue, and I think we need to use all the tools at our disposal to help understand how the brain does it.
 
I'm not sure this definition is very good if the conclusion one takes is that a rock receives information about the weather from a falling tree. That sounds like a fairly silly thing to have one's definition conclude.

I think this has been addressed -- it is information, just potentially very bad information.
 
So you're okay with the conclusion that a rock processes information? Or does "processing" require something else?

Depends on what people mean by "processing."

When people like Pixy say "processing" I assume they mean a series of causal switch events that cascade in an ordered way, like what happens in a brain or a computer when new input arrives and is analyzed.

If one wants to instead call any behavior at all "processing" then yeah a rock "processes" information, but in that case you need yet another term for what brains and computers -- or even simple cells, actually -- do, because it is somewhat different.
 
I think this has been addressed -- it is information, just potentially very bad information.


Yeah, I see that now; my mistake. Blobru's explanation helped a lot. He crystallized what I was trying to say (and making a hash of, because I didn't have it quite straight in my own mind) very well.



Happy Birthday.
 
Mathematically?

Do you think we live in math world?

Tell you what, run a simulation of air, then try to breathe it.

Mathematics is the base language humans use to describe the world.

Logically, this implies that if there is no mathematical difference between two objects, a human cannot distinguish them.

Logic, piggy -- it is your friend. Well, maybe not your friend ...
 
Depends on what people mean by "processing."

When people like Pixy say "processing" I assume they mean a series of causal switch events that cascade in an ordered way, like what happens in a brain or a computer when new input arrives and is analyzed.

If one wants to instead call any behavior at all "processing" then yeah a rock "processes" information, but in that case you need yet another term for what brains and computers -- or even simple cells, actually -- do, because it is somewhat different.


Is it actually different, or is it just more specific -- akin to Blobru's formulation for the specificity of information?
 
The argument that if a neuron simulation calculated the proper state 1000 times too slow or too fast it wouldn't "work" with other neurons is both trivial and obvious.

Coupling.

If the whole thing is simulated, time in our frame is no longer relevant -- only time in the simulation. Church-Turing is correct.

If only part is simulated, then time in the simulation must couple correctly with time in our frame. Obviously this requires something in addition to a "Turing machine" just like any two systems require an interface in order to interact with each other.
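The distinction above can be illustrated with a toy sketch (my own, with invented names; the "neuron" update rule is a stand-in): a fully simulated system keeps its own clock, so how long each step takes in our frame cannot affect the result.

```python
import time

def simulate(steps, slow=False):
    """Advance a toy state by fixed simulated-time ticks."""
    state, sim_time = 0, 0.0
    for _ in range(steps):
        state += 1             # the stand-in "neuron" update rule
        sim_time += 0.001      # simulated time advances by a fixed tick
        if slow:
            time.sleep(0.001)  # wall-clock delay: invisible from inside the simulation
    return state, sim_time

# Same final state and same simulated time, regardless of real-world speed.
fast = simulate(100)
slow = simulate(100, slow=True)
assert fast == slow
```

Only when the simulation must exchange signals with unsimulated parts does the wall-clock delay start to matter, and that is precisely the coupling/interface problem.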



Durr.

That's why I was careful to say "and a suitable interface" in every relevant statement of that post.

Try reading, westprog. It helps.



Well Al Bell said so, and you jumped in.

You contend that if we replaced a single neuron with a simulated one + interface, the brain would cease to be conscious?

What I don't get here is that you keep skimming over the point that a Turing machine + suitable interface doesn't work. Can't work. It won't be a Turing machine any more. It will be a different device altogether.

I know that you keep coming up with ways to avoid this. Yes, a perfect implementation of a Turing machine isn't possible in the real world. So what? The same applies to any design or concept. If we were to use that as an approach, we wouldn't be able to reason about any system.

What makes the concept of the Turing machine useful is that we can make predictions about computations. These predictions are of great practical value. We know that we can launch our Pascal computations into the time-sharing computer, and not worry about implementation details or interaction with the world - and be sure that the program which takes an hour will give exactly the same result as one that takes a millisecond.

This is clearly not the case with the replacement neuron. To talk blithely about coupling is to miss the point that a coupled Turing machine is not a Turing machine, and the reasoning we use about Turing machines no longer applies. A Turing machine is, by definition, a closed, non-interacting system. The people who design computers and operating systems have to go to great lengths to provide environments where programs can operate as if they were Turing machines. In almost every case, the computer and operating system which runs the programs has to use a different model, because the Turing model isn't appropriate for running a computer. I gave a link to a paper describing the issues involved.

So when describing a device which can replace a neuron in a human body, the Turing model is simply irrelevant. Turing-style programs are designed to work as closed systems. The neuron is designed to be open, time-dependent, asynchronous, reactive. The Turing model is of no help in understanding or replacing neuron behaviour.
 
This is begging the question, assuming your conclusions.

A conscious program cannot be created, because one cannot get consciousness in a machine by programming alone, just as one cannot get a pulse out of a machine by programming alone.

Durr.

Any "program" runs on a physical substrate.

So actually, nothing can be gotten by programming alone.

Please answer the question -- if a computer is running a simulation of a brain, and it is hooked up to i/o devices, and the whole construct acts conscious in the real world, is the simulation now a model?

Your distinction between a model and a simulation is arbitrary. You keep asking this stupid question "is simulated water wet?" Why don't you answer the other question -- is modeled water wet? What does that even mean?
 
Is it actually different, or is it just more specific -- akin to Blobru's formulation for the specificity of information?

There is no qualitative difference.

But then again, there is no qualitative difference in known reality at all, other than the discrete differences between fundamental particle types and maybe fundamental forces.

99.99999% of what people consider "qualitative" differences are actually just stacked quantitative differences.

Thus the difference between what a cell does and a rock does with information is just a whole lot of quantitative difference.
 
You contend that if we replaced a single neuron with a simulated one + interface, the brain would cease to be conscious?

If a neuron can be replaced by an artificial neuron that can perform the same function, then I don't see it as impossible to replace the whole system. Since we can't do this at present, it seems highly likely that we don't understand the full function of a single neuron.
 
There is no qualitative difference.

But then again, there is no qualitative difference in known reality at all, other than the discrete differences between fundamental particle types and maybe fundamental forces.

99.99999% of what people consider "qualitative" differences are actually just stacked quantitative differences.

Thus the difference between what a cell does and a rock does with information is just a whole lot of quantitative difference.


Yes, good, that is what I was trying to get at.
 
This is such a fog of confusion and conflation, it cannot be logically responded to.

By imposing these conditions, of course, you're simply avoiding the point altogether, because I wasn't discussing your perception of a leg, I was discussing your leg.

But a neuron does not function like a leg, piggy.

The function of a neuron that impacts the rest of your body is limited to ions flowing across the intercellular space.

The function of your leg that impacts the rest of your body involves propping you up, allowing you to move with bipedal locomotion, and even when you are just lying there many nontrivial forces are exerted on your hip due to the weight of your leg alone.

In other words, a neuron does not function like a leg, piggy.

So if you want to somehow make the analogy valid, you need to take the vastly differing function into account.

Hey you are the one that has been harping about physical function, are you not?
 
Much like how, say, a Turing machine interprets the string of symbols.

There's a reason for using definitions rather than lists. If you simply list off all the things you claim "interpret" data, that is a lot less convincing than giving a rule by which anyone can decide whether a system is interpreting data, or just dumbly responding to its environment. If you just say "this and this and this are interpreting, this and this and this aren't" and leave it as an exercise to say what the difference is, then that's not an argument, that's just a claim.
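The rule-versus-list contrast can be made concrete with a sketch (entirely my own illustration; the candidate rule and all names are invented, not a definition anyone in the thread proposed): a rule lets anyone test a new case, while a list just asserts membership.

```python
# List-style "definition": bare claims, no way to decide new cases.
interpreters_by_fiat = {"brain", "Turing machine"}

# Rule-style definition (one candidate rule, purely for illustration):
# a system "interprets" data if its response varies with the data's content,
# not merely with the data's physical presence.
def interprets(responses: dict) -> bool:
    """True if different inputs map to different outputs."""
    return len(set(responses.values())) > 1

rock = {"tree falls": "sits there", "rain": "sits there"}
thermostat = {"too cold": "heat on", "too hot": "heat off"}

print(interprets(rock))        # False: same response to everything
print(interprets(thermostat))  # True: the response tracks the input
```

Whatever one thinks of this particular rule, it is at least checkable by a third party, which is the point being made about definitions versus lists.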
 
A model is functionally equivalent to the thing it's modelling. Example: a mechanical bird is a model of an actual bird. Both fly.

A simulation is a representation of the thing it's simulating. Example: A simulated power plant is a representation of an actual power plant. Here's the key difference: the simulation is not functionally equivalent - no matter how detailed the simulated power plant is, it will never produce electricity.

This is bollocks.

Your mechanical bird here is only functionally equivalent on a single function -- flight.

If you considered many other functions, it would cease to be a model.

I can say a simulated brain is actually a model paperweight, because after all the computer housing the simulation does indeed satisfy the function of a paperweight.

Likewise, if you consider the text on this screen, one can say it is both a simulation and a model of actual paper text. Can you not? After all, it is "functionally equivalent" as far as a reader is concerned.

Oh, but it isn't really a model, you say, because you can't light it on fire like you can light paper on fire.

But you can't watch a mechanical bird lay an egg, either. So now a mechanical bird isn't a model?

Eh?
 
You didn't define it as far as it matters to the brain. This is what I think you must be referring to:



I see no definition in there. "We can define the information the brain receives in terms of nerve impulses and the bits of hormones and the like" isn't a definition.

If you didn't have an obstructive and troublemaking agenda, you wouldn't be asking for a definition. What are your religious beliefs, anyway?

Is your definition of information "something that can be taken, processed and result in output"? If that's the case, then everything is information and all physical interactions are information processing.

BTW, why did you link me to the wiki for Shannon's information theory when I initially asked the question? I was expecting some sort of elaboration.

Until you've totally finished your reading list you cannot speak authoritatively on this subject, or indeed any other.
 
It runs on a power source brought in from outside the system.

So what?

That was at some time in the past.

Are you now saying that people can't specify time constraints when they define "systems?"

The lengths you go to in order to equate computers and rocks is quite amusing.

So now a "rock" and a "computer" are actually the collection of particles and set of causal events throughout the history of time that led up to the current rock and current computer?

Wtf are you talking about?
 
