
Explain consciousness to the layman.

I never said a simulated tornado is a real tornado. Ever.




Again.... I apologize for misunderstanding these posts...


Typically there isn't, I agree.

However, you are the only monist on the entire forum who seems to dispute the notion that an arbitrary-granularity simulation of a society of people, down to the particle level, would contain actual conscious entities.

[snip]

[snip]

The world of the simulation is a world because it operates according to a set of mathematical rules. Those rules exist without humans; humans have nothing to do with them. Yeah, a human builds the world, but once that is done, the world operates like our world does even when nobody is monitoring it -- according to a set of rules. The rules are invariant, just as they are in our world. The fact that the rules in the simulation are different from the rules of our world (although not really, since mathematical isomorphisms will always exist) has nothing to do with anything; it is a world in and of itself nonetheless.
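A toy version of that claim can be sketched in a few lines. The rule used here (each cell becomes the sum of its two neighbors, mod 2) is an arbitrary invented example, not any particular simulation: the point is only that once the rules are fixed, the world evolves by them whether or not anyone is watching.

```python
# A toy invariant-rule world: once its update rule is fixed, it evolves
# deterministically, with no human monitoring required. The rule itself
# (neighbor sum mod 2 on a ring) is an arbitrary invented example.

def step(world):
    """Advance a 1-D ring world one tick under a fixed, invariant rule."""
    n = len(world)
    return [(world[(i - 1) % n] + world[(i + 1) % n]) % 2
            for i in range(n)]

world = [0, 1, 0, 0]
for _ in range(3):   # nobody inspects the intermediate states
    world = step(world)
```

Running the same initial state always produces the same history, which is all "operates according to a set of rules" requires.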
 
Not much of a reader, then?

The simple reality is that people reading forums usually have quite a bit of reading to do in the first place, so it is only natural to expect them to skip the long boring posts.
 
Taken as a whole, this system can be said to include a reference and referent. However, we should keep in mind that the human experience of the tree bears no actual resemblance to the tree -- it's made up entirely of things that have no existence outside of experience (color, odor, sound, texture, etc.).
Sure. Hence, the separation between intension and extension and the identification of an intension as a thing in our heads, independent of the extension in one sense, yet caused by it in another.

But there's a causal correlation between the referent of the tree and the reference. The concept of the tree is formed by perception of it.
What's going on in the non-conscious brain is like the second example, which is why it can be problematic to talk about references and referents when it comes to the non-conscious (or pre-conscious or para-conscious) activity of the brain.
Well, yes and no. Yes, the concept of a pebble in the non-conscious mind is an effect, and so is a wave caused by the pebble in the pond. So in terms of being an effect, they're the same.

But no, this is neither a problem, nor does it entail that there's no distinction between the concept of a pebble in the non-conscious mind and a wave. There are very big distinctions. My non-conscious mind can utilize the pebble in a model of reality in order to adjust non-conscious intentions; it can, effectively, run simulations, and use the result of those simulations to plan actions.

Back to the tree for a clearer example: I could be walking along a trail with my friend, completely engaged in a conversation, turning around, advancing, and so forth. Without even thinking about it, I can adjust my path so that I don't run smack dab into the tree -- specifically because my non-conscious brain models the tree, models it as a solid object, predicts that walking through the tree would not be an effective route, and adjusts the path I'm walking.

Now, the wave in the pond could also potentially be used in a model of reality by some sort of planning agent like this. But I just don't see how it can, unless you explicitly put an agency there that can do it.

Being an effect doesn't make a thing an intension. Intensions are representations used in models of reality by agencies.
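The non-conscious planning described above can be sketched minimally. All names here are hypothetical illustrations, not anything from the thread: a planner holds an internal model of the world, simulates each candidate move against that model, and discards any move the model predicts will collide with an obstacle.

```python
# A minimal model-based planner: candidate moves are "simulated" against
# an internal obstacle model, and the first move predicted to be clear
# is chosen. All names here are invented for illustration.

def plan_step(position, candidate_moves, obstacles):
    """Return the first candidate move whose predicted destination
    avoids every modeled obstacle, or None if all of them collide."""
    for move in candidate_moves:
        predicted = (position[0] + move[0], position[1] + move[1])
        if predicted not in obstacles:  # model predicts a clear path
            return move
    return None

# A walker at (0, 0) whose model contains a tree directly ahead at (0, 1)
# steps sideways instead, with no deliberation step anywhere in the loop:
step = plan_step((0, 0), [(0, 1), (1, 0)], {(0, 1)})
```

The tree here is an intension in exactly the sense above: a representation used in a model of reality to adjust intentions.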
 
It's interesting that you here explicitly state that the means by which an outcome is reached are of more significance than the outcome itself. Thus one program can use intelligence to produce an outcome, while another can use brute force. To the external observer, of course, both will behave identically.

I will point out that this analysis contradicts a purely behavioural view of consciousness, whereby if an entity shares the behaviour of a conscious organism, then it perforce is conscious.

There's an obvious contradiction between these two viewpoints.
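The point about identical outcomes reached by different means can be made concrete with an invented toy example: one function reaches the answer by insight (a closed-form formula), the other by brute force, and an observer who sees only inputs and outputs cannot tell them apart.

```python
# Two routes to the same outcome. To an external observer who only sees
# inputs and outputs, the functions behave identically; only the means
# differ. This is a toy illustration invented for this point.

def sum_by_formula(n):
    """The 'insightful' route: Gauss's closed form."""
    return n * (n + 1) // 2

def sum_by_brute_force(n):
    """The brute-force route: exhaustive enumeration."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```

A purely behavioural criterion, applied to these two, would have to treat them as equivalent; the argument above is precisely that the internal route might still matter.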
I used to think so too at some point, but as I understand it now, behaviorism actually accounts for these as internal behaviors. So now I'm not quite sure they are different. But I defer to the actual behaviorists :).

However, the main point here is simply that intelligence has this operational definition in AI. That in itself doesn't mean human intelligence behaves the same, but there are valid arguments that we use models of reality to analyze potential outcomes and use this analysis to direct actions.
 
Again.... I apologize for misunderstanding these posts...

First, none of these quotes mentions tornadoes.

Second, the statement "a simulated thing is real in the world of the simulation" is so obviously referring to the fact that a simulated tornado can blow down a simulated house that only someone trying to play to the crowd would suggest otherwise. Are you playing to the crowd?

And even if I had said that, unless I am a drooling idiot, I certainly would not have meant that a simulated tornado is the same as that thing that killed 200+ people in Tuscaloosa last year.

Do you think I am a drooling idiot? No, really, you must think so.

I don't understand how the sentiment on my side of this argument (that if you can replace a neuron with an artificial neuron, you can replace the whole brain with a computer that does the same calculations) somehow translates, in your mind and piggy's, into the ridiculous nonsense that you guys have been posting.

Honestly, who would think that a simulated tornado is the same as a real tornado?
 
The computational approach would claim that if you just add enough pages and ever more detailed drawings, sooner or later you'll end up with something entirely identical.


You may well be quite right.... rationalization so as to stay committed to one’s beliefs can result in all sorts of amazing mental gymnastics.
 

Well....great.... you are then finally in agreement with "Piggy et al".... good.

I am glad it was all a misunderstanding..... you can surely excuse the way I at least could have been confused by your assertions
the notion that an arbitrary granularity simulation of a society of people, down to the particle level, would contain actual conscious entities.
and
The world of the simulation is a world because it operates according to a set of mathematical rules. Those rules exist without humans, humans have nothing to do with it.


The above statements, and a few others like them, drove me to conclude erroneously that you might indeed believe that in the "world of a simulation" the "actual conscious entities" could be killed by a simulated tornado: they are, after all, "actual conscious entities", and so they can be killed and harmed by a tornado in the "world of simulation which is a world", just as its analog in our world "killed 200+ people in Tuscaloosa last year", people who were every bit as conscious as the "simulated conscious entities".

I mean, in the "simulated world which is a world" we could also have simulated Tuscaloosa, and the "conscious entities" there would consciously be frightened by the (in their consciousness) all too real tornado in their "world which is a world".


But I am glad it was all a misunderstanding. I apologize for not realizing that in fact you are indeed in agreement with us. :thumbsup: … welcome to the “Piggy et al” camp.

ETA: I forgot to add....since you are ridiculing anyone who might "think" that a simulated tornado is as real as the real one then you must also be laughing at people who confuse a simulated consciousness with a real one....right?
 
You may well be quite right.... rationalization so as to stay committed to one’s beliefs can result in all sorts of amazing mental gymnastics.

Leumas gets a 7.5 on the parallel bars of rationalization.
 
Well page 11 and still plenty of stupid. But basically I agree with everything westprog has said except I disagree that calling the brain a control system is particularly helpful.
 
No, it simulates reality as a basis for its own actions.

Doesn't the subconscious brain make use of the simulation (model) of reality that is generated from perception? What else has it to base its projections and actions on? ...

Nobody's defined consciousness for the layman yet, and now you want to speculate about a subconscious?

Good luck.
 
Leumas gets a 7.5 on the parallel bars of rationalization.



Thanks Tsig.... I do not deserve your misplaced generosity I assure you.

You on the other hand fully deserve a gold medal in the same category.
 
I've previously given the example of catching a ball as something that can't be done as a computational process, because it is not sufficient to accurately calculate the trajectory of the ball - a signal has to be sent soon enough that a hand can reach out and grab the ball. Clearly, a system that cannot guarantee that it will send the signal in time is not plug compatible. This is why I point out that the Turing model does not describe what the brain actually does. Rocketdodger insists that the sequential, deterministic, time-independent sealed Turing model is able to describe the asynchronous, non-deterministic, time dependent interactive functions of the human brain. I dispute that.
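The timing constraint in the ball-catching example can be sketched with invented numbers: an accurate trajectory calculation is useless if the motor signal arrives after the ball does.

```python
# A sketch of the real-time constraint on catching a ball. The numbers
# are hypothetical; the point is that correctness of the calculation is
# not sufficient -- the signal must beat a physical deadline.

G = 9.81  # gravitational acceleration, m/s^2

def flight_time(v_vertical):
    """Seconds for a ball launched upward at v_vertical m/s to return
    to launch height: t = 2v/g."""
    return 2.0 * v_vertical / G

def can_catch(v_vertical, compute_latency, reach_time):
    """True only if computing the trajectory plus moving the hand fits
    inside the ball's flight time."""
    return compute_latency + reach_time < flight_time(v_vertical)

# With roughly a 1-second flight, a 0.3 s computation plus a 0.4 s reach
# succeeds, while a 0.8 s computation misses however accurate it is.
```

Nothing in the Turing model itself expresses the deadline in `can_catch`; that is the gap being argued here between computation and interaction.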

It's certainly true that a GPCM could simulate the catching of the ball. Perhaps if the GPCM were extremely powerful, it could perform the simulation as quickly as if it really happened. However, it's a principle of the computational hypothesis that the conscious experience produced by the computation is entirely independent of how long it takes to run.

I'm aware that when people talk about replacing a brain with a GPCM, or computer, or artificial brain, they are thinking in terms of a machine that will actually allow the person to catch the ball. Such a machine would not be a pure computational device - it would be highly interactive. I consider it quite possible that some such device, able to control an actual human body, or a precise simulacrum of one, might be conscious where a pure simulation, running only on computer hardware, might not. This is not the computational view, however.

How the heck does that work out? So I have my robot human-body simulacrum, and let's say the computer brain part is too big to fit in the skull cavity, so all I have there is a wireless receiver/sender that passes data from the computer to the actual artificial muscles and gets feedback from the sensors in real time, so it can catch balls and so on.

Are you saying that you can imagine a setup whereby the computer running that thing might be conscious, but if you just had the computer running by itself it would cease to be conscious?
 
State of the Art - Machine Consciousness

2009 IEEE Symposium on Computational Intelligence and Games
Raúl Arrabales, Agapito Ledezma, and Araceli Sanchis
http://www.ieee-cig.org/cig-2009/Proceedings/proceedings/papers/cig2009_030e.pdf


"In this paper, we argue that current research efforts in the young field of Machine Consciousness (MC) could contribute to tackle complexity and provide a useful framework for the design of more appealing synthetic characters. This hypothesis is illustrated with the application of a novel consciousness-based cognitive architecture to the development of a First Person Shooter video game character.

C. Machine Consciousness
MC is a young and multidisciplinary field of research concerned with the replication of consciousness in machines. This is indeed a vast area of research, where different subareas can be identified: design of machines showing conscious-like behaviors, implementation of cognitive capabilities associated with consciousness, design of human-consciousness-inspired architectures, and creation of phenomenally conscious machines [19]. In this work we will focus on the first subarea: reproducing conscious-like behaviors. Delimiting the specific scope of this research line needs some clarification about the other related subareas. The phenomenal aspect of consciousness (the subjective conscious experience, or the what is it like to be conscious [20]) is the most controversial issue in consciousness studies. In order to avoid this dimension of consciousness for the moment, it is useful to conceptually distinguish between two main aspects or dimensions of consciousness: phenomenal consciousness (P-Consciousness) and access consciousness (A-Consciousness) [21]. While P-Consciousness refers to subjective experience and qualia, A-Consciousness refers to the accessibility of mental contents for reasoning, volition, and verbal report. Whether or not P-Consciousness plays a functional role in humans is a controversial issue (and could be thoughtfully discussed elsewhere, see for instance [23-25] for different arguments about this issue). Nevertheless, in the context of this work we will focus exclusively on A-Consciousness, i.e. well-known functional features of consciousness and how they can be integrated in a cognitive architecture [22].

II. A COMPUTATIONAL MODEL OF CONSCIOUSNESS
A review of the main scientific theories of consciousness is out of the scope of this paper (see [31] for such a review). As mentioned above, the proposed architecture is mainly based on the GWT and MDM. These theories provide rather metaphorical than technical descriptions of the main conscious processes in humans. Therefore, different computational models could be inspired by their principles. CERA-CRANIUM is one example; see [32, 33] for other AC implementations also based on the GWT but oriented to other problem domains."
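The Global Workspace Theory (GWT) the quoted paper builds on can be sketched in miniature. The names and the salience scheme below are hypothetical illustrations, not the CERA-CRANIUM architecture's actual API: specialist processes propose contents, the most salient proposal wins the competition for the workspace, and the winner is broadcast back to every specialist.

```python
# A minimal Global Workspace cycle: specialist processes compete for
# access to a shared workspace, and the winning content is broadcast
# to all of them. Names and salience values are invented illustrations.

def global_workspace_cycle(proposals):
    """proposals: dict mapping process name -> (content, salience).
    Returns the winning content and the broadcast each process receives."""
    winner = max(proposals, key=lambda name: proposals[name][1])
    content = proposals[winner][0]
    broadcast = {name: content for name in proposals}  # global broadcast
    return content, broadcast

# One cycle for a game character's competing specialist processes:
winning, received = global_workspace_cycle({
    "vision":  ("enemy ahead", 0.9),
    "hearing": ("footsteps",   0.4),
    "memory":  ("ammo is low", 0.6),
})
```

This is the A-Consciousness side only: globally accessible contents directing behavior, with no claim about subjective experience.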
 
What are the relevant properties?
Depends on what type of consciousness it is. In general, there would probably be a propensity to develop more novel solutions to problems than otherwise, but it is hard to quantify something like that at the moment. And such a talent is not necessarily indicative of conscious thinking.

For a more human-like consciousness, a sense of free will (whether that freedom is real or an illusion) is probably a good start.

Passing the Turing test would be a good bonus, I suppose. Some might argue a P-zombie could pass one, but then again, other people argue that we are all really P-zombies anyway.

I once wrote up a post, in another thread, with other things to look for, but I will have to dig it up, later.
 