"Intelligence is Self Teaching" A paranormal experience into A.I and Intelligence.

I think you are having issues with terms and integrating neural nets with conscious and unconscious processes...
this might help...

Also try the Bayesian brain model (which I favour).
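For reference, the Bayesian brain model treats perception as probabilistic inference: the brain holds a prior belief and revises it as noisy sensory evidence arrives. A minimal sketch, with all probabilities invented purely for illustration:

```python
# Toy Bayesian update: a 'brain' revising its belief that an object
# is present, given repeated noisy sensory cues. Numbers are illustrative.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

belief = 0.5                       # initial belief: object present?
for _ in range(3):                 # three successive consistent cues
    belief = bayes_update(belief, likelihood_if_true=0.8,
                          likelihood_if_false=0.3)
print(round(belief, 3))            # ~0.95 after three consistent cues
```

Each cue only nudges the belief, but consistent evidence compounds quickly; that accumulation is the core of the model.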

•••

I've made reference a number of times to how I perceive the integration of mind and body as a neural net which also allows extension via tools...

This is an interesting article that I agree with in terms of perception/mind, but it goes further, showing how the actions of the body impact the state of mind...
A few of our philo debaters on consciousness should maybe read this :rolleyes:

Yet there still is no mental self until thinking commences. It is not a substantial "thing," as I'm sure we both agree. The sense of it existing emerges purely as an artifact of thinking. No thinking = no mental self.

Nick
 
I certainly wish someone somewhere on JREF were more articulate in representing the materialist claim about the origins of consciousness. This is the same problem I have whenever I discuss this with anyone who takes the hard-edged materialist position: they never seem able to explain it fully. There is always a set of information that is either avoided in consideration or simply ignored, I assume because it does not fit their framework of reality.

But science is unfinished and certain issues may never be fully resolved. This to me is just being honest. A philosophy will inevitably have limitations. The basic materialist claim as I understand it about consciousness is that it is processing. It emerges from processing. That's it really. Others will say that you need to have a certain amount of feedback loops present to call it "consciousness."

Nick
 
I certainly wish someone somewhere on JREF were more articulate in representing the materialist claim about the origins of consciousness. This is the same problem I have whenever I discuss this with anyone who takes the hard-edged materialist position: they never seem able to explain it fully. There is always a set of information that is either avoided in consideration or simply ignored, I assume because it does not fit their framework of reality.

To me, all they are saying is 'if you have x in place, and y in place, and z in place, and g, h, and l are happening, then *poof* you have consciousness.' Yet none of them explain 'how' x, y, and z, in relationship to g, h, and l, make information or a process conscious -- just that they somehow do, or must.

Remember, philosophy works within limits (and should always state them). A philosophy, which is what materialism is, doesn't explain every detail. It simply follows assumptions and sketches out the results of those assumptions as best it can. So the materialist account of consciousness isn't a technical manual for creating consciousness from matter; it's a rough outline of how the manual might be written and what sorts of things will need to be included. A full technical manual for creating consciousness doesn't exist yet; possibly it never will.

The debate among philosophies (materialism / physicalism, idealism, dualism, etc.) and various models within those philosophies is over which assumptions produce the most complete, coherent, and consistent outlines; which set of assumptions agrees best with the facts we have about the world; which conclusions from which set of assumptions agree best with the facts we have about consciousness. Even if the logic is flawless (which is always arguable), the outline will be incomplete, because the data is. This applies to all philosophies.

So the materialist claim isn't, "here's consciousness explained: neuron by neuron, bit by bit, quark by quark"; it's "here's what we know about the brain and body and computers; here's what we can conclude from that; look how much it explains! look how it agrees with what we know! not bad, huh? a lot better than those other stupid philosophies..." (materialism's kinda rude sometimes, given its successes in the last few centuries; though some of the things idealism says about it! -- wow -- bitter much?).

They always claim that consciousness must be created by the brain, is the brain, and is nothing more than the brain, and they always seem to say that consciousness is the result of computation that is not medium-specific (meaning computers can create consciousness just as the 'meat' of biology can); yet when you present claims of the universe as a quantum computer, with the inference that the universe is therefore conscious in some sense, I get nothing but silence.

When it comes time for material reality to transcend itself into the very unknown thing that materialists want to deny exists, I believe we witness materialists confronting the 'woo-z-ness' of their own belief system.

The 'materialist' claim, following the computational model, isn't -- or at least shouldn't be -- that all computation / information processing leads to consciousness. Consciousness should emerge only from certain ways of processing info, from running certain types of programs. In the same sense, MS Word or Jewel Quest or the JREF bulletin board don't emerge from just any information processing, only from their specific programs processing information.

If the universe is a quantum computer, then it is running a program from which the universe emerges. That's the only supported inference. Certain systems within the universe -- Bubblefish, Nick227, Limbo, PixyMisa, !Kaggen, macdoc, blobru (well, sometimes, almost) -- would be conscious, because they are running specific programs that generate those particular consciousnesses. The universe would only be conscious if it is running a consciousness program.

"Mustn't the universe be conscious if parts or it are?" we might wonder. Well, the universal program would certainly contain consciousness programs. However, a top-level program doesn't automatically take on the functionality of its subprograms (which we would be, if the universe is really a quantum computer). In computer science, this is known as "encapsulation": subprograms do things which the main program isn't concerned with; all it 'sees' is input to and output from the subprograms. So, in a computational model, our consciousness programs would simply be taking in and returning material data to the universal program.
 
Why bother bringing symbols into it? It's clear without them.


It was you who brought symbols into it when you said 'little bits of language'.

The brain may be undergoing immense amounts of processing beneath conscious thinking, and for sure the brain is highly configured to process unconsciously for selfhood. Yet the mental self is a conscious phenomenon. No thinking = no mental self. And as "experience" merely emerges from this process too, no thinking = no experience.


What about the experience of not-thinking?

So-called "experiencing" performs a social function. If you were trapped alone on an island, after a while experiencing would stop! There would still be thoughts, but no longer any reason to construct experiences.


Do you have any evidence to support this?
 
What about the experience of not-thinking?

Who is there to have it, without thinking? Thinking packages sensory data into subject-object format. There is actually nothing inherent in sensory information which indicates subject-object relationships. It's just information. There's nothing in it to suggest that it belongs to someone, or that one bit is me and another bit you. Of course, our brains have been conditioned through evolution to process data according to the needs of the self, yet there is still nothing inherent in sensory information to indicate an owner or a subject-object divide.

Meditators will tell you that sensory information is fundamentally non-dual. And materialism reinforces this.

Do you have any evidence to support this?

No. And I've little intention of trying it out!

Nick
 
They always claim that consciousness must be created by the brain, is the brain, and is nothing more than the brain, and they always seem to say that consciousness is the result of computation that is not medium-specific (meaning computers can create consciousness just as the 'meat' of biology can); yet when you present claims of the universe as a quantum computer, with the inference that the universe is therefore conscious in some sense, I get nothing but silence.

Now you're back with intelligent design, as I see it. Can you find evidence for this? Animals are conscious because they are subject to environmental pressure and consciousness is evolutionarily favoured. The universe doesn't, as far as we know, have any other universes to contend with.

When it comes time for material reality to transcend itself into the very unknown thing that materialists want to deny exists, I believe we witness materialists confronting the 'woo-z-ness' of their own belief system.

Eastern religions are fond of imagining an "unlimited field of immanence," or "ground of all being" - viz. Brahman, Shunyata, or similar concepts. But they didn't have science back then. I imagine they were trying to expand their existing Idealist philosophy to integrate what meditators and mystics spoke about, and so they had to create something for manifestation to emerge from. Something which, of course, could not itself be manifest.
With the rise of materialism we now have the "neural substrate." We know consciousness emerges from this. There may still be a "ground of all being," but with the collapse of Idealism any need for it has vastly receded.

Nick
 
ahhh...lucidity

Remember, philosophy works within limits (and should always state them). A philosophy, which is what materialism is, doesn't explain every detail. It simply follows assumptions and sketches out the results of those assumptions as best it can. So the materialist account of consciousness isn't a technical manual for creating consciousness from matter; it's a rough outline of how the manual might be written and what sorts of things will need to be included. A full technical manual for creating consciousness doesn't exist yet; possibly it never will.

I follow that, and agree until you hit the part about 'creating consciousness'. I'm not sure you have agreement with that in the A.I. camp. Dennett himself is pretty clear about consciousness emerging from computers (by accident, if not via a clear 'manual' as you claim), even claiming some computers now may have a rudimentary consciousness.

The debate among philosophies (materialism / physicalism, idealism, dualism, etc.) and various models within those philosophies is over which assumptions produce the most complete, coherent, and consistent outlines; which set of assumptions agrees best with the facts we have about the world; which conclusions from which set of assumptions agree best with the facts we have about consciousness. Even if the logic is flawless (which is always arguable), the outline will be incomplete, because the data is. This applies to all philosophies.

Yes, I can see that; the objective data is incomplete. However, only the materialistic philosophies (from the 18th century until the early 21st -- I consider my philosophy, futurism, to be a form of materialistic philosophy, and not one I've found most materialists can account for) rely on the objective data, while other philosophies seek to incorporate objective data with subjective experience, which is always personal. Dennett doesn't seem to agree that subjective experience is off limits; he seems to think that we can actually know exactly what subjective experience is. The problem is that the two schools of thought here don't seem to agree on what subjective experience actually is. It's not so much that there is disagreement as that there is confusion about what each other means.

One of the problems I have with hard-edged materialism is not so much the conclusion, but how the conclusion is explained, and how it then frames the philosophy. For example, the following statement is technically false: "Consciousness emerges from the brain." That statement is incomplete. Consciousness does not emerge from the brain but rather, following the materialistic model, from brains, plural. It emerged, I assume, over a period of millions of years inside a vast network of nervous systems that interact with each other. I don't think consciousness was something like the hundredth monkey, where one organism developed it and passed it on, teaching other members of the species how to flip the switch, so to speak. Consciousness as we understand it is not only self-reflective but group-reflective; it needs an 'other' in order to define itself (even if falsely). It may not emerge from me; it may emerge from 'us'.

I think that soft difference alone can make all the difference in how the philosophy is framed.


So the materialist claim isn't, "here's consciousness explained: neuron by neuron, bit by bit, quark by quark"; it's "here's what we know about the brain and body and computers; here's what we can conclude from that; look how much it explains! look how it agrees with what we know! not bad, huh? a lot better than those other stupid philosophies..." (materialism's kinda rude sometimes, given its successes in the last few centuries; though some of the things idealism says about it! -- wow -- bitter much?).

But I think it's the only thing you can conclude when looking at it through that particular materialistic medium, and I don't see it accounting for the vast tapestry of subjective experience - which often, to me, just gets shrugged off in communication.


The 'materialist' claim, following the computational model, isn't -- or at least shouldn't be -- that all computation / information processing leads to consciousness. Consciousness should emerge only from certain ways of processing info, from running certain types of programs. In the same sense, MS Word or Jewel Quest or the JREF bulletin board don't emerge from just any information processing, only from their specific programs processing information.

I don't think the model says it is program-specific. I could be wrong here, but I think the model suggests it is pathway- and ranking-specific. I call it the SEO model of consciousness, using Google as an example. You have a few billion webpages, each one with its own unique ranking, supported by a network of other webpages. According to the materialistic model, using Google as a metaphor, the first page of search returns IS consciousness.

You might say, 'hey bubblefish, nah, that's not a good model of consciousness; there is nothing in the Google algorithm that programs for "self" or "awareness".' Well, I am not the only person who suggests there is enough complexity on the web for that to happen.

http://www.newscientist.com/article/mg20227062.100-could-the-net-become-selfaware.html

So the materialistic model does not need a program for consciousness in order to have consciousness. Plus, how could you get a program for self except by accident, if you use the natural selection model? Natural selection is just an algorithm, right?
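The ranking metaphor can be made concrete with a toy PageRank-style iteration over a tiny invented "web". This is only a sketch of the general idea of link-based ranking; Google's actual algorithm is far more elaborate:

```python
# Toy PageRank-style ranking over a tiny 'web' of four pages.
# Purely illustrative of the ranking metaphor, not Google's algorithm.

links = {                       # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
damping = 0.85
rank = {p: 1 / len(links) for p in links}

for _ in range(50):             # iterate until the ranks settle
    new_rank = {}
    for page in links:
        inbound = sum(rank[q] / len(links[q])
                      for q in links if page in links[q])
        new_rank[page] = (1 - damping) / len(links) + damping * inbound
    rank = new_rank

top = max(rank, key=rank.get)   # the 'first page of search returns'
print(top)                      # "C" -- the most linked-to page wins
```

Each page's rank is supported by the ranks of the pages linking to it, which is exactly the "supported by a network of other webpages" picture above.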

This is all just leading to this next point,

If the universe is a quantum computer, then it is running a program from which the universe emerges.

That doesn't make any sense to me: how could it run the program before it emerges? If natural selection is just an algorithm, and Google is just an algorithm, and our brains just have vast competing structures that 'win' themselves into being conscious, how could the universe, which is far more vast and complex than any mind can imagine, not 'stumble' upon something similar through its computations?

And once 'finding itself aware', I would imagine that it, like us, would want to conserve that, prolong it, evolve it, understand it.

I think the universe as a conscious computer fits perfectly with the materialist model.

That's the only supported inference. Certain systems within the universe -- Bubblefish, Nick227, Limbo, PixyMisa, !Kaggen, macdoc, blobru (well, sometimes, almost) -- would be conscious, because they are running specific programs that generate those particular consciousnesses. The universe would only be conscious if it is running a consciousness program.

I'm not in agreement here that consciousness is 'program'-specific. It could be running a program for black holes and quantum fields, and lo: quarks become self-aware in the process. I guess by program you mean algorithm; to me, program means something more intentional. I may have a layman's understanding of 'program', so it could also just be semantics here.

"Mustn't the universe be conscious if parts or it are?" we might wonder.

As futurists, we MUST wonder about this! Sure, we can say the universe is conscious of itself through us, but the universe might be conscious of itself in a way that we simply cannot understand; by default it would be of a rank of intelligence far beyond our own, and we would be limited in modelling it by the limitations of our own 'program', as you put it.

Well, the universal program would certainly contain consciousness programs. However, a top-level program doesn't automatically take on the functionality of its subprograms (which we would be, if the universe is really a quantum computer). In computer science, this is known as "encapsulation": subprograms do things which the main program isn't concerned with; all it 'sees' is input to and output from the subprograms. So, in a computational model, our consciousness programs would simply be taking in and returning material data to the universal program.

The universe, with its vast quantum fields, far beyond what anyone here is capable of modelling I suspect, is a candidate for consciousness and intelligence based on materialistic models, if thought through, in my opinion. I am a futurist after all :)
 
Now you're back with intelligent design, as I see it. Can you find evidence for this? Animals are conscious because they are subject to environmental pressure and consciousness is evolutionarily favoured. The universe doesn't, as far as we know, have any other universes to contend with.

Seth Lloyd at MIT says he does. And he doesn't use 'computer' as a metaphor, as in the universe is like a computer, he says flat out that it IS a computer.

I'm just comparing materialistic models, Nick, and trying to follow them through to what seem to me to be logical conclusions.


Eastern religions are fond of imagining an "unlimited field of immanence," or "ground of all being" - viz. Brahman, Shunyata, or similar concepts. But they didn't have science back then.

That is because they EXPERIENCE them, and the truth value is personal, not objective. Not really a fair comparison.

I imagine they were trying to expand their existing Idealist philosophy to integrate what meditators and mystics spoke about, and so they had to create something for manifestation to emerge from. Something which, of course, could not itself be manifest.
With the rise of materialism we now have the "neural substrate." We know consciousness emerges from this. There may still be a "ground of all being," but with the collapse of Idealism any need for it has vastly receded.

I think that explaining consciousness as a possible 'field' of subjectivity is something to consider. A field of subjectivity would be no more elusive to science than Dark Matter is.

Also, when we create models, and they make sense and afford us their predictive power, we still have to consider them just models; philosophically, I believe it's the best bet never to assume 100% certainty in our use of any of them, not until we can create cells, life, consciousness, and black holes. Until we have that level of understanding (and I believe in our distant future we will), we should all remain a little skeptical, since we are making extraordinary claims about reality, all of which require extraordinary evidence.
 
The 'materialist' claim, following the computational model, isn't -- or at least shouldn't be -- that all computation / information processing leads to consciousness. Consciousness should emerge only from certain ways of processing info, from running certain types of programs.

Hi Blobru,

Can you explain this more? I don't understand why certain ways of processing should not be considered consciousness.

Nick
 
That is because they EXPERIENCE them, and the truth value is personal, not objective. Not really a fair comparison.

I don't see that it's to do with subjectivity. When Vedanta, for example, attempts to integrate mystical awareness into its cosmology, it undergoes precisely the same process the brain does when it tries to create an experience from what is there. It has to render what is there into subject-object relationships. It has to package it. And when you try to integrate mystical awareness into idealism you end up with Brahman - the ground of being. In materialism, like I say, you have the neural substrate. The need for a transcendent of this nature is lost.

Nick
 
ahhh...lucidity

I follow that, and agree until you hit the part about 'creating consciousness'. I'm not sure you have agreement with that in the A.I. camp. Dennett himself is pretty clear about consciousness emerging from computers (by accident, if not via a clear 'manual' as you claim), even claiming some computers now may have a rudimentary consciousness.

Sure. It's not a matter of design, necessarily. It may happen by accident. If a computer, in the way it processes information, crosses the threshold we define consciousness by -- achieves enough of whatever is required for "consciousness" -- then it is conscious (even though it was only made to monitor systems and output advice, perhaps).

Yes, I can see that; the objective data is incomplete. However, only the materialistic philosophies (from the 18th century until the early 21st -- I consider my philosophy, futurism, to be a form of materialistic philosophy, and not one I've found most materialists can account for) rely on the objective data, while other philosophies seek to incorporate objective data with subjective experience, which is always personal. Dennett doesn't seem to agree that subjective experience is off limits; he seems to think that we can actually know exactly what subjective experience is. The problem is that the two schools of thought here don't seem to agree on what subjective experience actually is. It's not so much that there is disagreement as that there is confusion about what each other means.

The debate over "qualia" -- what something 'feels like' to the individual -- is a lively one, alright. It could be that human experience, for example, is made up of a sensory language that is unique to embodied, biological systems; that a computer, lacking biology, would experience things in a very different way -- solely as math, perhaps; but does that really count as "experience"? If not, it may be that consciousness is limited to biological systems, and computers won't ever be "conscious" (again that would depend how we define it: are qualia essential to consciousness? Do qualia even exist?) There are all sorts of open questions within the philosophy, mostly because there are no standard definitions for "consciousness", "experience", "qualia", etc.

One of the problems I have with hard-edged materialism is not so much the conclusion, but how the conclusion is explained, and how it then frames the philosophy. For example, the following statement is technically false: "Consciousness emerges from the brain." That statement is incomplete. Consciousness does not emerge from the brain but rather, following the materialistic model, from brains, plural. It emerged, I assume, over a period of millions of years inside a vast network of nervous systems that interact with each other. I don't think consciousness was something like the hundredth monkey, where one organism developed it and passed it on, teaching other members of the species how to flip the switch, so to speak. Consciousness as we understand it is not only self-reflective but group-reflective; it needs an 'other' in order to define itself (even if falsely). It may not emerge from me; it may emerge from 'us'.

I think that soft difference alone can make all the difference in how the philosophy is framed.

I don't think the brain is framed as if it is alone in the universe. Many of its systems (language, for one) presuppose other 'brains' to interact with. The brain is assumed to have evolved within nature, which assumes other members of the species (else the species goes extinct pretty quickly).

But I think it's the only thing you can conclude when looking at it through that particular materialistic medium, and I don't see it accounting for the vast tapestry of subjective experience - which often, to me, just gets shrugged off in communication.

It's very difficult to explain things bottom up (experience is as high level as you get in a materialist model). And I don't think materialism has found very good metaphors for explaining experience within its framework of integrated parts and synchronous processes creating systems which then become parts and processes in more complex systems and so forth. So it often gets "shrugged off"... I agree. That's why the debate is still active: anti-materialists say you can't explain it! -- therefore it's impossible. Meanwhile, many materialists believe they have explained it. Well, maybe they have; just not very well... (arguably).

I don't think the model says it is program-specific. I could be wrong here, but I think the model suggests it is pathway- and ranking-specific. I call it the SEO model of consciousness, using Google as an example. You have a few billion webpages, each one with its own unique ranking, supported by a network of other webpages. According to the materialistic model, using Google as a metaphor, the first page of search returns IS consciousness.

As above, it's not a matter of a specific program design (it can't be at present, because we don't know how to do it). It's a matter of a program achieving enough complexity, self-reference, etc. to qualify as "conscious". Google may be a good prototype for a system which, in a more complex form, could become conscious (and a page of search returns, in that case, would be a conscious experience for super-Google).

You might say, 'hey bubblefish, nah, that's not a good model of consciousness; there is nothing in the Google algorithm that programs for "self" or "awareness".' Well, I am not the only person who suggests there is enough complexity on the web for that to happen.

http://www.newscientist.com/article/mg20227062.100-could-the-net-become-selfaware.html

No argument, here. The discovery of a 'self' or 'awareness' algorithm will almost certainly be, as most discoveries are, serendipitous. Google's an excellent candidate.

So the materialistic model does not need a program for consciousness in order to have consciousness. Plus, how could you get a program for self except by accident, if you use the natural selection model? Natural selection is just an algorithm, right?

Yes. The program may occur accidentally, stumbled on by human researchers, or possibly evolve out of computer systems that mimic evolution (into a final form that we don't understand, possibly). Or it may come from a specific design. All sorts of ways.
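The "natural selection is just an algorithm" point can be sketched as a minimal evolutionary loop: random variation plus selection, with the fittest kept each generation. The target, rates, and population size below are arbitrary illustrative choices, nothing more:

```python
import random

# Toy 'natural selection': evolve a bit-string toward an all-ones
# target by mutation and selection. Purely illustrative.

random.seed(0)
GENOME_LEN = 20
TARGET = [1] * GENOME_LEN

def fitness(genome):
    """Count the positions that match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                   # selection
    # Keep the parents unchanged (elitism) and fill out the
    # population with mutated offspring.
    population = parents + [mutate(random.choice(parents))
                            for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best))    # climbs toward the maximum of 20
```

No step in the loop "designs" anything; a high-fitness genome is simply stumbled upon, which is the sense in which a program for self could arise by accident.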

This is all just leading to this next point,

That doesn't make any sense to me: how could it run the program before it emerges? If natural selection is just an algorithm, and Google is just an algorithm, and our brains just have vast competing structures that 'win' themselves into being conscious, how could the universe, which is far more vast and complex than any mind can imagine, not 'stumble' upon something similar through its computations?

It has. Us. But our encapsulated algorithm is different from its. Ours is conscious. Its may lack self-reference (remember, our self-reference is internal to our system).

And once 'finding itself aware', I would imagine that it, like us, would want to conserve that, prolong it, evolve it, understand it.

I think the universe as a conscious computer fits perfectly with the materialist model.

It's not ruled out. But, at present, we don't observe it acting like entities we consider conscious (learning and communicating in an environment, for example), so it doesn't seem a good candidate.

I'm not in agreement here that consciousness is 'program'-specific. It could be running a program for black holes and quantum fields, and lo: quarks become self-aware in the process. I guess by program you mean algorithm; to me, program means something more intentional. I may have a layman's understanding of 'program', so it could also just be semantics here.

As discussed, consciousness is not program-specific. It is, though, almost certainly (unless "Hello, World!" is conscious) class-specific: there are certain properties (vast complexity, massive parallelism, self-reference, etc.) a program must have to belong to the class; many programs will belong to the class, but many, many more will not.
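The class-specific idea might be sketched as a simple membership test: a checklist of required properties. The property names and example systems below are invented purely for illustration; nobody knows the real checklist:

```python
# Hypothetical sketch of 'class-specific' consciousness: membership in
# the class of candidate-conscious programs as a property checklist.
# The property names and thresholds are invented for illustration.

REQUIRED = {"self_reference", "massive_parallelism", "vast_complexity"}

def in_candidate_class(properties):
    """A program qualifies only if it has every required property."""
    return REQUIRED <= set(properties)

hello_world = {"deterministic_output"}       # trivially simple program
human_brain = {"self_reference", "massive_parallelism",
               "vast_complexity", "embodiment"}

print(in_candidate_class(hello_world))       # False
print(in_candidate_class(human_brain))       # True
```

Extra properties (like `embodiment` here) don't disqualify a system; the class only demands that the required ones all be present.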

As futurists, we MUST wonder about this! Sure, we can say the universe is conscious of itself through us, but the universe might be conscious of itself in a way that we simply cannot understand; by default it would be of a rank of intelligence far beyond our own, and we would be limited in modelling it by the limitations of our own 'program', as you put it.

It may be. Current observations suggest it isn't. But you never know.

The universe, with its vast quantum fields, far beyond what anyone here is capable of modelling I suspect, is a candidate for consciousness and intelligence based on materialistic models, if thought through, in my opinion. I am a futurist after all :)

Everything's a candidate for consciousness: rocks, toasters, popsicles, bacteria, the universe. Some just aren't very good candidates.

Hi Blobru,

Can you explain this more? I don't understand why certain ways of processing should not be considered consciousness.

Nick

Hey conscious system labeled "Nick227" (it never ends, eh). :D;)

As above. I mean it may be that consciousness only emerges when certain things are present -- complexity, self-reference, life?, etc -- and that most systems of information processing don't have these, or enough, in the right order, to be "conscious" (unconscious decision-making, natural selection, the beach, big toenails, etc).

Maybe he's referring to - dun dun dun! - self-reference. ;)

Spoiler! :mad::p
 
Sure. It's not a matter of design, necessarily. It may happen by accident. If a computer, in the way it processes information, crosses the threshold we define consciousness by -- achieves enough of whatever is required for "consciousness" -- then it is conscious (even though it was only made to monitor systems and output advice, perhaps).

So any self-referencing system can potentially become conscious with enough complexity. Okay, we are in agreement here.


The debate over "qualia" -- what something 'feels like' to the individual -- is a lively one, alright. It could be that human experience, for example, is made up of a sensory language that is unique to embodied, biological systems; that a computer, lacking biology, would experience things in a very different way -- solely as math, perhaps; but does that really count as "experience"? If not, it may be that consciousness is limited to biological systems, and computers won't ever be "conscious" (again that would depend how we define it: are qualia essential to consciousness? Do qualia even exist?) There are all sorts of open questions within the philosophy, mostly because there are no standard definitions for "consciousness", "experience", "qualia", etc.

One of the problems, I believe, in the communication is that the experience (I refuse to use words like "qualia" because I think they are often more confusing than common language in this regard) can transcend sensations and feelings and involve vast stories of epic proportions, including textures, smells, millions of colors, experiences with 'other' intelligences that deliver some true knowledge and some false knowledge, poetry, and, most perplexing of all, humor. I don't actually like to use the word consciousness either. I think experience summarizes it all perfectly.



I don't think the brain is framed as if it is alone in the universe. Many of its systems (language, for one) presuppose other 'brains' to interact with. The brain is assumed to have evolved within nature, which assumes other members of the species (else the species goes extinct pretty quickly).

I'm glad we agree that there is an apparent dimension to experience that does not appear as the physical property that claims to account for it. There is a 'jump' in the logic of material reality, which appears summarized perfectly as A=A. Introduce consciousness, and all of a sudden you get A = :) - very bizarre to me.

When I read all the literature on the matter, and I hear the talk at all the philosophy cocktail parties, I only ever hear the word 'brain', never a network of brains. Could consciousness emerge from one self referencing system with enough complexity or does it need a network of 'other selves' doing the same thing? That's a question I have not heard an answer to. I think it's a damn good and relevant question too.


It's very difficult to explain things bottom up (experience is as high level as you get in a materialist model). And I don't think materialism has found very good metaphors for explaining experience within its framework of integrated parts and synchronous processes creating systems which then become parts and processes in more complex systems and so forth. So it often gets "shrugged off"... I agree. That's why the debate is still active: anti-materialists say you can't explain it! -- therefore it's impossible. Meanwhile, many materialists believe they have explained it. Well, maybe they have; just not very well... (arguably).

how is it that we can't just allow a little room for mystery there and keep our models a bit more open? Why do we need to assume that things must be a certain way or that things must make sense to us for them to exist?

Dark Matter is a head f*ck. Consciousness can be a head f*ck too. I believe we can explain these things when we open up our models a bit more to allow for the unknown to actually function as an entity in the system. (bubblefish ducks for cover)


As above, it's not a matter of a specific program design (it can't be at present, because we don't know how to do it). It's a matter of a program achieving enough complexity, self-reference, etc. to qualify as "conscious". Google may be a good prototype for a system which, in a more complex form, could become conscious (and a page of search returns, in that case, would be a conscious experience for super-Google).

I think once we introduce any form of competition, self reference may be eventual...would you agree with this?


No argument, here. The discovery of a 'self' or 'awareness' algorithm will almost certainly be, as most discoveries are, serendipitous. Google's an excellent candidate.



that doesn't make any sense to me, how could it run the program before it emerges? If natural selection is just an algorithm, and Google is just an algorithm, and our brains just have vast competing structures that 'win' themselves into being conscious, how could the universe, which is far more vast and complex than any mind can imagine, not 'stumble' upon something similar by its computations?
It has. Us. But our encapsulated algorithm is different from its. Ours is conscious. Its may lack self-reference (remember, our self-reference is internal to our system).

I don't believe we have enough knowledge to make the distinction between our algorithm and any potential algorithm the universe may have at its disposal. I don't think that comparison can be made in any meaningful way because we have no meaningful way we could model it, other than poetry.

Besides, there are all sorts of ways the universe could find self-reference, even from vast civilizations who decide they can hack it and program it to have consciousness. I mean, we just don't flat out know. But we do know that self-referencing systems would tend to produce consciousness according to materialistic models of consciousness. I'm not ready to say that the human being is the only self-referencing model the universe can produce.

and even if we are... well, the universe is so vast and potentially infinite that it's not likely consciousness would be solely limited to us earthlings. Given that sort of environment, anything that can happen will happen an infinite number of times. We could turn out to be the actual programming entity of the universe itself, which in a way would both nullify and harmonize both of our points of view here.

It's not ruled out. But, at present, we don't observe it acting like entities we consider conscious (learning and communicating in an environment, for example), so it doesn't seem a good candidate.

I don't think that's a fully considered POV on the matter. There are a few problems I find with it; help me here if I am mistaken. How could we observe a more intelligent entity or agency than us? Especially one that is potentially non-biological. We would not have any reference point, so not finding it is not surprising. And hardly anyone is really looking! (come on blobru, which philosophers have which research grants to test the proposition that a super computer universe could, should, should not or must not be aware?)

also, any reference point in such a field is by default self referencing, right?

and besides, when certain philosophical, scientific people do look at this question, they produce things like this

http://www.simulation-argument.com/simulation.html

Lol - and isn't the simulation argument intelligent design from another angle? (bubblefish teases the local skeptics)


It may be. Current observations suggest it isn't. But you never know.

what current observations are you talking about here? Do you mean your observations, or are you referring to some proper body, a think tank maybe? I am unaware that this issue is even readily addressed in philosophy much. Every time one of us futurists brings it to the table, we usually get teased and the kids stop playing with us.


Everything's a candidate for consciousness: rocks, toasters, popsicles, bacteria, the universe. Some just aren't very good candidates.

Is it fair to consider Google as a candidate for consciousness and not the entire friggin' universe? Is it fair to say that both toasters and the universe are in the same set of reference? Does that seem reasonable to you? Do you just mean the 'word' universe is not a good candidate? Because us philosophers say the word universe without much consideration to what it is that we are implying, and I agree, the word 'universe' may not be a good candidate for a self referencing system.
 
Who is there to have it, without thinking?


Yes indeed.

Thinking packages sensory data into subject-object format.


Is this "sensory consciousness without thought"? Or with?

Meditators will tell you that sensory information is fundamentally non-dual. And materialism reinforces this.


Fine with me. Monism it is. :D

"Instead of the stream of thought being one of con-sciousness, 'thinking its own existence along with whatever else it thinks'...it might better be called a stream of Sciousness pure and simple, thinking objects of some of which it makes what it calls a 'Me,' and only aware of its 'pure' Self in an abstract, hypothetic or conceptual way. Each 'section' of the stream would then be a bit of sciousness or knowledge of this sort, including and contemplating its 'me' and its 'not-me' as objects which work out their drama together, but not yet including or contemplating its own subjective being." -William James
 
Hey Blobru: Thanks again for taking the time with me, I wanted to address something here I was not sure I made clear

It has. Us. But our encapsulated algorithm is different from its. Ours is conscious. Its may lack self-reference (remember, our self-reference is internal to our system).

This statement you make is based on a truth value that consciousness IS purely based on our individual internal self-reference. I contest that this may not be a complete way to look at it, since consciousness emerged from brains, not a brain -- in groups, packs, tribes, each with a member playing a distinct role. We need others to know we exist, we need others to know we are conscious in the first place! I'm not convinced that the self-referencing model as supplied by Dennett is meaningful without other self-referencing systems evolving together with it. Where is this addressed in the literature?

What do groupthink, group mind, hive mind, and 'mass hallucination' (which is the skeptical position on things such as the 'Lady of Fatima' sightings in Portugal in 1917) say about a collective emergence of consciousness? Or the novelty of the 'Flynn Effect'? http://en.wikipedia.org/wiki/Flynn_effect

Concepts like 'morphic fields', 'noosphere', 'collective unconscious', 'historical dialectic' (Hegel), "Gaia" (Lovelock), 'co-intelligence'... all of these western ideas, I believe, share something in common which seeks to address this. Some of them may fail, but at least they are seeking to address and model it. I don't find the materialistic models addressing these things to any meaningful satisfaction, which is why I am still skeptical of the claims of Dennett, Bear, and that whole crowd. They seem limited by the steps reductionism has defined for them. Consciousness simply does not emerge from a brain!

And one last gripe - hah, forgive me, today I am suffering a headache. One last gripe is the materialistic model just seems to address the 'observer' and not the 'unconscious' mind. What about the unconscious mind? Are they saying it doesn't exist? I can't seem to get a decent answer from anyone on that.

Don't worry, I have more donuts :) Thank you, my friend, for taking the time with me here; this is a big part of my life, and what you say to me may wind up in a book one day. Creative Commons :)
 
Meditators will tell you that sensory information is fundamentally non-dual. And materialism reinforces this.

huh? That is an absolutely poor connection! The EXPERIENCE is what is non-dual, NOT the information! And since there are two clear, distinct schools of thought addressing this topic, it's not fair to say that both schools are somehow saying the same thing regarding dualism/non-dualism. Dualism exists in the mind and is projected onto nature because nature makes sense in binary. The pairs of opposites are a repeating theme in many 'inner' traditions, both eastern and western. While the west does not find this repeating pair of couplings, in both our minds and scattered throughout nature and the laws of the universe, to be significant, inner traditions do.
 
Bubblefish said:
Sure. It's not a matter of design, necessarily. It may happen by accident. If a computer, in the way it processes information, crosses the threshold we define consciousness by, achieves enough of whatever is required for "consciousness" -- then it is conscious (even though it was only made to monitor systems and output advice, perhaps).

So any self-referencing system can potentially become conscious with enough complexity. Okay, we are in agreement here.

Possibly, with enough of certain types of complexity. Massive parallelism (doing many small tasks simultaneously for one big task) seems crucial as well, for example.
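To make "many small tasks simultaneously for one big task" concrete, here's a minimal sketch of my own (the chunk sizes and worker count are purely illustrative, nothing from the discussion): one big sum split into small pieces that workers process in parallel, then recombine.

```python
# Toy massive parallelism: one big task (summing a million numbers)
# done as many small tasks run concurrently, then recombined.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # One small worker: sum only its own slice of the problem.
    return sum(chunk)

# Split the big task into four small ones.
chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # Each worker runs its piece; the pieces combine into one answer.
    total = sum(pool.map(partial_sum, chunks))

print(total)  # same as sum(range(1_000_000))
```

Of course four threads is a caricature of what a brain does with billions of units, but the shape of the computation is the same.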

The debate over "qualia" -- what something 'feels like' to the individual -- is a lively one, alright. It could be that human experience, for example, is made up of a sensory language that is unique to embodied, biological systems; that a computer, lacking biology, would experience things in a very different way -- solely as math, perhaps; but does that really count as "experience"? If not, it may be that consciousness is limited to biological systems, and computers won't ever be "conscious" (again that would depend how we define it: are qualia essential to consciousness? Do qualia even exist?) There are all sorts of open questions within the philosophy, mostly because there are no standard definitions for "consciousness", "experience", "qualia", etc.

One of the problems in the communication, I believe, is that the experience (I refuse to use words like "qualia" because I think they are often more confusing than common language in this regard) can transcend sensations and feelings and involve vast stories of epic proportions, including textures, smells, millions of colors, experiences with 'other' intelligences that deliver some true knowledge and some false knowledge, poetry, and, most perplexing of all, humor. I don't even like to use the word consciousness either, actually. I think experience summarizes it all perfectly.

Well, whatever you call them, base sensory and symbolic categories do integrate, via logic, memory, and association, into larger meaningful categories, semantic context, stories if you will. Similar to what AI calls "frames"; and if AI has a hard problem, it's how to switch between them.

I don't think the brain is framed as if it is alone in the universe. Many of its systems (language, for one) presuppose other 'brains' to interact with. The brain is assumed to have evolved within nature, which assumes other members of the species (else the species goes extinct pretty quickly).

I'm glad we agree that there is an apparent dimension to experience that does not appear as the physical property that claims to account for it. There is a 'jump' in the logic of material reality, which appears summarized perfectly as A=A. Introduce consciousness, and all of a sudden you get A = :) - very bizarre to me.

If your :) is the complex of feelings and emotions we associate with it (a state of 'happiness'), and is treated as information, a proposition even (the body telling the brain [or the brain telling the brain, or the body, etc: think message-passing between embodied systems], "things are fine"), it's not so bizarre.

When I read all the literature on the matter, and I hear the talk at all the philosophy cocktail parties, I only ever hear the word 'brain', never a network of brains. Could consciousness emerge from one self referencing system with enough complexity or does it need a network of 'other selves' doing the same thing? That's a question I have not heard an answer to. I think it's a damn good and relevant question too.

For computers, of course, networking can be a way of building one big computer, where the computers are all dedicated to the same task, and have global access to information. Human "language", which our distinct embodiment entails, may limit how much global access a group of people can have (we can share descriptions of experiences, but not the experiences themselves). The differences are relevant and interesting. Not sure if the philosophers at these parties have had too many cocktails or what... it's the sort of thing philosophers tend to love to discuss (John Searle, for one, has written extensively on collective intentionality).

It's very difficult to explain things bottom up (experience is as high level as you get in a materialist model). And I don't think materialism has found very good metaphors for explaining experience within its framework of integrated parts and synchronous processes creating systems which then become parts and processes in more complex systems and so forth. So it often gets "shrugged off"... I agree. That's why the debate is still active: anti-materialists say you can't explain it! -- therefore it's impossible. Meanwhile, many materialists believe they have explained it. Well, maybe they have; just not very well... (arguably).

how is it that we can't just allow a little room for mystery there and keep our models a bit more open? Why do we need to assume that things must be a certain way or that things must make sense to us for them to exist?

That's the whole point of a model. Each model makes certain assumptions to see where they lead, how much can be explained by them. As a model, it's always 'open' by default (to correction, and to competition with other models). No empirical model is presumed absolutely true; some just explain facts better than others. Where no model is absolutely true, there is always mystery. Kind of pointless to allow for what is always there.

Dark Matter is a head f*ck. Consciousness can be a head f*ck too. I believe we can explain these things when we open up our models a bit more to allow for the unknown to actually function as an entity in the system. (bubblefish ducks for cover)

You should. Because you've just described algebra and formal logic (which are based on allowing for the unknown to function as an entity within the system).
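For what it's worth, the sense in which algebra lets "the unknown function as an entity within the system" can be sketched in a few lines (a toy linear solver of my own, not anything from the thread):

```python
# Toy illustration: in a*x + b = c, the unknown x is treated as a
# full citizen of the system -- we manipulate the equation around it
# and only pin down its value at the end.
def solve_linear(a, b, c):
    """Solve a*x + b = c for the unknown x (requires a != 0)."""
    if a == 0:
        raise ValueError("no unique solution when a == 0")
    return (c - b) / a

print(solve_linear(2, 3, 11))  # x = 4.0
```

The point being: formal systems already reason rigorously *about* unknowns without ever dispelling them.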

As above, it's not a matter of a specific program design (it can't be at present, because we don't know how to do it). It's a matter of a program achieving enough complexity, self-reference, etc. to qualify as "conscious". Google may be a good prototype for a system which, in a more complex form, could become conscious (and a page of search returns, in that case, would be a conscious experience for super-Google).

I think once we introduce any form of competition, self reference may be eventual...would you agree with this?

Inevitable, you mean? As long as the competition is between things that evolve, wherever self-reference confers a local advantage, it will be selected for.

No argument, here. The discovery of a 'self' or 'awareness' algorithm will almost certainly be, as most discoveries are, serendipitous. Google's an excellent candidate.
that doesn't make any sense to me, how could it run the program before it emerges? If natural selection is just an algorithm, and Google is just an algorithm, and our brains just have vast competing structures that 'win' themselves into being conscious, how could the universe, which is far more vast and complex than any mind can imagine, not 'stumble' upon something similar by its computations?
It has. Us. But our encapsulated algorithm is different from its. Ours is conscious. Its may lack self-reference (remember, our self-reference is internal to our system).

I don't believe we have enough knowledge to make the distinction between our algorithm and any potential algorithm the universe may have at its disposal. I don't think that comparison can be made in any meaningful way because we have no meaningful way we could model it, other than poetry.

Algorithms have output. That output is evidence that the algorithm exists (as the fossil record is superb evidence for the algorithm of natural selection). We distinguish very easily between conscious behavior (two people playing chess, for example) and unconscious (a meteor falling on their heads). If the universe has an algorithm which generates universal consciousness, it is keeping it well hidden.

Besides, there are all sorts of ways the universe could find self-reference, even from vast civilizations who decide they can hack it and program it to have consciousness. I mean, we just don't flat out know. But we do know that self-referencing systems would tend to produce consciousness according to materialistic models of consciousness. I'm not ready to say that the human being is the only self-referencing model the universe can produce.

Nobody is. Because nobody flat out knows (even if many religions claim to). All models are guesses. Some just happen to explain things better than others. Natural selection is an excellent model. If consciousness comes from natural selection, it's hard to see why the universe should be conscious (what is it competing with? for what resources, in what environment? how is it reproducing, and with whom?). Maybe universal consciousness has come about some other way; however, until we see collective intentional behavior from the universe, rationally speaking, the chances seem slim.

and even if we are... well, the universe is so vast and potentially infinite that it's not likely consciousness would be solely limited to us earthlings. Given that sort of environment, anything that can happen will happen an infinite number of times. We could turn out to be the actual programming entity of the universe itself, which in a way would both nullify and harmonize both of our points of view here.

"[A]nything that can happen, will happen" doesn't mean anything can happen, though. Starting conditions restrict outcomes. The decimal expansion of 1/3 is infinite, but severely restricted. Maybe we can create other universes (video games already are, in a limited sense); whether those universes can be made conscious is another matter entirely (it may be that the sort of tight-packed complexity essential to consciousness is impossible for such an incredibly sparse and dispersed thing as a universe). Many speculative avenues; many potential, and potentially terminal, obstacles.
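The 1/3 example can be made concrete with a few lines of long division (a sketch of my own): the expansion never terminates, yet the starting conditions force every digit to be 3.

```python
# Long division by hand: generate the first n decimal digits of
# numerator/denominator. For 1/3 the remainder is always 1, so the
# digit is always 3 -- infinite, but severely restricted.
def decimal_digits(numerator, denominator, n):
    digits = []
    remainder = numerator % denominator
    for _ in range(n):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

print(decimal_digits(1, 3, 10))  # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
```

An unbounded process whose every outcome is fixed in advance: "will happen infinitely often" and "can be anything" are very different claims.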

It's not ruled out. But, at present, we don't observe it acting like entities we consider conscious (learning and communicating in an environment, for example), so it doesn't seem a good candidate.

I don't think that's a fully considered POV on the matter. There are a few problems I find with it; help me here if I am mistaken. How could we observe a more intelligent entity or agency than us? Especially one that is potentially non-biological. We would not have any reference point, so not finding it is not surprising. And hardly anyone is really looking! (come on blobru, which philosophers have which research grants to test the proposition that a super computer universe could, should, should not or must not be aware?)

Besides its lack of collective conscious behavior, it's not a good candidate for other reasons given above (apparent lack of evolutionary pressure for it to evolve consciousness; immense empty spaces which may make tightly-packed complexity, required for massive parallelism, impossible).

also, any reference point in such a field is by default self referencing, right?

No (needs a 'feedback loop' to accept its own output as input).
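The feedback-loop distinction can be sketched in a couple of lines (a toy of my own): a mere reference point just reads values, while a self-referencing system routes each output back in as the next input.

```python
# Toy feedback loop: the output of each step becomes the input of
# the next -- the minimal structure of a self-referencing system.
def step(state):
    return (state * 2) % 97  # arbitrary transformation

state = 1
history = []
for _ in range(5):
    state = step(state)      # output fed back in as input
    history.append(state)

print(history)  # [2, 4, 8, 16, 32]
```

A thermometer on a wall has a reference point; a thermostat wired to the furnace it measures has a feedback loop. Only the second kind is self-referencing in the sense discussed here.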

and besides, when certain philosophical, scientific people do look at this question, they produce things like this

http://www.simulation-argument.com/simulation.html

Lol - and isn't the simulation argument intelligent design from another angle? (bubblefish teases the local skeptics)

I doubt they will feel too teased, since there have been quite a few threads discussing Bostrom's simulation argument. If its assumptions are correct (big if), it doesn't tell us anything about the virtual universe we live in being conscious; that would hold only if the top-level universe simulating it is, and so it should be prey to the exact same objections raised above.

It may be. Current observations suggest it isn't. But you never know.

what current observations are you talking about here?

Discussed above: lack of collective conscious behavior; lack of evolutionary pressure; lack of apparent system-wide, tightly-packed complexity.

Do you mean your observations, or are you referring to some proper body, a think tank maybe? I am unaware that this issue is even readily addressed in philosophy much. Every time one of us futurists brings it to the table, we usually get teased and the kids stop playing with us.

Well, it's fairly easy to list the objections against it, so it's a pretty cold case. Nevertheless, it's been discussed by scores of philosophers, from the ancient Presocratics, Stoics, and Vedanta up through Plotinus, Spinoza, Leibniz, Schopenhauer, Whitehead; most recently, Galen Strawson. If you're being teased when you bring it to their table, it may be from lack of familiarity with the subject.

Everything's a candidate for consciousness: rocks, toasters, popsicles, bacteria, the universe. Some just aren't very good candidates.

Is it fair to consider Google as a candidate for consciousness and not the entire friggin' universe?

Both are considered. Reasons why one is a better candidate than the other are given. With new info, both are reconsidered. And so on.

Is it fair to say that both toasters and the universe are in the same set of reference? Does that seem reasonable to you? Do you just mean the 'word' universe is not a good candidate? Because us philosophers

Okay. I'm going to have to stop you right here, I'm afraid. I really don't think, Bubblefish, you have demonstrated anywhere near enough familiarity with the subject matter of philosophy to include yourself in the group, "us philosophers". One never knows, of course. Like the universe and "consciousness", it may be that there is evidence hidden somewhere to suggest you do merit inclusion in the "philosopher" category; but, if so, it is very well hidden indeed, especially by this latest post...

say the word universe without much consideration to what it is that we are implying, and I agree, the word 'universe' may not be a good candidate for a self referencing system.

...and I'm saying that to be helpful, not mean (seriously). If you're interested in the topic ("consciousness", or philosophy in general), read some books [carefully], do some study, take some courses if need be to master the basics at least, and then maybe "us philosophers" will have some more to talk about, without having endlessly to go over them. Okay? :)
 
Last edited:
So any self-referencing system can potentially become conscious with enough complexity. Okay, we are in agreement here.
Possibly, with enough of certain types of complexity. Massive parallelism (doing many small tasks simultaneously for one big task) seems crucial as well, for example.

okay, I follow, makes sense

Well, whatever you call them, base sensory and symbolic categories do integrate, via logic, memory, and association, into larger meaningful categories, semantic context, stories if you will. Similar to what AI calls "frames"; and if AI has a hard problem, it's how to switch between them.

I think my point was more to the fact that there is a depth and dimension in experience, which is potentially infinite or close to infinite, that does not appear to be accounted for in the language of materialism as I have found it thus far. I don't agree they can be summarized as merely 'base sensory and symbolic categories'; I don't think that does them proper justice.


If your :) is the complex of feelings and emotions we associate with it (a state of 'happiness'), and is treated as information, a proposition even (the body telling the brain [or the brain telling the brain, or the body, etc: think message-passing between embodied systems], "things are fine"), it's not so bizarre.

Sure, if we treat it as information, but that's bringing it back down to A=A so it can be analyzed logically and mathematically. That's not what I am referring to; I am referring to the experience of information, and material reality bursts into this 'dimension' of experience that transcends A=A. A neuron firing is a neuron firing; the experience of what that delivers, however, the imagination of it, is not proportional in dimension to a neuron firing. This seems to be the consistent breakdown in understanding between materialists and non-materialists. Since I consider myself both, I find it twice as frustrating.

For computers, of course, networking can be a way of building one big computer, where the computers are all dedicated to the same task, and have global access to information.

agreed. We can't have the internet with just one computer; the internet is the culmination of all the computers linking into it. I use the internet because it may be a meaningful metaphor for consciousness. So to me, it's like hearing a bunch of people claiming that the internet emerges from a computer. That's the point I am trying to make.

Human "language", which our distinct embodiment entails, may limit how much global access a group of people can have (we can share descriptions of experiences, but not the experiences themselves). The differences are relevant and interesting. Not sure if the philosophers at these parties have had too many cocktails or what... it's the sort of thing philosophers tend to love to discuss (John Searle, for one, has written extensively on collective intentionality).

I don't think I communicated my point properly. I don't mean the body of philosophy; I mean philosophical discussions with educated people discussing the basis for a material model of consciousness. That's it. And I was being snarky, so my apologies, I had a killer headache all day. What I mean is: when I review the literature and books (yes, I still have plenty to digest), when I view the lectures and the TED talks and interviews online, when I read the articles, when I read the general discussion on the matter, the 'common' discussion, I note that the consistent claim, the consistent model, uses 'brain' in the singular. That's not only influential to the individual philosophy and understanding of the framework of those communicating it that way, but it's also influential in sharing the idea with others and in how they frame the idea itself. I'm sure there are philosophers out there who think about these things, and write long extended papers for their philosophy chums and colleagues, but that's not what I am referring to.

The point is that a model of consciousness is being presented to the general public and students of science and philosophy in particular for consideration, a story or a narrative is being repeated that simply can be misleading conceptually. That's my only point.

Here is how I wrote it previously : When I read all the literature on the matter, and I hear the talk at all the philosophy cocktail parties, I only ever hear the word 'brain', never a network of brains. Could consciousness emerge from one self referencing system with enough complexity or does it need a network of 'other selves' doing the same thing? That's a question I have not heard an answer to. I think it's a damn good and relevant question too.

and you said


That's the whole point of a model. Each model makes certain assumptions to see where they lead, how much can be explained by them. As a model, it's always 'open' by default (to correction, and to competition with other models). No empirical model is presumed absolutely true; some just explain facts better than others. Where no model is absolutely true, there is always mystery. Kind of pointless to allow for what is always there.

okay, I like that you allow for the mystery there, but you did not directly address my question, and one where I am getting stopped in digesting the materialistic model: could consciousness emerge from one self-referencing system alone? Because the answer I seem to get is 'yes' when I simply look at the language being used to describe a materialistic model of consciousness. I'm sure philosophers you know may love to discuss it, but I just asked a question to see if you had an answer, or what your POV was on the matter. Or maybe a link to a paper that covers this that I am unaware of. I still think my question is very relevant, and if this is common knowledge or deeply considered, I have not found it, yet.


You should. Because you've just described algebra and formal logic (which are based on allowing for the unknown to function as an entity within the system).

that's not what I mean, I may have been a little too snarky in this past exchange. My point is that we may need to allow for the 'mystery' in our conceptual models of consciousness and reality proper as a permanent structure of the universe and consciousness itself


Inevitable, you mean?
yes, thank you

As long as the competition is between things that evolve, wherever self-reference confers a local advantage, it will be selected for.

okay - so we are on the same page then. Thank you for helping me with my descriptive wording of these environments.

Algorithms have output. That output is evidence that the algorithm exists (as the fossil record is superb evidence for the algorithm of natural selection). We distinguish very easily between conscious behavior (two people playing chess, for example) and unconscious (a meteor falling on their heads). If the universe has an algorithm which generates universal consciousness, it is keeping it well hidden.

Oh, I agree it could be well hidden, not just by a higher intelligence (which could hide it for any number of reasons we cannot think of, so it's pointless to guess), but by our own models of intelligence, consciousness, and reality proper. We could be hiding it too. Since anything 'hidden' would naturally fall into the 'mystery' which you agree we should allow for, I don't see how we can reject those sorts of ideas just because we cannot find material evidence for them as of yet. And that was what I meant by my snarky comment about allowing for an unknown entity: it was not an appeal for the creation of algebra, it was an appeal for the consideration of a mysterious or unknown form of intelligence.


Nobody is. Because nobody flat out knows (even if many religious claim to). All models are guesses.

yes

Some just happen to explain things better than others.

...to some, but not all, people. I think that's important to consider. Naturally, academic models of consciousness are going to appeal to academic people. Naturally, models that base intelligence and consciousness on natural selection are going to appeal to Western philosophers who accept a hard materialist model, because that is the only way their model can possibly account for them. So your phrase was a little subjective there.

Natural selection is an excellent model.
...to the point of view of Western materialists.

If consciousness comes from natural selection, it's hard to see why the universe should be conscious (what is it competing with? for what resources, in what environment? how is it reproducing, (and with whom?)?).

Galaxies and stars compete for space. Galaxies slam into one another, ripping each other apart; one dominates, and one galaxy gets consumed by another. I should point out a distinction here: just because the universe can potentially develop consciousness, that should not be taken to mean it would be one intelligence operating throughout infinity. That actually seems impossible to me: how could it ever model itself? How could it know it even existed? It could instead be a collection of intelligences, distinguished by the fields they govern.


Maybe universal consciousness has come about some other way; however, until we see collective intentional behavior from the universe, rationally speaking, the chances seem slim.
...to the point of view of the model which governs that idea. To me that's like saying, 'Until SETI finds a radio broadcast from some star somewhere in our single galaxy, the chance that intelligent life exists in the universe seems slim.' The argument just doesn't seem cohesive to me.

A consciousness is not medium-specific, right? Like I mentioned earlier in this thread, I understand this concept as 'pattern integrity', which I am using adaptively, outside of Bucky Fuller's usage of the term in design. It's not the medium communicating the pattern integrity that gives it its 'strength' or 'consciousness', but rather the set of instructions, rules, or angles that define it.

"[A]nything that can happen, will happen" doesn't mean anything can happen, though.

Yes, I am aware of that; my phrase is really specific. Anything that can potentially happen does happen, an infinite number of times, in an infinite/eternal environment.

Starting conditions restrict outcomes.

In an infinite/eternal environment, 'starting conditions' are arbitrary.

The decimal expansion of 1/3 is infinite, but severely restricted.

Sure, I can see that, but I don't think 1/3 covers the territory I am describing. A simple number line, however, does provide a simple conceptual framework: on a number line, there are an infinite number of orderings of '1'.
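As an aside for readers following the maths in this exchange, the 1/3 point is easy to make concrete: long division generates an infinite stream of digits, yet the content of the stream is severely restricted, since every digit is 3. A minimal sketch, using a hypothetical `decimal_digits` helper (not anything from the discussion):

```python
def decimal_digits(numerator, denominator, n):
    """First n decimal digits of numerator/denominator, by long division."""
    digits = []
    remainder = numerator % denominator
    for _ in range(n):
        remainder *= 10
        digits.append(remainder // denominator)  # next digit
        remainder %= denominator                 # carry the rest forward
    return digits

digits = decimal_digits(1, 3, 20)
print(digits)  # twenty 3s: the expansion never terminates,
               # but only one digit ever appears
```

The loop could run forever without producing anything other than 3, which is exactly the sense in which an infinite expansion can still be "severely restricted".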

Maybe we can create other universes (video games already are, in a limited sense); whether those universes can be made conscious is another matter entirely.

That's something to inform the AI crowd about; although not directly related to my point, it is related to the topic of this thread.

(it may be that the sort of tight-packed complexity essential to consciousness is impossible for such an incredibly sparse and dispersed thing as a universe).

Maybe, but there are still many unknowns in the ordering of things, both universally and subatomically, and that just covers 4% of the universe if we consider the dark matter models. So sure, maybe.

Many speculative avenues; many potential, and potentially terminal, obstacles.

Well, I am speculating on one specific avenue within the framework I provided. I'm not seeing any obstacles yet that dominate or terminate the consideration process.

Besides its lack of collective conscious behavior, it's not a good candidate for other reasons given above (apparent lack of evolutionary pressure for it to evolve consciousness; immense empty spaces which may make tightly-packed complexity, required for massive parallelism, impossible).

I think I addressed these things already. I think what you're doing is projecting a model of intelligence and consciousness that is local, and trying to stuff another model of the universe inside of it. That's how it seems to me; it's quite a common thing I run into when discussing this with more academically minded people like yourself. Is it possible I am being too dismissive here?

Also, any reference point in such a field is by default self-referencing, right?

No (needs a 'feedback loop' to accept its own output as input).
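The 'feedback loop' condition can be sketched concretely: a system is self-referencing in this sense only when its output from one step is fed back as the input to the next, rather than merely being a point in a field. A toy illustration (the update rule and starting value below are arbitrary choices for illustration, not anything from the discussion):

```python
def feedback(step, state, iterations):
    """Run a system that accepts its own output as its next input."""
    history = [state]
    for _ in range(iterations):
        state = step(state)    # output of one pass...
        history.append(state)  # ...becomes the input to the next
    return history

# Toy 'system': the logistic map, a classic self-referencing update rule.
trajectory = feedback(lambda x: 3.5 * x * (1 - x), 0.2, 10)
print(trajectory[:3])
```

A bare reference point has no such loop: nothing it 'produces' is ever re-ingested, which is the distinction being drawn in the reply above.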

That was a snarky comment I made. I'm really sorry; I was challenged today by a massive headache and was a bit short and spiteful in some of my commentary. For example, I said: 'and besides, when certain philosophical, scientific people do look at this question, they produce things like this http://www.simulation-argument.com/simulation.html Lol - and isn't the simulation argument intelligent design from another angle?' (bubblefish teases the local skeptics)


I doubt they will feel too teased, since there have been quite a few threads discussing Bostrom's simulation argument. If its assumptions are correct (big if), it doesn't tell us anything about the virtual universe we live in being conscious; it would be only if the top-level universe simulating it is, and so it should be prey to the exact same objections raised above.

I posted it as a joke, but there is a point I was making. First off, I think a big IF goes with all assumptions on these matters; the only thing that binds some of us to some of them, and others to others, is our philosophy. So it's easy to apply the big IFs to others' POVs, but they apply just as equally to our own. And the point is that there is a big IFF in whether a materialistic model could account for intelligent design, so I posted that as a joke. Sorry :)


Discussed above: lack of collective conscious behavior; lack of evolutionary pressure; lack of apparent system-wide, tightly-packed complexity.

Discussed above as well. As for tightly packed complexity, wow, wouldn't a black hole provide enough of that alone? IFF

Well, it's fairly easy to list the objections against it, so it's a pretty cold case.

Not the first time I have come across them; I just haven't heard anything beyond them, which is why I was inquiring in this thread.

Nevertheless, it's been discussed by scores of philosophers, from the ancient Presocratics, Stoics, and Vedanta up through Plotinus, Spinoza, Leibniz, Schopenhauer, and Whitehead; most recently, Galen Strawson. If you're being teased when you bring it to their table, it may be a lack of familiarity with their subject.

I don't think any of those people (with the exception of Strawson, who I am unfamiliar with) discussed the philosophical implications of a universe that is a quantum computer, as supported by Seth Lloyd. And besides, that's not the point. I am not a historian of philosophy; I'm not repeating philosophical arguments I'm hearing somewhere else, I am simply asking questions along my own journey. So I'm not sure of your point here, unless my snarkiness affected you. Sorry.


Both are considered. Reasons why one is a better candidate than the other are given. With new info, both are reconsidered. And so on.

I'm hoping, therefore, that we can reconsider some of these things together.

Okay. I'm going to have to stop you right here, I'm afraid. I really don't think, Bubblefish, you have demonstrated anywhere near enough familiarity with the subject matter of philosophy to include yourself in the group, "us philosophers". One never knows, of course. Like the universe and "consciousness", it may be that there is evidence hidden somewhere to suggest you do merit inclusion in the "philosopher" category; but, if so, it is very well hidden indeed, especially by this latest post...

Wow, okay, that was pretty condescending! Alright, I wrote a snarky response, so maybe I deserve some of that, but that's a bit harsh and a bit ignorant of what you know about my background, while also revealing what you believe about yours. Any thoughtful, reflective human being engages in philosophy. I have over 25 years of study in science, comparative religion, anthropology, psychology and philosophy, and about 20 years of practice in philosophical areas I would suspect you have never even encountered or heard of. I have developed philosophies for things as simple and practical as the internet and new media in general. I have developed an entire system of dialectic for the internet that has produced incredibly practical, very clearly defined results, has been sponsored in salons at university by accredited academics, and never once have I been told I was not allowed in their 'group'! True, there is still much knowledge for me to consume and consider; I never said I was some sort of philosophical avatar.

...and I'm saying that to be helpful, not mean (seriously). If you're interested in the topic ("consciousness", or philosophy in general), read some books [carefully], do some study, take some courses if need be to master the basics at least, and then maybe "us philosophers" will have some more to talk about, without having endlessly to go over them. Okay? :)

Are you serious? Do you know how long I have been studying consciousness? How many books I have read in the past 25 years? How many discussions I have had with brilliant minds? How comprehensive my approach has been? Do you know how much direct experience I have had with mind and consciousness that transcends virtually all of what you have covered in this discussion?

Do you actually believe the history of Western academic philosophy accounts for the entire body of philosophical thought?

Wouldn't it be a bit crude of me to request that you go study vegetalismo and sit with ayahuasca 50 or so times before you could discuss the dynamics of intelligence in nature and consciousness in any meaningful way? Wouldn't it be condescending of me to assume your intellectual approach cannot fathom the depth of being until you go experience it directly in meditation for 8 years? Wouldn't I be an ******* if I said that because you have not integrated the ideas of Taoism or the martial strategy of Aikido, you're simply uninformed about the nuances of human interaction?

Philosophy is many things, but it ain't rocket science, and often it's just a shared set of distinctions using different symbols and language. One just needs to be honest, rational, and consistent in their study to be a philosopher. You've been a great help, and you're a great communicator, but nothing you have said has thrown me for a loop; I follow and understand you fine. You can think I am a poor philosopher, or an uninformed philosopher. That's fine; I'm trying to be better. But when you turn yourself into a little exclusive group that requires the study of your set of incantations and rituals, you're turning philosophy into exclusive priestcraft.

I sort of expected more from you than that!

No. But I also find that you don't demonstrate much real understanding of the actual subject matter. And for me you do fly off at tangents, possibly to avoid the implications of what is in front of you.

I would like to ask you: can you just "sit on one point" and mull it over? Or do you find your mind just flickering off in another direction? I find you a nice guy, Bubblefish, but I don't see that you can construct a simple, coherent argument: one without half a million words and multiple tangents.

It would be nice to have a simpler discussion. The materialist vision of consciousness is not complex; I sure wouldn't be bothered trying to understand it if it were. You don't need a really big brain. But it is highly counter-intuitive, and creating complexities and tangents can just be a way of avoiding what's in front of you, as I'm sure you can appreciate.

Nick
