
Has consciousness been fully explained?

Thanks. That's a different perspective from ones I've heard before. Let me try to restate it: You feel that the 'illusion of consciousness' is the illusion that we can control ourselves via our conscious thoughts. You don't feel that is actually what happens. Have I understood you correctly?

Why do you think that consciousness is not the ‘executive control system’?
Because experimental data indicate otherwise: the brain initiates actions before we consciously decide on them. If consciousness were the executive control system, the conscious decision would have to come first. I think you can see the problem.

The reality is, these decisions are made unconsciously, and then reflected in our consciousness.

Well, we can always compare the behavior of conscious individuals with unconscious individuals. I think that gives us some indication of the influence of consciousness.
That certainly gives you an indication of the trouble you get into if you don't define your terms consistently. An unconscious person is not just someone with their self-awareness turned off - a whole lot of sensory and motor processing, that we would consider unconscious, is also turned off.
 
The reality is, these decisions are made unconsciously, and then reflected in our consciousness.


Is it time yet to replace the word consciousness with the phrase 'evolving playground of competing behavioral impulses monitored against contextual appropriateness'?
 
If that's the case, wouldn't it be more appropriate to say that people's -concept- of consciousness is the illusion rather than saying 'consciousness is an illusion'?

If you like - though generally, the word 'consciousness' is taken to mean people's concept of consciousness, just as a 'table' is generally taken to mean people's concept of a table. If you'd like it to mean something else, please let us know.

While the meaning you attribute to 'consciousness is an illusion' may be a more sensible interpretation, the phrase itself is asinine.

OK, you're welcome to think of it as a more sensible interpretation of an asinine phrase.

Of course I'm welcome to. I think wut I wawnt! :D

The word random has two main connotations: A) equiprobability of occurrence or statistical distribution and B) occurring without meaning or purpose; neither sense does an adequate job of conveying what is meant by 'free will'. 'Unpredictable' seems to be the more apt label.

So can we say purpose (as well as unpredictability) is a feature or requirement of free will?

Pretty much. The behavior of an organism is certainly not random in the quantitative sense, since statistical patterns can be observed from the 'outside'. It's also not random in the sense of not having any meaning or purpose, because the critter in question 'internally' has motivation(s) and intent.

From the POV of an observer outside of the subjective space of a given free will agent, their behavior, while probably exhibiting some statistical pattern(s) over time, will show a level of indeterminacy. One can make tentative guesses at what the agent will do based upon patterns of previous behavior or some psychological understanding of its motivations, but one cannot predict that behavior to an arbitrary degree of accuracy. This is probably because there aren't necessarily any linear causal correlations between subjective valuations/interpretations and objective actions; in much the same way that there is no set formula for what meaning(s) will be attributed to some symbol(s). The larger the subjective decision space of the agent in question, the harder it will be to provide accurate predictions concerning it over a given period of time.

Consider a couple of scenarios. A rock-slide kills a group of campers on a mountain. In scenario 1) the avalanche is simply triggered by the structural support of the materials giving way to gravity. In scenario 2) a person triggers the slide with the intent to kill the campers below. In the first scenario there is no conscious will being exerted to manipulate the course of events toward an intended goal, while in the second there is.

OK, so is it fair to say that the key feature of free will is purpose and/or intent? And is it also fair to say that purpose and intent are descriptions of goal-seeking behaviour?

If so, can we say that goal-seeking behaviour in general indicates purpose and/or intent?

If so, does goal-seeking behaviour that is unpredictable indicate free-will in action?

Pretty much. But, like I mentioned earlier, once one starts examining a system in those terms the perspective shifts from the "whats" & "hows" to the "whos" & "whys". In the case of rock-slide scenario #2 a relevant "why" would be murderous intent, and the "who" would be the individual from whom the intent originated. The relevant factors involved are not purely objective, which limits the degree to which one can employ deductive methods effectively.
 
I've gotten my familiarity with p-zombies from talks and philosophical discussions like this one (and by watching a couple of Chalmers's lectures); my comments about psychopaths are drawn from those I know personally, and from reading the literature on character disorders. The wording, paraphrasing, parallels, and interpretation are all my own.

[Come to think of it, I suppose the drawing of broad, metaphorically based connections to reach novel conclusions could be considered evidence of consciousness]
You've apparently misunderstood Chalmers: he states that his zombies behave just like us. Otherwise, what would be the point of his thought experiment? (Though I don't blame you-- I can't follow his reasoning on the hard problem.)

From what I've gathered, the p-zombie thought experiment is supposed to demonstrate that one cannot discern subjective experiences from outward physical behavior. But, like I've already mentioned, I do not think such entities are possible in practice and I don't find the thought experiment particularly useful. Whenever p-zombies crop up in these discussions it's always someone other than myself who brings them up.

For a psychopath I think you misread it: they are said to lack a conscience (desire to not hurt others, empathy), not consciousness.

Didn't say they lack consciousness. I said that their consciousness is "dim". The range, intensity, quality and depth of their experiences (emotional, and in some cases sensory) tend to be much lower than that of other human beings. In some instances their level of emotional arousal is so low that they are constantly fighting boredom; and the way the majority of them alleviate this boredom is at the expense of others. In other words, they tend to be psychological parasites and predators.

It's quite easy to teach a computer to reach a novel conclusion too, if you're not too concerned about its correctness...
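As an aside, this claim is easy to demonstrate. The following is a purely hypothetical toy (not anything a poster linked to): a bigram Markov generator that produces 'novel conclusions' with total indifference to their truth.

```python
import random

# A toy bigram Markov text generator: it recombines fragments of its input
# into sentences nobody wrote, with zero concern for their correctness.
corpus = ("consciousness is an illusion the illusion is a table "
          "the table is conscious").split()

# Map each word to the words observed to follow it.
chain = {}
for a, b in zip(corpus, corpus[1:]):
    chain.setdefault(a, []).append(b)

def novel_conclusion(start, length=8, seed=0):
    """Walk the chain from `start`, producing a novel (if senseless) string."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = chain.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(novel_conclusion("consciousness"))
```

Every output is built only from word pairs the 'training text' contained, yet the sentences themselves are new. Novelty is cheap; correctness is not.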

Reminds me of how, when a character-disordered 'friend' of mine attempts to explain things outside of its emotional depth, all it succeeds in doing is spewing nonsensical gobbledygook. For the most part, it can act well enough to fool trained professionals, provided the level of personal interaction is relatively superficial and brief. It has developed a number of tricks over the years to simulate human emotion and is very gifted at picking up languages -- which is an added asset. But, by and large, it has little to no depth of understanding concerning things outside of its narrow subjective range.

So, like I was saying, if an entity in question has a very limited subjective existence they cannot -generate- coherent and meaningful statements about things beyond it because they don't experience them. They can only attempt to emulate based off of external behaviors and cues. If they have no subjective life at ... Well, I think you get the picture.

My point is that if a simple chat bot, like the one Malerin posted a link to, can answer queries in a syntactically appropriate manner, a hypothetical p-zombie could accomplish the same and more. Just to be clear, I do not think that p-zombies (i.e. perfectly indistinguishable simulacra of conscious individuals) are possible in practice. The ability to pull off such a deception would depend upon the level of sophistication of its programming [it's practically impossible to 'perfectly' simulate any physical system] and the level of discernment possessed by the conscious individual assessing it [sooner or later you'll come across someone sharp enough to recognize that something's fishy].

This is the Turing test, for which I can make a computer do a perfect emulation of a large number of humans: it would simply not respond at all. There's no way you could distinguish it from a human who couldn't type (for whatever reason).

The Turing test is good for judging acting ability, but not good for deciding intelligence or especially consciousness. It would be like deciding that a digging tool isn't usable unless it feels warm and hairy.

That's a key point. The Turing test is only a way to assess how intelligent an agent appears to be; it says nothing about discerning consciousness. The pertinent questions in this regard would be A) does this entity have an 'interior' subjective existence, and B) to what extent and of what quality?

Data can easily be conditioned into and stored in a neural net, or some other IP system, without there necessarily being any subjective experience of said data. I don't think that tests of recall or rote would be effective tests of consciousness. What one should look for is the ability to self-generate novel behavior and sense making, without reliance on external prompting or explicit programming.

Again, what you appear to be describing is general problem-solving ability: the true Hard Problem.

I'm saying that the subjectivity of consciousness is a crucial component in human and animal problem solving abilities. Heck -- to even speak of there being a 'problem' outside the context of -subjective- valuation is absurd.

At this point, our computers cannot perform physical self-maintenance without guidance from a conscious human, nor can they metabolize. All organisms contain programmed feedback mechanisms, to be sure, but they also show a degree of proactive behavior not explicitly programmed into them -- some to greater extents than others. It's this proactivity that sets living/conscious systems apart from present day technological systems.

While, on the one hand, organisms merely utilize their physiological and behavioral programs as organizational anchors, machines have no operational extent or existence beyond their programming. Their maintenance and directionality must constantly be propped up by human conscious efforts -- they lack their own inherent consciousness. Our machines do not thermodynamically push themselves "up hill"; they must continually -be pushed-. This is true even when they are provided with a continual source of energy.
If we viewed computers as organisms, we'd say they've managed to massively increase their numbers over the last 60 years by exhibiting particular behaviors, due to a symbiotic relationship with humans. If you consider symbiotic relationships cheating, consider that most biological life-forms would die without other types around.

The point is that our machines are merely extensions of our own biology; they have no life of their own yet. My proposal is that when we figure out how to create synthetic life we'd have made a necessary [possibly sufficient] step in the direction of creating synthetic consciousness.

I really don't see how you've decided that cells thermodynamically push themselves: what happens when you block sunlight from the Earth?

The organisms that manage to survive will be the ones that adapt to find alternate energy sources. For as long as an organism is alive and able, it will seek out and utilize available energy to maintain itself [ETA: unless it is suicidal, in which case it will seek out the means to end its life]. Inanimate systems do not do this without being manipulated by a conscious agent. This holds true regardless of the amount of available energy in the surrounding environment.

As I've already mentioned, our machines cannot replicate the self-maintenance capabilities of organisms. They rely entirely on human consciousness to create and maintain them. It is this innate capacity that is the necessary -- possibly sufficient -- objectively discernible requisite of a conscious system.
 
As I understand it, naivety in the observer is not relevant - a p-zombie simply responds as if it is conscious and aware, has qualia, etc., although, by definition, it doesn't. The point is to explore the practical differences between p-zombies and humans in respect of these concepts.

The basis of my argument necessarily implies that p-zombies are impossible in practice. The closest thing to a p-zombie that can be created IRL would be a lifelike puppet under the control of some conscious agent(s), or an artificial construct sufficient to fool a naive individual.

dlorde said:
Of course, they're subject to thermodynamic laws -- that's not the point. What I'm saying is that living systems -autonomously push against- thermodynamic equilibrium in a self-sustaining manner rather than run down as a desktop or robot would. This thermodynamic bootstrapping is something living things perform at every level of their organization. It's this property that I'm saying is a necessary requisite for a consciousness-compatible system.
But why is it a necessary requisite? We have designed systems that can seek out energy sources and 'recharge' themselves.
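For concreteness (an invented toy, not a reference to any actual robot): the 'seek out energy and recharge' behaviour dlorde mentions reduces to a trivial feedback loop.

```python
# A minimal self-recharging agent: a battery level, a low-energy threshold,
# and one rule that redirects behaviour toward the charger when energy runs low.
def run_agent(steps, battery=50, low=20, full=100, work_cost=10):
    log = []
    for _ in range(steps):
        if battery <= low:
            mode = "seek_charger"   # drop everything and refuel
            battery = full          # assume the charger is found and used
        else:
            mode = "work"
            battery -= work_cost    # working costs energy
        log.append((mode, battery))
    return log

log = run_agent(12)
```

Whether a loop like this counts as 'autonomous self-sustenance', or is disqualified because it was designed, is precisely the disagreement in this thread.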

Key phrase being "we have designed"...

dlorde said:
We have not yet designed systems that can fully repair themselves, but this is just an engineering problem that hasn't been addressed because, in practice, it is unnecessary (and would be expensive). I don't see the necessary connection between consciousness and being physically self-sustaining. It just seems like defining consciousness as exclusive to biological life. A person can be conscious and aware while their body is sustained by artificial means.

...And without those -artificially- constructed means that individual will not survive. The exceptions you're trying to provide simply prove the original point ;)

Your body depends upon the imposition of your consciousness to maintain it. Once that consciousness is entirely gone, it's called a rotting corpse and it runs down thermodynamically like any other lump of organic materials.

:confused: That is simply not the case - in coma or PVS, the individual may not be conscious or aware, yet can live on for years in that condition. We all eventually 'run down' and die, but that has nothing per se to do with consciousness.

How long would those individuals survive in that condition without help from individuals who are aware and cognizant of their situation? And would you not agree that the body of that individual must be -alive- and metabolizing to even be able to support their consciousness should they ever come out of their coma?
 
... What one should look for is the ability to self-generate novel behavior and sense making, without reliance on external prompting or explicit programming.
Are you talking about creativity?

Yep.

... All organisms contain programmed feedback mechanisms, to be sure, but they also show a degree of proactive behavior not explicitly programmed into them -- some to greater extents than others. It's this proactivity that sets living/conscious systems apart from present day technological systems.
Would you include viruses? bacteria?

Viruses? Probably not. Bacteria? Maybe so.
 
What worldview is that, and in what way and why wouldn't it work?

We know that psi doesn't exist; that's well-established, so I'm not sure why you're bringing this up.

In the same way that your 'nonsensical' and 'incoherent' emotions don't exist? :p
 
Yep.

Viruses? Probably not. Bacteria? Maybe so.

Deciding what is alive and what is not is a tricky question when you start looking closely at the borderline. Could you articulate why you consider bacteria probably alive and viruses probably not? I'd be interested to hear what details you consider important in making a decision.

I consider bacteria definitely alive. Viruses? I think there are good arguments both ways. I do think that computer viruses have as legitimate a claim to being alive as some biological viruses, but I'm not sure whether or not viruses should be classified as living organisms.
 
Deciding what is alive and what is not is a tricky question when you start looking closely at the borderline. Could you articulate why you consider bacteria probably alive and viruses probably not? I'd be interested to hear what details you consider important in making a decision.

I consider bacteria definitely alive. Viruses? I think there are good arguments both ways. I do think that computer viruses have as legitimate a claim to being alive as some biological viruses, but I'm not sure whether or not viruses should be classified as living organisms.

I'm pretty much in the same boat with you on this one. I wouldn't consider viruses to be alive for reasons similar to why I do not consider our machines to be alive. They are not autonomous, and they require the activity of self-sustaining organisms to persist.

Viruses behave more like complex signals than actual organisms. The interesting thing is that many of them are translated by cells to mean "keep repeating this even if it kills you..."
 
... You feel that the 'illusion of consciousness' is the illusion that we can control ourselves via our conscious thoughts. You don't feel that is actually what happens. Have I understood you correctly?
Pretty much, yes - although 'conscious thoughts' is not how I'd put it. Our conscious awareness is not as volitional as it seems; it is more a reflective extension or enhancement of the continuous sense of identity provided by long-term memory.

Why do you think that consciousness is not the ‘executive control system’?
As Pixy suggests, there is evidence that points that way - Libet's experiments, for example (I've seen other supporting evidence - don't have links at present).

As a thought experiment, I'd consider going further than the hypothesis of the 'conscious veto' and take the step of (conceptually) separating the process of conscious awareness from all volitional influence, either excitatory or suppressive. Bear with me here - so in Libet's example of vetoing a subconscious urge, we might consider the subconscious processes building to volitional action based on the strong weighting of some initial evaluation, while a more nuanced evaluation (i.e. modelling the potential outcomes of this action) continues 'in background'. Consciousness is notified of the impending volitional action, and shortly after, the flood of new activity as the background evaluation produces a strongly weighted negative (conflicting) outcome which triggers suppression of the impending action. The retrospective narrative would be along the lines of "I was about to do X, when I suddenly realised it would cause Y, and I stopped myself just in time".
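The mechanism dlorde describes, a fast coarse evaluation that initiates action while a slower background evaluation can still suppress it, can be sketched as a toy two-process model (all names and numbers here are invented for illustration):

```python
# Toy model of the 'conscious veto': a fast, coarse evaluation triggers an
# action intent; a slower, more nuanced evaluation runs 'in background' and
# may suppress the action before it executes. Consciousness here is only
# notified -- it neither initiates nor vetoes anything itself.
def decide(coarse_score, nuanced_score, veto_threshold=-0.5):
    events = []
    if coarse_score > 0:
        events.append("intent formed")            # readiness builds subconsciously
        events.append("consciousness notified")   # awareness lags the intent
        if nuanced_score < veto_threshold:
            events.append("action vetoed")        # background modelling found conflict
        else:
            events.append("action executed")
    return events
```

The retrospective narrative "I was about to do X, when I suddenly realised it would cause Y" corresponds to `decide(0.8, -0.9)`: the veto fires after consciousness has been notified of the impending action.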

What purpose do you think consciousness serves if it is not making decisions (what I think of as the function of an executive control system)? What do you think the purpose of the causal narrative of self is?

This is the question. Clearly, without some kind of feedback from this skeletal conception of consciousness to the subconscious levels, it will have no influence on behaviour, so will have no selective advantage. There being few unnecessary expenses in nature, it seems reasonable to suppose that there is some significant feedback/additional functionality. It does seem unlikely to be just a side-effect or by-product of behavioural modelling and prediction.

So we should consider what significant advantages conscious awareness and sense of self provides. Is it 'just' a sophisticated filter for volitional action? The evolution of consciousness seems to be associated with the development of sophisticated socio-cultural interactions, so is there a particular advantage in a more explicit sense of self than just 'one of the group'?

Frankly, I haven't yet speculated much on the particular advantages conscious awareness and a sense of self may provide, but I do think the minimal functional approach outlined above, working backwards from a skeletal conception of conscious awareness, and adding evidenced functionality, is more interesting, fits current evidence better, and potentially has more promise of specific answers than most of the other vague ideas and proposals I've heard.

Well, we can always compare the behavior of conscious individuals with unconscious individuals. I think that gives us some indication of the influence of consciousness.
I think Pixy covered this.
 
Psi doesn't exist. Not a problem, merely an observation.

What need is there of observation if what you already "know" encompasses what is unknown? I suppose scientists can retire now and simply ask the omniscient PixyBot whether or not their hypotheses are true ;)
 
Is it time yet to replace the word consciousness with the phrase 'evolving playground of competing behavioral impulses monitored against contextual appropriateness'?

I would question whether that necessarily requires consciousness. Though perhaps it can play a role in determining the contextual appropriateness...
 
From what I've gathered, the p-zombie thought experiment is supposed to demonstrate that one cannot discern subjective experiences from outward physical behavior. But, like I've already mentioned, I do not think such entities are possible in practice and I don't find the thought experiment particularly useful. Whenever p-zombies crop up in these discussions it's always someone other than myself who brings them up.
No one considers them possible in practice, but do you consider them possible in theory? The behaviorally-identical type missing qualia, that is.

Didn't say they lack consciousness. I said that their consciousness is "dim". The range, intensity, quality and depth of their experiences (emotional, and in some cases sensory) tend to be much lower than that of other human beings. In some instances their level of emotional arousal is so low that they are constantly fighting boredom; and the way the majority of them alleviate this boredom is at the expense of others. In other words, they tend to be psychological parasites and predators.
If the level of emotional arousal decides the "brightness" of consciousness, wouldn't you have to rate a rhino as brighter than you?

Reminds me of how, when a character-disordered 'friend' of mine attempts to explain things outside of its emotional depth, all it succeeds in doing is spewing nonsensical gobbledygook. For the most part, it can act well enough to fool trained professionals, provided the level of personal interaction is relatively superficial and brief. It has developed a number of tricks over the years to simulate human emotion and is very gifted at picking up languages -- which is an added asset. But, by and large, it has little to no depth of understanding concerning things outside of its narrow subjective range.
Are you talking about a person with a personality disorder? I don't know why such a disorder would rate someone as having a lower level of consciousness. I would assume their "fooling" would amount to hiding their abnormal behaviors. How about we refer to this measure of normality as "AMM-consciousness" to reduce confusion.

So, like I was saying, if an entity in question has a very limited subjective existence they cannot -generate- coherent and meaningful statements about things beyond it because they don't experience them. They can only attempt to emulate based off of external behaviors and cues. If they have no subjective life at ... Well, I think you get the picture.
Nope, sorry, I don't get that picture.

What is a "limited subjective existence"? Can they still report what they have experienced at any past period, as well as you could? Are they not the subject, the recipient, of that experience?

The differences you describe would likely be due to differences in the behavioral rules that use that experience.

I'm saying that the subjectivity of consciousness is a crucial component in human and animal problem solving abilities. Heck -- to even speak of there being a 'problem' outside the context of -subjective- valuation is absurd.
I don't see a difficulty in subjectivity itself nor in solving certain types of problems. Just the generalization of the process.

The point is that our machines are merely extensions of our own biology; they have no life of their own yet. My proposal is that when we figure out how to create synthetic life we'd have made a necessary [possibly sufficient] step in the direction of creating synthetic consciousness.

The organisms that manage to survive will be the ones that adapt to find alternate energy sources. For as long as an organism is alive it will seek out and utilize available energy to maintain itself. Inanimate systems do not do this, regardless of the amount of available energy in the surrounding environment.

As I've already mentioned, our machines cannot replicate the self-maintenance capabilities of organisms. They rely entirely on human consciousness to create and maintain them. It is this innate capacity that is the necessary -- possibly sufficient -- objectively discernible requisite of a conscious system.
But machines *can* replicate-- they use us as a host, like a virus does. Think about it: their designs, like our DNA, are passed from computer to computer. Computers compile software and IC designs in creative ways (humans no longer bother to check the details of their creations). And their method of maintenance is often to just make large numbers, as viruses do.

Scary, isn't it?
 
I'm pretty much in the same boat with you on this one. I wouldn't consider viruses to be alive for reasons similar to why I do not consider our machines to be alive. They are not autonomous, and they require the activity of self-sustaining organisms to persist.

Viruses behave more like complex signals than actual organisms. The interesting thing is that many of them are translated by cells to mean "keep repeating this even if it kills you..."

Idealists likely assume everything is in some sense 'alive' including quarks/bosons or strings or 'space-time' or whatever actually "is".
 
The basis of my argument necessarily implies that p-zombies are impossible in practice. The closest thing to a p-zombie that can be created IRL would be a lifelike puppet under the control of some conscious agent(s), or an artificial construct sufficient to fool a naive individual.
Obviously...that is why it is a thought experiment.

Key phrase being "we have designed"...
Why is that the key phrase - or relevant at all? Does it matter how it came about? You suggested that if/when we create synthetic life, we then have a chance of creating an artificial consciousness - but we would have designed that too, so what are you saying?

...And without those -artificially- constructed means that individual will not survive. The exceptions you're trying to provide simply prove the original point ;)
What has that to do with anything? You said that autonomous self-sustenance was a requirement, and I provided an example. Why does it make any difference whether it evolved or was designed?

You appear to be making the argument that only something living can be conscious because consciousness requires autonomous self-sustaining capabilities. However, this is pure assertion - you haven't explained or made a case why consciousness requires this, nor why a man-made autonomous self-sustaining device doesn't count.

How long would those individuals survive in that condition without help from individuals who are aware and cognizant of their situation?
They would die of dehydration in a matter of days - so?

And would you not agree that the body of that individual must be -alive- and metabolizing to even be able to support their consciousness should they ever come out their coma?
Certainly, consciousness obviously requires a functioning support system, nobody's arguing otherwise. Processing of any sort is work and work requires energy - which in living organisms is supplied via metabolism. In other systems, it can be supplied in other ways... so what?
 
From what I've gathered, the p-zombie thought experiment is supposed to demonstrate that one cannot discern subjective experiences from outward physical behavior. But, like I've already mentioned, I do not think such entities are possible in practice and I don't find the thought experiment particularly useful. Whenever p-zombies crop up in these discussions it's always someone other than myself who brings them up.
No one considers them possible in practice, but do you consider them possible in theory? The behaviorally-identical type missing qualia, that is.

What exists 'in theory' depends on the theory. In the one I'm going with the answer would be no.

Didn't say they lack consciousness. I said that their consciousness is "dim". The range, intensity, quality and depth of their experiences (emotional, and in some cases sensory) tend to be much lower than that of other human beings. In some instances their level of emotional arousal is so low that they are constantly fighting boredom; and the way the majority of them alleviate this boredom is at the expense of others. In other words, they tend to be psychological parasites and predators.
If the level of emotional arousal decides the "brightness" of consciousness, wouldn't you have to rate a rhino as brighter than you?

Hehe. Quite possibly :p

But in all seriousness, I'm not counting intellectual capacity in evaluating how "bright" or "dim" a conscious entity is. There are small children that have a 'brighter' consciousness than many PhDs.


Reminds me of how, when a character-disordered 'friend' of mine attempts to explain things outside of its emotional depth, all it succeeds in doing is spewing nonsensical gobbledygook. For the most part, it can act well enough to fool trained professionals, provided the level of personal interaction is relatively superficial and brief. It has developed a number of tricks over the years to simulate human emotion and is very gifted at picking up languages -- which is an added asset. But, by and large, it has little to no depth of understanding concerning things outside of its narrow subjective range.
Are you talking about a person with a personality disorder? I don't know why such a disorder would rate someone as having a lower level of consciousness. I would assume their "fooling" would amount to hiding their abnormal behaviors. How about we refer to this measure of normality as "AMM-consciousness" to reduce confusion.

If you want to, I suppose. Just seems to me that if an individual has such shallow affect that the only things that can motivate them are fear, greed, and the desperate need to alleviate an all-consuming stagnating boredom -- with absolutely no regard for the wellbeing of themselves or others -- their consciousness is of a very low quality.


So, like I was saying, if an entity in question has a very limited subjective existence they cannot -generate- coherent and meaningful statements about things beyond it because they don't experience them. They can only attempt to emulate based off of external behaviors and cues. If they have no subjective life at ... Well, I think you get the picture.
Nope, sorry, I don't get that picture.

What is a "limited subjective existence"? Can they still report what they have experienced at any past period, as well as you could? Are they not the subject, the recipient, of that experience?

Oh, they experience alright. However, the scope of their experience is much narrower, its depth much shallower, and the quality much poorer than the average person's. When their 'masks' are down, the eyes of these creatures are so vacant it's disturbing -- they resemble zombies.

In any case, the ability of an individual to relate and express their subjective states is limited by what they are able to experience. If the entity in question has no subjective experience at all, they cannot -generate- any coherent statements in reference to them because they simply aren't there.

The differences you describe would likely be due to differences in the behavioral rules that use that experience.

No, they literally don't experience the world in the same way you or I do. 'Behavioral rules' ain't got nuthin' to do with it.

I'm saying that the subjectivity of consciousness is a crucial component in human and animal problem solving abilities. Heck -- to even speak of there being a 'problem' outside the context of -subjective- valuation is absurd.
I don't see a difficulty in subjectivity itself nor in solving certain types of problems. Just the generalization of the process.

The problem of subjectivity is that we do not understand it well enough to instantiate it artificially, or to understand what it is in relation to the physical systems it is correlated with.

The point is that our machines are merely extensions of our own biology; they have no life of their own yet. My proposal is that when we figure out how to create synthetic life we'd have made a necessary [possibly sufficient] step in the direction of creating synthetic consciousness.

The organisms that manage to survive will be the ones that adapt to find alternate energy sources. For as long as an organism is alive it will seek out and utilize available energy to maintain itself. Inanimate systems do not do this, regardless of the amount of available energy in the surrounding environment.

As I've already mentioned, our machines cannot replicate the self-maintenance capabilities of organisms. They rely entirely on human consciousness to create and maintain them. It is this innate capacity that is the necessary -- possibly sufficient -- objectively discernible requisite of a conscious system.
But machines *can* replicate-- they use us as a host, like a virus does. Think about it: their designs, like our DNA, are passed from computer to computer. Computers compile software and IC designs in creative ways (humans no longer bother to check the details of their creations). And their method of maintenance is often to just make large numbers, as viruses do.

Scary, isn't it?

Not really. Our machines play a completely passive role throughout the entire process of replication. As of now, they do not replicate; they are replicated.
 
Idealists likely assume everything is in some sense 'alive' including quarks/bosons or strings or 'space-time' or whatever actually "is".

Would they say that all things are as equally 'alive' as everything else? :-o
 
Originally Posted by dlorde:
Are you talking about creativity?
Yep.
How would you feel about the several varieties of computational creativity that have been explored? Can you conceive of a computational system that can combine such approaches to produce a level of creativity comparable with an invertebrate? Or a small mammal, e.g. a mouse? Or a larger mammal? Where would you draw the line?

Viruses? Probably not. Bacteria? Maybe so.
What kind of pro-active behaviour do bacteria show that is not 'programmed into' them? Do you feel we could not code a system that would show all the various behaviours of a bacterium (including 'pro-active') - without explicitly programming those behaviours?
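To make the question concrete: run-and-tumble chemotaxis, the textbook 'pro-active' bacterial behaviour, emerges from a single rule -- tumble (pick a random new direction) when conditions worsen, keep running when they improve. A toy one-dimensional sketch with invented parameters:

```python
import random

# Run-and-tumble chemotaxis in one dimension. Nutrient concentration
# increases with x. The cell only compares 'now' with 'a moment ago':
# if conditions worsened, it tumbles (random new direction); if they
# improved, it keeps running. Gradient-climbing is never coded explicitly.
def chemotax(steps=300, seed=1):
    rng = random.Random(seed)
    x = 0.0
    direction = rng.choice([-1, 1])
    for _ in range(steps):
        new_x = x + direction
        if new_x < x:                      # nutrient fell: tumble
            direction = rng.choice([-1, 1])
        x = new_x
    return x

final = chemotax()
```

The cell ends up far up the gradient even though no line of code says 'seek nutrients'; the goal-seeking appearance emerges from the local rule. (Real E. coli also tumbles occasionally while improving; this toy omits that.)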
 