• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Are You Conscious?

Are you conscious?

  • Of course, what a stupid question

    Votes: 89 61.8%
  • Maybe

    Votes: 40 27.8%
  • No

    Votes: 15 10.4%

  • Total voters
    144
I don't see a distinction between the head+tape of an abstract Turing machine and the registers+memory of a real computer.

If data is on the tape, it can be part of the algorithm being implemented by the machine. If not, not. If data is in memory, it can be part of the algorithm being implemented by the CPU. If not, not.

What you are doing here is claiming that the abstract Turing machine is somehow equivalent to the entire computer, rather than just the CPU, while at the same time asserting that the keypresses aren't part of the algorithm run on the Turing machine.

That is just nonsense -- if the TM is equivalent to the entire computer, then the keypresses are steps in the algorithm rather than data and this is an order dependence problem. If the TM is only equivalent to the CPU, then your claim about missing/erroneous data is also wrong because the CPU doesn't have any concept of such a thing any more than a TM does.

So which is it? In which way do you want to be wrong this time?

Bumping this because westprog hasn't responded.
 
I think, by definition, if you built an actual physical Turing Machine where the specs meet the definition you would have a real computer.

Well, that isn't the issue.

In case you weren't following, the issue is "time dependency" of real systems.

Westprog claims that "time dependency" is something that an abstract algorithm, running on an abstract Turing machine, cannot account for. He says this because such abstract systems aren't constrained in any way by time.

But I am trying to tell him that time, in reality, is merely "the rate at which things happen" and mathematically this means his "time dependence" reduces to order dependence -- which, not surprisingly, is a feature of all algorithms.

Enter the post I responded to above -- he claims that there is no possible algorithm that can simulate a lost keypress due to a hardware timing error, because the keypress is just "lost" and such a thing isn't accounted for in the Turing world. So I am pointing out that if the keypress is considered part of the algorithm, then the step order changes when the keypress is lost -- something we can do in the Turing world -- and if the keypress isn't considered part of the algorithm (instead, is data) then it isn't any different from the point of view of the CPU than it would be from that of a Turing machine -- implicitly, something we can also do in the Turing world.

Either way, you can do such a thing in the Turing world.
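The point can be sketched with a toy example of my own (not anyone's actual machine): to an abstract machine, a lost keypress is just a shorter input tape, and all that matters is the order of the symbols, not the wall-clock timing between them.

```python
# A minimal sketch (a hypothetical toy, not any real keyboard driver) of the
# point above: a "lost keypress" is just a different input tape, and "time
# dependence" reduces to the order of events on that tape.

def run(tape):
    """Process an input tape of keypress events in order; timing is absent."""
    output = []
    for key in tape:
        output.append(key.upper())  # stand-in for whatever the algorithm does
    return "".join(output)

normal = list("hello")    # all keypresses arrive
glitched = list("helo")   # hardware drops the second 'l'

print(run(normal))    # HELLO
print(run(glitched))  # HELO -- same algorithm, different tape, no clock needed
```

Nothing in `run` ever consults a clock, yet it handles the glitched input perfectly well, which is the sense in which the Turing world already covers this case.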
 
Last edited:
The reason why I very strongly insist that subjective "private" experiences are a matter of physics is because our experiences are highly specific physiological responses to specific kinds of physical stimuli.

OK, then, point to whatever specific physical thing encodes an experience and explain why only that physical thing can encode that experience.

If my central point is that we are lacking the knowledge of the "specific physical thing" that is a sufficient indicator of consciousness why in blue barfing blazes would I then turn around and claim knowledge of what it is?

Information in the abstract does not trigger the sensation of pain or the perception of red; specific chemical signals in the body are required to trigger these experiences.

The specific chemical signals in our body are an artifact of our evolutionary history. As the Wasp rightly pointed out, there is no reason to believe that the specific chemical signals (I will assume you are talking about neurotransmitters here) we use are required for the job -- in theory we could swap them out for an entirely different set of neurotransmitters and receptors and get functionally identical results.

The substitutes must be able to at least chemically mimic the "natural" signal molecules in order to produce similar effects.

As a thought experiment, would you agree that we could (in theory) replace the neurons in the brain one at a time with nanomachines that are functionally identical at a neuron-by-neuron basis? Why or why not?
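The thought experiment can be rendered as a toy sketch (the names and thresholds here are entirely hypothetical, chosen by me for illustration): if a replacement unit reproduces a neuron's input/output function exactly, the rest of the network has no way to tell which substrate it is talking to.

```python
# A toy rendering of the replacement thought experiment. Both units below are
# stand-ins of my own invention; the only claim illustrated is functional
# equivalence at the unit's input/output boundary.

def bio_neuron(inputs):
    """Stand-in for the biological unit: fire iff at least two inputs are active."""
    return 1 if sum(inputs) >= 2 else 0

def nano_neuron(inputs):
    """Different 'substrate' (different code), same input/output function."""
    return int(sum(inputs) >= 2)

for pattern in [(0, 0, 0), (1, 1, 0), (1, 1, 1)]:
    assert bio_neuron(pattern) == nano_neuron(pattern)
print("functionally identical on all tested inputs")
```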

Whatever devices are used, they must possess the relevant physical properties that allow our biological neurons to produce consciousness. Being as how we do not know the physical "whats" and "hows" of consciousness, we have no way of knowing what hardware systems would be sufficient beyond our own. It's common flipping sense, dude.

If a person is exposed to psychoactive substances their change in mental states is due to the nervous system's physical reaction to the reagent and not a magical emergent property of algorithmic code execution.

I am personally subjectively familiar with the process.

We already know that neural networks do not process information the same way that our usual Von Neumann/Harvard architecture computers do -- no surprise there. That does not stop us (in theory) from emulating neural networks using algorithmic processes to whatever degree of detail needed. In practice of course, there are huge engineering challenges, but they are just that -- engineering challenges.

It's entirely unjustified to assume that merely emulating the computational functions of our neurons is sufficient to produce consciousness -- especially when we have not yet discovered what consciousness is or how it is produced in the first place. Even assuming that we actually do learn what physically constitutes consciousness, simulating it would not reproduce it any more than a computer simulation of gravity produces actual gravitational effects. You CANNOT engineer a feature into a system without having a rudimentary understanding of it or, at the very least, the ability to physically identify it. For the life of me, I don't get why you are so resistant to facing this fact.

We are also pretty sure that psychoactive substances work because they (or their metabolites) either impersonate neurotransmitters, or they mess with the release/reuptake systems in synapses for particular neurotransmitters, thereby messing with the usual synaptic weighting (and, therefore, firing rate of the target neuron) for the sites the drugs affect. If that network happens to participate in a process involved with consciousness, then consciousness will be affected or altered, but as a result of the drug messing with the synaptic weights or firing rates of the neurons involved, not because the drug inherently possesses some magical consciousness altering property.
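That mechanism can be caricatured in a few lines (a toy threshold unit of my own, not real pharmacology): the "drug" below does nothing except scale one synaptic weight, yet the downstream firing changes.

```python
# Hedged illustration: the drug has no special property beyond altering a
# synaptic weight, yet that alone changes the downstream neuron's behavior.

def fires(inputs, weights, threshold=1.0):
    """Simple threshold unit: fire iff the weighted input sum crosses threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

inputs = [1.0, 1.0]
normal_weights = [0.6, 0.6]          # baseline synaptic weights
drugged_weights = [0.6 * 0.5, 0.6]   # reuptake-style modulation halves one weight

print(fires(inputs, normal_weights))   # True
print(fires(inputs, drugged_weights))  # False -- same network, different weighting
```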

Right. So basically you're saying that the chemical properties of neurotransmitters, and the physical conditions of biological brains have absolutely no relevance to the production of sensations, emotions, or other subjective states. You can't even address the basics of what qualia are, or how they are produced, yet you insist that creating them is a simple matter of "engineering". Just who do you think you're kidding?

The different chemical compositions of the substances that bind to cellular receptors determine the type of subjective response, if any, that a given conscious person will experience.

First, the specific substances (I will assume you are talking about neurotransmitters and substances that impersonate them here) and their receptors are artifacts of our evolutionary history -- there is no reason to assume that the specific ones we use are the only ones possible.

I never said they are the only possible ones.

Second, I see no reason to expect that we will find consciousness at the level of synaptic activity and neural excitation levels, especially since pretty much the same neurochemistry is used by everything that has a nervous system, whether or not we recognize them as conscious.

The point is to find out WHAT consciousness is and HOW that "synaptic activity" produces it in the first place. Despite all your handwaving, you clearly do not know the answers to any of these questions yet you act as if my pointing out this ignorance is itself a radical unjustified claim. Get real, man.

Somehow, properties of the signal molecules are conveyed by the EM signals along the cell membrane which, in turn, appear to have a direct effect on the subject's conscious experiences.

No, the neurotransmitters do not have any special properties beyond the fact that they bind to neurotransmitter receptors. Even then, we have no reason to directly equate neural polarization levels and excitation thresholds to consciousness and subjective experience, and even if we did there is no reason that we could not in theory simulate those features in an artificial neural network.

Oh my god! Could you -please- spare me the constant appeals to "but we can simulate it!". First of all, we do not know what consciousness is to begin with, so claims of knowledge of how to simulate it are complete ****ing rubbish. Second of all, even if we did have the knowledge required to design such a simulation, simulation itself is, in principle, NOT a reproduction.

There is no way a computer scientist can properly address the questions I raised in terms of I/O switching functions because they are inherently biophysics questions.

I have good reasons to doubt that, as outlined above.

Your entire position basically boils down to: "Brains compute, therefore consciousness is a priori a computation. All that is needed to produce consciousness is to emulate the computations of the brain and call it a day."


Our consciousness and conscious experiences are undeniably a result of the physical conditions and biological mechanisms of our brains

Indeed.

Yet, in the same breath, you'll handwave away the significance of those conditions or even the need to understand how they relate to consciousness.

At some point we're going to have to deal with the actual physics of what the brain is doing instead of arrogantly -- lazily -- chalking it up to "computation/information processing/SRIP/etc." because the prospect of unknown science makes our brains hurt.

Well, we have no indications that there is an "unknown science" at play, at least not at the level of physics or chemistry. I understand you believe differently, but it is just a belief right now.

Stop lying to yourself. It's a flat fact that we do not know what consciousness is or understand how the chemistry/physics of the brain produces subjective experience. Your claim that computation is a sufficient explanation of consciousness is not only a -belief-, it's a completely unjustified one at that.

Do you -honestly- believe that any substrate of any composition will have subjective experience merely because it's implementing a particular switching pattern?

As long as it meets the requirements I outlined, then yes.

How can you maintain this -belief- when you can't even answer the most rudimentary questions about subjective experience? Earlier you claimed I set the bar too high. The problem is that your preferred conception of consciousness is completely and thoroughly inadequate as an explanation, and you know it. Cut the bull.

But you could determine this if you knew it were implementing a particular line of code?

A particular line of code in isolation? No, not any more than you could determine that a system is conscious by looking at a single neuron.

So, by your criteria, how would one go about discerning if a nematode has subjective experiences and if so, what the range of its experiences are, what it will experience given a particular stimulus, and what it's experiencing during a given period of time? If you cannot answer these questions you do not have a sufficient theory of consciousness, and all your handwaving bluster about computational criteria amounts to nothing more than a pile of empty platitudes.

If consciousness itself could be reduced to something apart from all of those cognitive capacities what makes you think that it's simply a matter of computational coding?

What makes you think it is not? Nevermind, you think it is some special property of our neurochemistry.

Yes, it's such an earth-shatteringly radical concept that the biophysics of the brain is relevant to consciousness. What will I think of next? :rolleyes:

So all substances are conscious and it's simply a matter of waking them up with the correct algorithmic procedure?

No, I think that systems that meet the criteria I outlined are conscious, no matter what they are built out of.

Yet you can't tell me the first thing about what those allegedly conscious systems are experiencing or how your "criteria" even relate to those experiences. Get real.

We just established that linguistic and/or motile behavior may not be possible for some conscious entities. Absent such a behavior test, how else could one discern whether or not they're conscious?

In general, we cannot. We could probably build a test for consciousness that is not behavioral if we know the details of how consciousness is implemented in entities of whatever type, but such tests would not be general.

In other words, when you were claiming that you knew the sufficient criteria for discerning consciousness you were just talking outta your behind.

As far as the subject is concerned, their sensory experience is very much like an output; some think of it as something akin to a fully immersive theatrical experience. Sensory stimuli that do not make it onto this stage of conscious experience are what we call subliminal. In any case, if we identify consciousness we'd have identified the experiencer.

The Cartesian Theater called, they want their homunculus back.

Have a match with that straw?

The problem is that you're putting the cart way before the horse.

You are insisting that our designs for a cart replacement are ridiculous and can never work because we swapped the horse for an internal combustion engine.

You've gotta be kidding me. You have a theory of consciousness that explains nothing, criteria for discerning consciousness that can't even tell us if a nematode has subjective experiences and you seriously consider your "model" to be the epistemic equivalent of an internal combustion engine? Are you completely daft?

We have not yet identified what physically constitutes our consciousness and computational descriptions are not a sufficient substitute.

Yes, and?

Translation: "Sure, we have no idea what consciousness is. Big deal. Whats your point?"


At this point, computation has just become a "god of the gaps" explanation. Computationalism isn't science, it's a placating ideology serving to distract AI researchers from the fact that they really have no idea what consciousness is.

That is your opinion, certainly.

If you're any reflection of the average computationalist, it's a stark fact.
 
Last edited:
Well, that isn't the issue.

In case you weren't following, the issue is "time dependency" of real systems.

Westprog claims that "time dependency" is something that an abstract algorithm, running on an abstract Turing machine, cannot account for. He says this because such abstract systems aren't constrained in any way by time.

But I am trying to tell him that time, in reality, is merely "the rate at which things happen" and mathematically this means his "time dependence" reduces to order dependence -- which, not surprisingly, is a feature of all algorithms.

Enter the post I responded to above -- he claims that there is no possible algorithm that can simulate a lost keypress due to a hardware timing error, because the keypress is just "lost" and such a thing isn't accounted for in the Turing world. So I am pointing out that if the keypress is considered part of the algorithm, then the step order changes when the keypress is lost -- something we can do in the Turing world -- and if the keypress isn't considered part of the algorithm (instead, is data) then it isn't any different from the point of view of the CPU than it would be from that of a Turing machine -- implicitly, something we can also do in the Turing world.

Either way, you can do such a thing in the Turing world.

And it's that word "simulate" which is the problem. I quite clearly said that a Turing machine could probably simulate time-dependent operations. What it can't do is emulate time-dependent operations. You cannot replace a time-dependent machine with a Turing machine. It's precisely in its interaction with the outside world where the Turing machine fails - because the Turing machine doesn't operate in that context.
 
The reason why I very strongly insist that subjective "private" experiences are a matter of physics is because our experiences are highly specific physiological responses to specific kinds of physical stimuli. Information in the abstract does not trigger the sensation of pain or the perception of red
Category error.
 
Just a personal note….I’ve been following this ‘consciousness’ thread for a while. I’ve learned a lot, so thanks for the efforts (hope you guys get something out of it too). It’s like a crash course in computational theory, the philosophy of science, the philosophy of philosophy, neuro-chemistry and biology, semantics, the sociology and psychology of internet interaction…and I could go on.

Personally I’d say that Aku is substantially more conversant with the issues than Nescafe (not that I could claim to be a reliable arbiter of the issue). From what I can see, Nescafe doesn’t seem to think that it is necessary to actually be conversant with the issues in order to instantiate consciousness (maybe ‘something’ can be created computationally [as you say Nescafe, we can give it a try and see if it says ‘hi’]…what, exactly, it would be is another matter entirely). Aku responds, quite reasonably, by asking how the hell you can instantiate anything if you haven’t a clue what it is you are instantiating (and goes to quite some lengths to demonstrate the flaws in Nescafe’s positions)? Good question, so I’ll ask it.

It may seem somewhat….simplistic….in comparison to so much of the debate raging here so I’ll describe the question first.

…as Aku says, there’s a lot we don’t know about consciousness. I’ll just simplify that to we don’t know what it is. Even I could become ever more explicit but eventually I would have to conclude that I don’t know what it is that I’m trying to be explicit about….which is the point, why not?

I think I mentioned this before but this question of ‘consciousness’ is singularly unique in all of science. Partly because of the quantitative and qualitative dimensions of the subject (consciousness….even Dawkins admitted that consciousness is a special case), but also because we’re not studying something ‘outside’ of us, we are studying us. Our own experience of our ability to examine our own experience (for one thing)….so to speak.

….so, to what degree does our ‘success’ in illuminating this issue implicate a dysfunctional condition? In other words….do we not ‘understand’ it because we don’t know what we’re talking about, or do we not ‘understand’ it because we don’t know how to know what we’re talking about?

I’m not talking about how to instantiate consciousness, though that issue is related. I’m talking about our ability to recognize what we are. What is consciousness? It is our ability to ask the question, is it not…..and it is also our ability to answer the question.

An example is always useful. A seven year old wrote the following statement: “destruction is finding being in matter”. Now, regardless of your interpretation of or agreement with the meaning of that phrase, it obviously describes an extremely sophisticated perspective, and implies a substantial insight.

To what degree is ‘insight’ relevant in answering the question ‘what is consciousness’…and if it is relevant…how do you introduce ‘insight’ into what is essentially an empirical equation?

Just one final question. If you had to quantify the dimensions of what a human being is (metaphorically I suppose…a billion computations a moment, a brazilian cells all talking to each other at once, a biological entity of unrivaled realities, the most amazing machine ever created…by whatever it is that could create something that wonders if there are things that can create amazing machines, a metaphorical mystery of mountainous meaning, neurologically mind-numbing….that kind of thing)….how might you do it?

….and a related question….how might you characterize ‘our’ understanding of our typical daily lives (as in….I brushed my teeth, I had sex, I argued with some dude who names himself after a jar of coffee about what it is that is actually arguing with him…typically prosaic but occasionally life brings you to your knees kind of thing).

I apologize if my semantics is (are) a bit unclear. Answer if you feel so inclined.
 
And it's that word "simulate" which is the problem. I quite clearly said that a Turing machine could probably simulate time-dependent operations. What it can't do is emulate time-dependent operations. You cannot replace a time-dependent machine with a Turing machine. It's precisely in its interaction with the outside world where the Turing machine fails - because the Turing machine doesn't operate in that context.


But what possible difference would that make? Turing machines are an abstraction. The reason they work in a time independent manner is because they are an abstraction. They deal with computation as an abstraction. The importance of dealing with this as an abstraction is to isolate things to determine if they are computable in an abstract sense. If they are, then it should be theoretically possible to build something that does the same thing in the real world.

Now the real world model cannot, by its very nature, work in abstract terms. So there is a problem to be solved -- how to deal with the issue of time dependence, which we seem to be able to deal with in abstract terms by simply converting the issue into one of algorithmic order, in a living brain or in a computer. We know why it is an issue in brains. There should be no problem recreating that in a computer.

Or are there people who are arguing that Turing machines themselves are conscious; alternatively is that what you think they are arguing? I don't think anyone thinks that a Turing machine is conscious. There would be no possible way to know, because they don't exist. The issue with thinking about Turing machines is to see that the problem can be dealt with abstractly, not that the abstraction has the same properties as the real world model of computation; and if it can be dealt with abstractly, then we should be able to create the same thing in a different medium, as long as we take all the computational issues into account (and time dependence is one of them).

Recall that time dependence is an issue simply because of the way that neurons work. We could theoretically do the same thing with silicon chips -- passing electricity through networks of nodes where time dependence is a critical issue. With brains we're discussing a very specific type of computation.
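The "convert time into order" move described in this post is exactly what discrete-event simulators do. Here is a minimal sketch (a generic example of my own, not any particular simulator): simulated time is just a number carried on each event, and the algorithm only ever consumes events in order.

```python
# A sketch of time-as-data: the "clock" is just a field on each event, and a
# time-dependent process becomes an order-dependent algorithm.
import heapq

def simulate(events):
    """events: (time, label) pairs. Process strictly in simulated-time order."""
    queue = list(events)
    heapq.heapify(queue)  # min-heap keyed on the time field
    log = []
    while queue:
        t, label = heapq.heappop(queue)
        log.append((t, label))  # the simulated clock is just a number here
    return log

# Arrival order in the list doesn't matter; simulated time order does.
print(simulate([(2.0, "release"), (0.5, "press"), (1.0, "debounce")]))
# [(0.5, 'press'), (1.0, 'debounce'), (2.0, 'release')]
```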
 
And it's that word "simulate" which is the problem. I quite clearly said that a Turing machine could probably simulate time-dependent operations. What it can't do is emulate time-dependent operations. You cannot replace a time-dependent machine with a Turing machine. It's precisely in its interaction with the outside world where the Turing machine fails - because the Turing machine doesn't operate in that context.

So you are claiming that in any real world operation, there is "something" more fundamental than the interactions between particles, and that "something" can't be accounted for on a Turing machine.

Because otherwise, if you break a real world operation down into the smallest steps possible, you end up with an ordered sequence of events -- the period between each event (what you call time) being irrelevant when it comes to the behavior of the operation as long as the order is preserved.

Of course, two things strike anyone reading this right off the bat.

First, there is no evidence for such a fundamental "something" beyond particle interactions.

Second, the idea that there is such a "something" has a name -- dualism. Not that anyone has ever accused you of being a dualist before ...
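The "ordered sequence of events" claim above can be given a hedged sketch (a toy of my own): if an operation consumes events only in order, stretching or shrinking the gaps between them leaves the result untouched.

```python
# Toy demonstration that the period between events is irrelevant to an
# order-dependent operation: only the sequence of events determines the output.
import time

def run_with_delays(events, delays):
    """Deliver each event after a real wall-clock delay, process in order."""
    out = []
    for event, delay in zip(events, delays):
        time.sleep(delay)         # vary the period between events
        out.append(event.upper()) # the operation itself never consults a clock
    return out

fast = run_with_delays(["a", "b", "c"], [0.0, 0.0, 0.0])
slow = run_with_delays(["a", "b", "c"], [0.02, 0.0, 0.01])
print(fast == slow)  # True -- same order, same result, timing irrelevant
```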
 
Last edited:
Now the real world model cannot, by its very nature, work in abstract terms. So there is a problem to be solved -- how to deal with the issue of time dependence, which we seem to be able to deal with in abstract terms by simply converting the issue into one of algorithmic order, in a living brain or in a computer. We know why it is an issue in brains. There should be no problem recreating that in a computer.

Westprog's claim is that a change in algorithmic order isn't necessarily the reason the behavior of a system might change if the timings change.
 
We are trying to look at ‘thinking/subjectivity’ and recognize what it is….well…thinking. Have the ‘thought’ describe itself….and not just so-to-speak. Quite specifically. It is the ‘thought’ that we are asking ‘what are you’ and it is the exact same thought that is asking itself ‘what are you’ (….I guess we can see where the QED is implicated…even just metaphorically). It’s not just that we don’t have a scientific vocabulary for this variety of reality, this variety of reality has to describe its own vocabulary or it won’t exist. It has a different variety of reality and the reason we do not know it is because we do not know how to be described by it….not because we do not know how to describe it. Dualism if you want. Religion if you want. Something if you want. Whatever…..a very very substantial and fundamental focal point exists….and it seems to exist at the convergence of the scientific POV and the scientist.

.....as to why I post metaphorical nonsense...Mother Theresa of course.
 
But what possible difference would that make? Turing machines are an abstraction. The reason they work in a time independent manner is because they are an abstraction. They deal with computation as an abstraction.

No, that's not the case. When engineers design real-time control systems, they do so to an abstract model. That's how all engineers and scientists work. It's just that their model includes a time concept, and the Turing machine model does not.

The importance of dealing with this as an abstraction is to isolate things to determine if they are computable in an abstract sense. If they are, then it should be theoretically possible to build something that does the same thing in the real world.

Now the real world model cannot, by its very nature, work in abstract terms. So there is a problem to be solved -- how to deal with the issue of time dependence, which we seem to be able to deal with in abstract terms by simply converting the issue into one of algorithmic order, in a living brain or in a computer. We know why it is an issue in brains. There should be no problem recreating that in a computer.

Or we can apply an abstract concept of time, and attempt to model what the brain is actually doing, rather than trying to shoehorn its functionality into the wrong model. The only reason for trying to model what the brain does in terms of a Turing machine is because Turing machines are how we think about computability. The concept of Turing machines wasn't obtained by looking at what brains do, because brains work in a very different way.

Or are there people who are arguing that Turing machines themselves are conscious; alternatively is that what you think they are arguing? I don't think anyone thinks that a Turing machine is conscious. There would be no possible way to know, because they don't exist. The issue with thinking about Turing machines is to see that the problem can be dealt with abstractly, not that the abstraction has the same properties as the real world model of computation; and if it can be dealt with abstractly, then we should be able to create the same thing in a different medium, as long as we take all the computational issues into account (and time dependence is one of them).

If time dependence is an issue, then we need a model that takes it into account. There's no particular problem with this. Many physical equations include a little "t" in them. That's modelling time. Why is this not possible?
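The "little t" point can be illustrated with ordinary numerical integration (a generic sketch of my own, not a brain model): time appears as explicit data inside an otherwise ordinary algorithm.

```python
# A model with an explicit "t": Euler integration of dx/dt = f(x, t).
# The time variable is just data the algorithm carries along.

def integrate(f, x0, t0, t1, dt):
    """Euler-step dx/dt = f(x, t) from t0 to t1; 't' is explicit model data."""
    x, t = x0, t0
    while t < t1:
        x += f(x, t) * dt
        t += dt
    return x

# dx/dt = -x is exponential decay; x(1) should land near exp(-1) ~= 0.3679
print(integrate(lambda x, t: -x, 1.0, 0.0, 1.0, 0.001))
```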

Recall that time dependence is an issue simply because of the way that neurons work.

No, it's because of the fundamental function of the brain. The brain and nervous system controls the body in real time. It has to respond in a given time or else we would stop functioning and die. Considering the brain as something performing an algorithm which will eventually produce a correct result is entirely inapplicable. It has quite a different function.

We could theoretically do the same thing with silicon chips -- passing electricity through networks of nodes where time dependence is a critical issue. With brains we're discussing a very specific type of computation.

I know very well that silicon chips are capable of doing real-time processing. That's why I gave a potted history of computers, indicating the difference between the kind of computing that took place on 1970's mainframes, and what goes on on a modern multi-media PC. To understand real-time computing, you have to have a different model.
 
So you are claiming that in any real world operation, there is "something" more fundamental than the interactions between particles, and that "something" can't be accounted for on a Turing machine.

No, I'm not.

Because otherwise, if you break a real world operation down into the smallest steps possible, you end up with an ordered sequence of events -- the period between each event (what you call time) being irrelevant when it comes to the behavior of the operation as long as the order is preserved.

No, it isn't. The period between operations is fundamental to almost everything the brain does, except think about abstract philosophical problems.

Of course, two things strike anyone reading this right off the bat.

First, there is no evidence for such a fundamental "something" beyond particle interactions.

Second, the idea that there is such a "something" has a name -- dualism. Not that anyone has ever accused you of being a dualist before ...

This argument is so confused that I can't even tell where the errors are.

Let me spell it out, in simple terms. A Turing machine cannot perform time-dependent operations - because time dependence is not part of the Turing model. The brain performs time-dependent operations. Hence the functionality of the brain cannot, in principle, be solely due to the operation of a Turing machine.

Of course, a real-life computer does perform timed operations. Almost any implementation of a Turing machine can, in practice, be used to respond within a given time. However, once we use a machine in such a way, we cannot look at its functionality as pure Turing machine functionality.

Of course, we might be able to model the brain, or any other real-time system on a Turing machine. What we can never do is model the brain as a Turing machine, because essential functions of the brain are non-Turing.

Trying to think about a real-time system as if it were a Turing machine is impossible. Hence the convolutions that Rocketdodger is going through to try to make it simply a matter of the order that actions take place in. Catching a ball is not a matter of what order one's brain instructs the arm to reach out and the hand to open. The hand has to be in the right place at the right time. Yes, a Turing machine could calculate where the hand should go - but that is only half of what the human brain does. The human brain can place the hand where it needs to be, when it needs to be. This functionality cannot be explained by thinking of the brain as a Turing machine.
 
Personally I’d say that Aku is substantially more conversant with the issues than Nescafe (not that I could claim to be a reliable arbiter of the issue).
Well, no, not even by your own standards, as we will see.

From what I can see, Nescafe doesn’t seem to think that it is necessary to actually be conversant with the issues in order to instantiate consciousness (maybe ‘something’ can be created computationally [as you say Nescafe, we can give it a try and see if it says ‘hi’]…what, exactly, it would be is another matter entirely).
No.

Aku responds, quite reasonably, by asking how the hell you can instantiate anything if you haven’t a clue what it is you are instantiating (and goes to quite some lengths to demonstrate the flaws in Nescafe’s positions)?
No.

It may seem somewhat….simplistic….in comparison to so much of the debate raging here so I’ll describe the question first.

…as Aku says, there’s a lot we don’t know about consciousness. I’ll just simplify that to we don’t know what it is.
You may not.

Even I could become ever more explicit but eventually I would have to conclude that I don’t know what it is that I’m trying to be explicit about….which is the point, why not?
Have you read Godel, Escher, Bach? If not, start there. If you have, read it again.

I think I mentioned this before but this question of ‘consciousness’ is singularly unique in all of science. Partly because of the quantitative and qualitative dimensions of the subject (consciousness….even Dawkins admitted that consciousness is a special case), but also because we’re not studying something ‘outside’ of us, we are studying us. Our own experience of our ability to examine our own experience (for one thing)….so to speak.

….so, to what degree does our ‘success’ in illuminating this issue implicate a dysfunctional condition?
Unstated major premise.

In other words….do we not ‘understand’ it because we don’t know what we’re talking about, or do we not ‘understand’ it because we don’t know how to know what we’re talking about?
Projection.

I’m not talking about how to instantiate consciousness, though that issue is related. I’m talking about our ability to recognize what we are. What is consciousness? It is our ability to ask the question, is it not…..and it is also our ability to answer the question.
Yes - or close. The ability to ask the question, am I conscious - and answer it.

Of course, this is computable, and almost trivially so. Which makes consciousness almost trivial. Which is what AkuManiManu and Westprog have been so ineffectively struggling against.

An example is always useful. A seven year old wrote the following statement: “destruction is finding being in matter”.
Meaningless.

Now, regardless of your interpretation of or agreement with the meaning of that phrase, it obviously describes an extremely sophisticated perspective, and implies a substantial insight.
No.

To what degree is ‘insight’ relevant in answering the question ‘what is consciousness’…and if it is relevant…how do you introduce ‘insight’ into what is essentially an empirical equation?
Define your terms, construct a valid hypothesis, devise an empirical test.

Just as we always do.

Just one final question. If you had to quantify the dimensions of what a human being is (metaphorically I suppose…a billion computations a moment, a brazilian cells all talking to each other at once, a biological entity of unrivaled realities, the most amazing machine ever created…
The human brain is less sophisticated than the internet by any measure, and by several orders of magnitude.

by whatever it is that could create something that wonders if there are things that can create amazing machines, a metaphorical mystery of mountainous meaning, neurologically mind-numbing….that kind of thing)….how might you do it?
Measurement.

….and a related question….how might you characterize ‘our’ understanding of our typical daily lives (as in….I brushed my teeth, I had sex, I argued with some dude who names himself after a jar of coffee about what it is that is actually arguing with him…typically prosaic but occasionally life brings you to your knees kind of thing).
I'd start by asking you what that is supposed to mean.

I apologize if my semantics is (are) a bit unclear. Answer if you feel so inclined.
Define your terms. Always.
 
Define your terms you say.

“Am I conscious?” You say yes. And you say that you not only know what human consciousness is, but that it is ‘almost trivial’.

No, actually, we don’t know what it is, we guess (or theorize, or speculate, or whatever). Period.

….and Pixy will reply…’Wrong, we do know what it is’

Oh yeah Pixy….and you will, therefore, be receiving the next Nobel prize.

….prove it (read on)!

Almost trivial you say….consciousness is almost trivial. Are you, therefore, almost trivial (since you are a consciousness)? Why not, then, take your trivial abilities and go out tomorrow and create one of these ‘almost trivial’ human beings. Anything that trivial shouldn’t take more than a couple of days to produce. I mean, god did it in seven days…and you’ve already said that you do, in fact, know who and what you are….like completely or are there any details missing….nah, we’ll just have to take you at your word. But how about we give you an extra day or two over god. He did it in seven, so we’ll give you nine (trivial would be seven days, with ‘almost’ I figure you’ll need an extra two). That’s mid next week. Can you create one of these trivial consciousness things by next week? As it’s so trivial, why don’t we put some money on it. How many people here are actually willing to put their money on Pixy, that he can create an ‘almost trivial consciousness’ by next week? Y’know what Pixy, I doubt you’d find anyone willing to drop a dime on you. Actually, if I gave you ten years, I still doubt there’s anyone who’d drop a dime on you.

So what do we have to do. Scientifically.

-first, we’ve got to define exactly what Pixy is going to create for us by next week. Something called consciousness….human consciousness, to be exact (the ‘almost trivial’ kind that Pixy so specifically referred to)
-in order to create it, we’ve got to define what it is
-but keep in mind, you’re not creating something new, you’re copying something that already exists (an ‘almost trivial’ human consciousness…as you put it), and before you can ever copy something you’ve got to define what you’re copying….or else you cannot say that you are copying it (copying what?)… INO….”an almost trivial human consciousness is this, exactly….and I will be creating exactly the same thing”…signed: Pixy (define your terms…always!)
-then, when we all know exactly what it is that Pixy is going to do (this will likely occur after Pixy receives the Nobel prize for having definitively and conclusively explained human consciousness)
-Pixy will go out and do it (having created a new life form, Pixy might have to take cover as various lunatics may believe he represents the second coming).
-one almost trivial consciousness coming up….by….let’s say the end of the month, that oughta be enough time to handle something that trivial

Sorry Pixy, if I’ve got to choose between you and Chomsky, his reputation speaks for itself. From what I can see, the folks at these forums are the only ones who’ve ever heard of you….and I really doubt that’s going to change.
 
Define your terms you say.

“Am I conscious?” You say yes. And you say that you not only know what human consciousness is, but that it is ‘almost trivial’.

No, actually, we don’t know what it is, we guess (or theorize, or speculate, or whatever). Period.

….and Pixy will reply…’Wrong, we do know what it is’
Correct.

Oh yeah Pixy….and you will, therefore, be receiving the next Nobel prize.
Not at all.

There is no Nobel Prize for defining things, nor was I the one to define it. There are Nobel Prizes for discovering how things work.

….prove it (read on)!

Almost trivial you say….consciousness is almost trivial.
Yep.

Are you, therefore, almost trivial (since you are a consciousness)?
I am not a consciousness; I am conscious.

I am a particularly complicated conscious information processing system.

Why not, then, take your trivial abilities and go out tomorrow and create one of these ‘almost trivial’ human beings.
You seem confused.

I regularly create conscious systems.

I have not yet, to my knowledge, created a human being.

Anything that trivial shouldn’t take more than a couple of days to produce.
Right.

I mean, god did it in seven days…and you’ve already said that you do, in fact, know who and what you are….like completely or are there any details missing….nah, we’ll just have to take you at your word.
Fair enough.

But how about we give you an extra day or two over god. He did it in seven, so we’ll give you nine (trivial would be seven days, with ‘almost’ I figure you’ll need an extra two). That’s mid next week. Can you create one of these trivial consciousness things by next week?
Sure. Actually, I think there's an example floating around these forums already.

As it’s so trivial, why don’t we put some money on it.
Why don't we? Because you never define your terms, is why.

How many people here are actually willing to put their money on Pixy, that he can create an ‘almost trivial consciousness’ by next week? Y’know what Pixy, I doubt you’d find anyone willing to drop a dime on you.
You'd be surprised.

Particularly since I can take a program already available on the 'net and simply offer that.

Actually, if I gave you ten years, I still doubt there’s anyone who’d drop a dime on you.
And again you'd be wrong - because you never thought to define your terms.

So what do we have to do. Scientifically. -first, we’ve got to define exactly what Pixy is going to create for us by next week.
Yes. Read back through the thread. Guess what? I have defined it, specifically, explicitly, repeatedly.

Something called consciousness….human consciousness, to be exact (the ‘almost trivial’ kind that Pixy so specifically referred to)
Whoever said human? Certainly not me. Nor you, in fact. Too late now, sorry.

-in order to create it, we’ve got to define what it is
Already done.

-but keep in mind, you’re not creating something new, you’re copying something that already exists (an ‘almost trivial’ human consciousness…as you put it), and before you can ever copy something you’ve got to define what you’re copying….or else you cannot say that you are copying it (copying what?)… INO….”an almost trivial human consciousness is this, exactly….and I will be creating exactly the same thing”…signed: Pixy (define your terms…always!)
Wrong, wrong, completely wrong, because not only did you not define your terms, you did not bother to read what I had written.

I said nothing, ever, about copying a human being, or indeed about human beings in general.

-then, when we all know exactly what it is that Pixy is going to do (this will likely occur after Pixy receives the Nobel prize for having definitively and conclusively explained human consciousness)
Explained? Did I ever say that? (Hint: No.) Can you explain Belgium? No. Can you find it on a map? (Hint: Next to France. No, the other France.)

-Pixy will go out and do it (having created a new life form
Life form?

Pixy might have to take cover as various lunatics may believe he represents the second coming).
Lunatics may believe what they like, of course.

-one almost trivial consciousness coming up….by….let’s say the end of the month, that oughta be enough time to handle something that trivial
Follow the link above. Trivial consciousness provided. Not that I expect you to pay anything, of course. I expect you to complain that it's not conscious.

Well, it is by my definition, and you never provided a definition of your own. So I win the bet by default.

Sorry Pixy, if I’ve got to choose between you and Chomsky, his reputation speaks for itself.
Indeed. His work in linguistics set the field back twenty years.

From what I can see, the folks at these forums are the only ones who’ve ever heard of you….and I really doubt that’s going to change.
Argument from personal incredulity, by way of popularity.
 
No, I'm not.

Yes, you are.

You are claiming that it is possible to change the behavior of a system without changing the order of events within the system.

Such a thing, if true, would imply that there is something more fundamental than the interactions between particles since the only way to maintain the order of events is to maintain the same interactions between the same particles.
 
We cannot answer a question unless we define the words in the question. We cannot answer ‘what is consciousness’ unless we define the word ‘consciousness’.

Does the word ‘consciousness’ have the same meaning as the word ‘human consciousness’?

So go ahead, define your terms.
 
Catching a ball is not a matter of what order one's brain instructs the arm to reach out and the hand to open. The hand has to be in the right place at the right time.

Oh, I see.

The fact that the hand has to be in position to catch the ball BEFORE THE BALL ARRIVES AT THAT LOCATION has nothing to do with "order," eh?
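The reduction being argued for here, time dependence becoming order dependence, can be sketched as a toy discrete-event simulation (hypothetical names throughout; this is an illustration, not anyone's claimed model of the brain). Timestamps are just data attached to events, and the simulator's only job is to process events in order.

```python
import heapq

def simulate(events):
    """Process (timestamp, label) events in time order and record
    whether the hand was in place when the ball arrived."""
    queue = list(events)
    heapq.heapify(queue)  # orders events by timestamp
    hand_at = None
    log = []
    while queue:
        t, label = heapq.heappop(queue)
        if label == "hand_in_place":
            hand_at = t
        elif label == "ball_arrives":
            # The catch succeeds only if the hand event was processed
            # before the ball event: a fact about ordering, with the
            # timestamps serving as ordinary input data.
            log.append(("catch" if hand_at is not None else "miss", t))
    return log

# The hand reaching position at t=0.8, before the ball at t=1.0,
# is represented purely by the order the simulator handles events.
print(simulate([(1.0, "ball_arrives"), (0.8, "hand_in_place")]))
```

Nothing here steps outside Turing computability; the "right place at the right time" constraint shows up as a constraint on event order.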
 
