Explain consciousness to the layman.

Status
Not open for further replies.
I'll join in pointing out that, if the thing you are emulating is computation, then the emulation is in fact the real thing.

Yes, clearly if consciousness is purely computational in nature, then one of its properties is that it can be emulated precisely. However, since the computational nature of consciousness is precisely what is in dispute, the claim has little weight.
 
If, however, we wish to model the operation of the computer in terms of its interactions in real time, the Turing model is not applicable, and we cannot draw conclusions about the behaviour of real-time systems by using it. It is possible to model the behaviour of real-time systems using different models.

Nope.

All digital systems are dependent upon discrete clock events driving their computation. There is no such thing as "event-based" digital calculation that only happens when some random thing happens in the external world. To the extent that "real time" events do anything, they simply set register values in various locations that the clock-driven logic simply accesses on a clock cycle. That is how digital logic works. Plain and simple. No exceptions. The CPU in your computer doesn't magically just drop everything as soon as a sensor sends it a signal. It only happens on a clock cycle.
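The register-sampling behaviour described above can be sketched in a few lines of Python. This is a toy model, not real hardware; the class and method names are invented for illustration:

```python
# Toy sketch of clock-driven digital logic: an external "sensor" may write
# the input register at any moment, but the logic only sees the value when
# it samples on a clock tick.

class ClockedSystem:
    def __init__(self):
        self.input_register = 0   # written asynchronously by the outside world
        self.state = 0            # internal state, updated only on clock edges

    def sensor_write(self, value):
        # Happens "whenever" in real time; the logic does not react immediately.
        self.input_register = value

    def clock_tick(self):
        # All computation is driven by this discrete event: sample, then update.
        self.state = self.state + self.input_register
        return self.state

dev = ClockedSystem()
dev.sensor_write(5)       # arrives between ticks; nothing happens yet
dev.sensor_write(7)       # overwrites the earlier value before any tick
print(dev.clock_tick())   # only now is an input seen: prints 7
```

Note that the write of 5 is simply lost: the logic never reacted to it, because no clock edge occurred while it was in the register.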

The implication of this is that since discrete events equally spaced in time *are* part of the Turing model -- in fact, the Turing machine is entirely based upon the notion that it reads and processes each tape segment as a discrete event -- all digital systems can be fully modeled by any other Turing equivalent process.
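The "discrete event per tape segment" picture is easy to make concrete. Here is a minimal, hypothetical Turing machine in Python: each step reads one cell as a discrete event, writes, moves, and changes state (this example machine flips every bit and halts at the first blank):

```python
# Minimal Turing machine sketch. The transition table maps
# (state, symbol) -> (symbol to write, head movement, next state).

def run_turing_machine(tape):
    delta = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
    }
    tape = list(tape) + ["_"]
    head, state = 0, "flip"
    while state != "halt":
        # One discrete step: read the cell, write, move, change state.
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run_turing_machine("0110"))  # prints 1001
```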

Note that this isn't even necessarily an issue because if certain physical postulates are correct the universe operates at a discrete timescale and spatial scale anyway, which would imply that all of known reality can be described using the "Turing" model of which you speak. But that doesn't have anything to do with the FACT that digital logic only takes place at discrete intervals.
 
Oh? I was not aware that *your* consciousness was connected with *my* neurons.

The fact that both of us can be conscious, yet we do not share neurons, immediately confirms that consciousness is independent of a specific physical substrate.

Both of us respire - yet we do not share lungs. However, respiration in both our cases is a well defined physical process. Respiration is dependent on a specific physical substrate - in this case, lungs. Any duplication of the physical process of respiration would have to involve a duplication of the essential elements of the lungs. It might use different means to achieve the same end, but would be restricted by the very nature of respiration.

The assertion about the nature of consciousness is that no physical restrictions apply. Any physical substrate is available, and any physical process can apply, with dimensions of space and time being entirely irrelevant.
 
No real-time system performing monitoring and control can be modelled as a Turing machine. (That is not to say that it cannot be emulated as a computation - an entirely different matter.) What is going on is not computation, and treating it as such is not useful or helpful. Many real-time control systems have a negligible computational element. Sometimes the response required is as simple as opening a valve when an indicator exceeds a particular value. Modelling such an interaction with a programming language - like PASCAL, say - which uses the Turing model is not possible. In order to perform such operations, languages need to add in features such as interrupts and pauses which are extraneous to that model. This also means - and this is the critical, essential element - that it is not possible to make assumptions about the behaviour of the real-time system based on reasoning using the Turing model.
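As a concrete rendering of the valve example, here is a sketch in Python with stand-in sensor and actuator functions (both hypothetical, since no real interfaces are specified here); the point is only what a single monitor-and-control step looks like as code:

```python
# One pass of a monitor-and-control loop: sample the indicator,
# compare against the setpoint, act if it is exceeded.

def control_step(read_indicator, open_valve, setpoint):
    value = read_indicator()
    if value > setpoint:
        open_valve()
        return True
    return False

# Usage with stand-in sensor/actuator:
readings = iter([3.0, 7.5])
actions = []
print(control_step(lambda: next(readings), lambda: actions.append("open"), 5.0))  # prints False
print(control_step(lambda: next(readings), lambda: actions.append("open"), 5.0))  # prints True
```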

You clearly haven't learned what actually happens in the microchips behind the systems you claim to have used.

Interrupts don't magically alter the timing of the digital system. Digital logic occurs in discrete vertical slices. That. Is. How. It. Works.

Even if an interrupt somehow drove a cascade of logic, maybe in an older system that wasn't synchronized to a central clock, it would still be a discrete event, the timing of which had no relevance to the timing of any other digital calculation. All that matters to the logic is the order of the cascades. Whether calculation B is halfway between A and C in time, or closer to A, or closer to C, is irrelevant. As long as it happens after A and before C the outcome is identical.
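The order-versus-timing point can be demonstrated directly: apply the same discrete updates in a given order and the result is fixed, whatever the imagined wall-clock gaps between them. A toy Python sketch, with invented updates A, B, C:

```python
# Each event is a pure state update fired as a discrete step. The timestamp
# slot exists but is never consulted: only the order of events matters.

def apply_events(initial, events):
    state = initial
    for _name, update in events:   # real-time spacing is irrelevant
        state = update(state)
    return state

A = ("A", lambda s: s + 1)
B = ("B", lambda s: s * 2)
C = ("C", lambda s: s - 3)

# Same order A, B, C, regardless of imagined spacing in time:
print(apply_events(0, [A, B, C]))  # prints -1
# A different order gives a different result:
print(apply_events(0, [A, C, B]))  # prints -4
```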

Kind of like ... the "Turing model."
 
Is this a joke? I hope so, because if it's a serious demand, that's just sad.

You see, if A is putting forward a specific claim, and B is saying that it's not well founded, then it's not - really, really not - an option for A to demand that B provide a scientific proof of an alternative to that specific claim. At the very least, the burden of scientific proof lies with the person putting forward a specific claim, not the person saying that a particular claim is unproven, and presenting alternatives as possibilities.

Not at all.

A scientific proof of your claim is as simple as showing, in some even remotely formal logical way, that while neurons can support consciousness, simulated neurons, for example, cannot.

I don't see the issue there: if you actually have reasons for your arguments, it should be easy to construct a formal structure from them.

I note, in passing, that there hasn't been the slightest indication of a scientific proof of the computational nature of consciousness.

Other than all the known research, yes, I agree. Not the slightest indication.

For example, the fact that every known internal function of the neuron can be emulated by computation is not the slightest indication.

Neither, for example, is the fact that every known interaction between multiple neurons can be emulated by computation.
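To make "emulated by computation" concrete, here is a sketch of a leaky integrate-and-fire neuron - a standard simplified neuron model - stepped forward with plain arithmetic. The parameter values are arbitrary, chosen only for illustration, and this is not a claim about which model any particular research uses:

```python
# Leaky integrate-and-fire neuron: membrane potential leaks each step,
# integrates the input current, and fires a spike (then resets) when
# it crosses the threshold.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current        # leak a little, then integrate input
        if v >= threshold:            # threshold crossing fires a spike
            spikes.append(1)
            v = 0.0                   # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # prints [0, 0, 1, 0, 0, 1]
```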
 
The assertion about the nature of consciousness is that no physical restrictions apply.

That is simply a lie.

The assertion is that there are physical restrictions limiting the substrate to something that can support computation of a certain complexity.

That is a very well defined restriction, despite your absurd claim that a bowl of soup can compute just as well as IBM's Watson supercomputer. The fact that you are unable to understand this restriction doesn't imply the restriction is not valid, any more than my lack of precise knowledge regarding quantum physics would invalidate the physical restrictions that define what a superconductor is.
 
Yes, clearly if consciousness is purely computational in nature, then one of its properties is that it can be emulated precisely. However, since the computational nature of consciousness is precisely what is in dispute, the claim has little weight.

I don't see why it's in dispute, and you've done a pretty poor job of explaining it.
 
Both of us respire - yet we do not share lungs. However, respiration in both our cases is a well defined physical process. Respiration is dependent on a specific physical substrate - in this case, lungs. Any duplication of the physical process of respiration would have to involve a duplication of the essential elements of the lungs.

But not necessarily biological lungs, I'd add.

The assertion about the nature of consciousness is that no physical restrictions apply. Any physical substrate is available, and any physical process can apply, with dimensions of space and time being entirely irrelevant.

I don't think anyone is saying that. The contention that a biological brain is required, however, has been uttered more than once.
 
I don't see why it's in dispute,

Because it's an unproven hypothesis. Some very clever, educated people think the hypothesis is true, and some other very clever, educated people think it's false or unproven.

and you've done a pretty poor job of explaining it.

Maybe you've done a pretty poor job of understanding it.
 
But not necessarily biological lungs, I'd add.

One could produce an overly-restrictive definition of respiration that would require biological lungs, but that would lead to less rather than more understanding of the phenomenon.

I don't think anyone is saying that. The contention that a biological brain is required, however, has been uttered more than once.

By whom? On this thread?
 
The computational theories claim that consciousness is entirely independent of any possible physical substrate. Consciousness would be created by computation done with colliding asteroids or packs of cards in exactly the same way. There is nothing happening in the brain which creates consciousness which is inherent to it.

This is obviously entirely different to any other biological process, and is clearly not physical in the same sense as, say, respiration.

The alternative to this is to assume that consciousness is tied to some specific action of the brain, in the same way that respiration is tied to the passage of oxygen atoms through the lungs.

Good to see your knowledge of respiration is on par with your knowledge of computation.

Fish respire. Insects respire. Plants respire. None of these have lungs.
 
I'll join westprog in pointing out that emulation on a computer does not equal, or even imply, replication vis-a-vis consciousness or any IRL event.

Here is a thought experiment I once wrote up for another thread, that I think is relevant here. ( http://www.internationalskeptics.com/forums/showthread.php?p=7639678 ) Unfortunately, it will take a few paragraphs to explain.

You can replace the word "sentience" with "consciousness" or "qualia" or whatever other term you need.


Let us suppose that scientists actually discover what sentience actually is: The whole process by which it happens, etc. For this exercise, the exact details are not important for us. But, we can feel free to speculate on a general approach: There might be a mechanism by which small amounts of chaos are introduced into a neural network, allowing for fleeting moments of independent action (seemingly random to outside parties). That, plus other well-studied ingredients might all be what is necessary for a complete working sentient entity. I will call it the MB-Trouble Algorithm, since it was inspired by the pop-up dice dome in that board game. It is important to emphasize that ALL of the details are known in the universe of this experiment, even if they are not all known to us readers.

In the same universe as this thought experiment, there are humans who (for whatever reason) volunteered to allow their brains to be manipulated, to test the MB-Trouble model. The exact physiological structures are discovered, and messed with in the lab. And, every time it happens: Sentience fails in that human in precisely the way the model would predict. Pulling on one thing causes them to act more like a chat bot, with no actual understanding of what is going on. Pushing on another causes them to behave more like a Chinese room: They might have an internal understanding of things, but much of their communication is clearly done without it. We can assume the experimental protocols are solid. So, they know it is a good model that applies to human sentience. (See my ideas for testing sentience in a prior post.)

Now comes the grand day, when they simulate this MB-Trouble Algorithm in a computer system. Remember that they are ONLY building a MODEL of MB-Trouble. They are not emulating every single molecule of every single cell of every single neural process, etc. They aim only to simulate its principal ingredients: an abstraction of the chaos-inducer, plus all the other necessary ingredients I could not actually name, yet.

And, in this universe, the Algorithm works as predicted: The simulation is able to pass any and all tests for sentience you could name: Turing tests, mirror-recognition tests, novel problem solving skills, random number choosing, etc. And, of course, when the simulation happens to be broken in one spot, it fails the same way humans did, when that part was broken in their brains.

In this thought experiment there is NO DOUBT we have simulated the VERY THING that makes "understanding", "meaning", "semantics", "sentience", "qualia", "consciousness" and "strong intelligence" actually happen.

What, then, is the difference between this simulation of sentience, and that which is found in natural, organic humans?!



It's also getting laughable to claim that any Turing machine can do it, while admitting Turing machines do not and cannot exist; yes, universal computing machines are an IRL implementation of a theoretical Turing machine.
I know Universal Computing Machines are an implementation of Turing Machines. (To me, the words are practically synonymous.) But, since other people took issue with the word "Turing", and this thread isn't about "Turing", I decided to pick my battles and move on to other things to say about the topic of the opening post.
 
Note that this isn't even necessarily an issue because if certain physical postulates are correct the universe operates at a discrete timescale and spatial scale anyway, which would imply that all of known reality can be described using the "Turing" model of which you speak.

That is simply a lie.

The assertion is that there are physical restrictions limiting the substrate to something that can support computation of a certain complexity.

That is a very well defined restriction, despite your absurd claim that a bowl of soup can compute just as well as IBM's Watson supercomputer. The fact that you are unable to understand this restriction doesn't imply the restriction is not valid, any more than my lack of precise knowledge regarding quantum physics would invalidate the physical restrictions that define what a superconductor is.

Is it just me, or are there two contrary assertions here?

Yes, it would be possible to at least try to model a bowl of soup as a Turing machine, but it would be a pointless thing to do. The model is inappropriate and unhelpful. The fact that one can apply a model doesn't mean that we learn anything by doing so.
 
What, then, is the difference between this simulation of sentience, and that which is found in natural, organic humans?!

Whether or not such a system would have the same subjective experience as a human being is still open - but it would certainly be evidence in its favour. Would it also be evidence if such a project were undertaken, and despite huge resources being allocated, it failed to pass the assorted tests for sentience? Would such a result affect anyone's confidence that sentience was in fact computational?
 
Good to see your knowledge of respiration is on par with your knowledge of computation.

Fish respire. Insects respire. Plants respire. None of these have lungs.

Which of course in no way invalidates the analogy, but I'm sure it gives you a nice well-informed glow. "Hey, I made an irrelevant point on the Internet and threw in a gratuitous insult." "High five, dude!"
 
Is it just me, or are there two contrary assertions here?

A system modeled/described/emulated by computations is not a system that is itself performing computations.

No more than you are performing particle interactions because the particles that make you up are interacting.

Yes, it would be possible to at least try to model a bowl of soup as a Turing machine, but it would be a pointless thing to do. The model is inappropriate and unhelpful. The fact that one can apply a model doesn't mean that we learn anything by doing so.

Except if the model itself has something to say about it. Which is rather the point in this context -- if a model of a consciousness is claiming it is conscious, what do you do?
 
Whether or not such a system would have the same subjective experience as a human being is still open - but it would certainly be evidence in its favour. Would it also be evidence if such a project were undertaken, and despite huge resources being allocated, it failed to pass the assorted tests for sentience? Would such a result affect anyone's confidence that sentience was in fact computational?

Of course.

But that hasn't happened yet.

You seem to think the rather pathetic attempts of the last 30 years constitute "huge resources."

Here is a hint: despite what people may tell the press, no computer science researcher has thus far seriously considered any of these attempts to be even remotely close to what is needed to actually do the job.
 
Would it also be evidence if such a project were undertaken, and despite huge resources being allocated, it failed to pass the assorted tests for sentience? Would such a result affect anyone's confidence that sentience was in fact computational?
It would certainly cause everyone in that Universe to rethink the idea, at the very least.

The reasons for the failure would have to be ironed out. If it turns out that they discover some aspect of sentience that can never be computed (which seems absurd, but we can roll with it for the sake of argument), then they would have to announce, definitively, that sentience could not be computational.

But, unless such a thing were found, I imagine the people in that Universe would continue marching on, trying to figure out how to get the computations to work.
 
Pebbles are not BIOLOGICAL things and thus they require a motivator. Biological things move and grow and are ACTIVE PROCESSES in and of themselves (due to active chemical and electrical engines).

A human brain is a HUMONGOUS biological PROCESS that acts and reacts on its intertwined parts with side-effects and due to muscles there are also side-effects on the environment outside the brain bundle which in turn cause environmental changes that cause effects and side-effects on the brain bundle.

These POSITIVE and NEGATIVE FEEDBACK effects can cause cascading and diverging as well as stable and unstable loops and sub-loops and the whole thing becomes a mess of DYNAMIC CONSTRAINTS.
Yup.

So the human brain does NOT require a MOTIVATOR let alone one with a consciousness for us to inherit from.
I did not mention the human brain; I mentioned a sentient entity. Please re-read what you responded to. I would appreciate an on-point answer as to where in the pebble scenario consciousness is found, since I'm confident we agree pebbles are neither sentient nor conscious and in fact will never become so.
 