Explain consciousness to the layman.

Status
Not open for further replies.
I'm sure that there are some people claiming that non-biological consciousness is impossible - but I've never claimed it, you've never claimed it, and Leumas hasn't - but that's the argument that our opponents like to rebut. I wonder why?

Well, if you're in the bio faction, you can't be in a position to make such a claim. It would be contradictory to the neurobio approach and findings.
 
None of that has anything to do with consciousness, dodger.

None of those functions requires consciousness, and the people building the brain sims are not claiming that the resulting sims would be conscious.

And my phone "understands" what I'm saying in the same way that my door knob understands what my key is doing. This also has nothing at all to do with consciousness.

The iPhone does what all voice analysis software does - it pattern matches the digitized (ADC) input and performs a corresponding action. Pattern matching is what Colossus did, and that wasn't even a general purpose computer.

Saying that the iPhone "understands" what is said to it is - well, I don't want to trigger another thread purge - similar to any other attribution of human qualities to inanimate objects. Teddy doesn't love you, I'm afraid.

Computers are doing useful, incredible things - but what they do remains computation. They can crunch numbers for scientists, and do it faster every year, but can they compose a sonnet or come up with an original idea?
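The template-style pattern matching described above can be sketched roughly like this (a toy illustration with made-up templates and actions; real speech recognizers use far more sophisticated statistical models):

```python
# Toy sketch of template pattern matching: digitized input is compared
# against stored templates, and the best match triggers an action.
# (Hypothetical templates and commands; purely illustrative.)
import math

def similarity(a, b):
    """Cosine similarity between two equal-length sample vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_match(samples, templates):
    """Return the template name whose samples most resemble the input."""
    return max(templates, key=lambda name: similarity(samples, templates[name]))

# Hypothetical "digitized" waveforms for two voice commands.
templates = {
    "call": [0.9, 0.1, -0.8, 0.2, 0.7],
    "text": [-0.5, 0.6, 0.4, -0.9, 0.1],
}

noisy_input = [0.8, 0.2, -0.7, 0.1, 0.6]  # resembles the "call" template
print(best_match(noisy_input, templates))  # → call
```

Note that nothing here involves understanding in any rich sense: the program simply picks whichever stored pattern the input most resembles.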
 
None of that has anything to do with consciousness, dodger.

None of those functions requires consciousness, and the people building the brain sims are not claiming that the resulting sims would be conscious.

And my phone "understands" what I'm saying in the same way that my door knob understands what my key is doing. This also has nothing at all to do with consciousness.

I didn't claim they had anything to do with consciousness.

I was specifically responding to westprog's outdated assertions about the capability of computers to perform tasks once thought limited to humans.
 
The important thing about relativity as it relates to this discussion is that systems operate according to their own perspective. The way a lathe operates is due to the fact that all the pieces and particles in it are in the same location, moving at the same speeds (relativistically speaking). If we were to distort the machine in its own frame of reference then it would stop working. (And that the machine has a frame of reference does not imply that it has "experiences").

Yet, you have been claiming from day 1 that software could not be conscious unless it interacts with an external frame of reference.
 
I understand it perfectly.

But if you think a vaporized truck that's spread across the galaxy can still haul brush, or an arm that's vaporized and spread across the galaxy can still lift a cup of coffee, you've taken leave of your senses.

It doesn't matter that "the interactions between the particles remain functionally equivalent".

Once you spread my exhaust system's particles across the galaxy, they cease to be usable for the work they used to do. Ditto for the spark plugs and pistons.

That's because they no longer form the object which they used to form which was able to do the work it was able to do in the world.

The fact that the particles are still dancing in relation to each other by magic doesn't change that.

ETA: I'll give you a choice... I'll either fire a bullet at your chest with my Ruger, or I'll fire a vaporized bullet at your chest (consisting of particles spread across light years which are behaving, relative to each other, the way the particles in the non-vaporized bullet are behaving). Do you have a preference?

I bolded the sentence that you seem to be misfiring on, piggy.

I clearly said that the particles are "still dancing" in relation to each other and any external particles that interact with them.

Do you understand now? The arm *can* still lift a cup of coffee because the machine makes sure the particles of the coffee cup interact with the now distributed particles of the arm just like they did before.

If you don't want to accept this hypothetical, then fine, but please at least fully understand it first.
 
What happens at near-light speed is simply irrelevant to a discussion of what will happen to an object if you vaporize it.

It's not a question of looking at one type of "gross deformity" and reasoning from there to an entirely different type of deformity.

From the perspective of an observer, your body moving at 0.99c will be distorted. Distorted to the point of it appearing like a "magical machine" was keeping the particles functioning normally -- very similar to what I am talking about.

You don't agree? What part of the above statement do you disagree with?
 
The iPhone does what all voice analysis software does - it pattern matches the digitized (ADC) input and performs a corresponding action. Pattern matching is what Colossus did, and that wasn't even a general purpose computer.

Pattern matching is also what a large number of biological neural networks -- like, many of the ones in your brain -- do.

So, what was your point?
 
I didn't claim they had anything to do with consciousness.

I was specifically responding to westprog's outdated assertions about the capability of computers to perform tasks once thought limited to humans.

Which is irrelevant to the discussion of consciousness.

I'm sure that people once doubted that a machine could be built which blows up entire cities, or flies to the moon... but I doubt you'll be able to achieve either of those things via programming alone. And in any case, this is not an argument in favor of one specific type of machine (a computer) being able to do one specific task (performing experience) which one specific organ of the human body (the brain) performs.

This is basically just a "well, surprising things have happened before" argument.
 
I bolded the sentence that you seem to be misfiring on, piggy.

I clearly said that the particles are "still dancing" in relation to each other and any external particles that interact with them.

Do you understand now? The arm *can* still lift a cup of coffee because the machine makes sure the particles of the coffee cup interact with the now distributed particles of the arm just like they did before.

If you don't want to accept this hypothetical, then fine, but please at least fully understand it first.

So... with your magical machine... I vaporize my truck as it's en route to the composter, and although it vanishes -- and in fact, the space it occupied is now literally a void because particles cannot move in to fill the space it left -- the load of brush keeps moving down the road, and light keeps reflecting off the void as if the truck were there, and the void emits sounds as if the truck were there, and if the void runs over a possum, the possum dies....

At this point, I have to ask you what in the world you're getting at with all this.

Why not instead simply send the truck or the brain -- without passing the event horizon or collecting $200 -- intact into a black hole, leaving a void in its place and propose a magic machine that allows it to interact in the way you're talking about?

What do you think this thought experiment illustrates?

What do you think we'll be left with once we remove the magic machine from the picture?
 
From the perspective of an observer, your body moving at 0.99c will be distorted. Distorted to the point of it appearing like a "magical machine" was keeping the particles functioning normally -- very similar to what I am talking about.

You don't agree? What part of the above statement do you disagree with?

What part of the above statement is relevant to what we're talking about?
 
Pattern matching is also what a large number of biological neural networks -- like, many of the ones in your brain -- do.

So, what was your point?

The point is that most of what the brain does is not involved in performing experience, so merely drawing parallels to brain processes doesn't necessarily mean you're drawing parallels to consciousness.
 
Which is irrelevant to the discussion of consciousness.

It is entirely relevant to a discussion about the progress made in A.I. over the last few decades, which happens to be one of the discussions interleaved in this thread.
 
The point is that most of what the brain does is not involved in performing experience, so merely drawing parallels to brain processes doesn't necessarily mean you're drawing parallels to consciousness.

Man, I really feel like Walter from "The Big Lebowski" speaking to Donnie when I argue with you, piggy.

Case in point, "you're out of your element." "You have no frame of reference."

I made a specific response to a specific post of westprog's, in reference to an earlier set of posts, the context of which is simply how many tasks computers now perform that only a few years ago people thought they never would.

If an iPhone pattern matches in order to talk back to people, and an argument uses this fact to suggest that the iPhone in fact doesn't perform a task that only a few years ago people thought computers would never do, then such an argument is invalid -- because people pattern match as well.

I accept the argument that the iPhone doesn't feature all the elements of human consciousness, but that was never the issue in contention.

Furthermore, is the remainder of this thread gonna be little more than you basically championing westprog's posts? That is getting really old, FYI.
 
What do you think this thought experiment illustrates?

I am trying to find a common ground with you regarding some fundamental concepts.

Do you or do you not accept the premise and the implications thus far discussed?
 
That seems to be the gist of your argument - that we've claimed in the past, that certain things are unique to human intelligence - and they turned out not to be.
Yes, that is the gist of my argument. I objected to your original misrepresentation of it.

The point I was making is that generally, the things that computers turn out to be good at were always recognised as being suitable for machines. It was always credible that a machine could play chess. The game is inherently digital in nature, it has consistent and well defined rules - it almost demands the idea that a perfect strategy exists.
Yes, there were always people who saw the potential, but their views were not widely accepted at the time.

The things that computers are bad at but humans are good at - we are still waiting for computers to get good at them, with no particular reason to assume that they will.
It's 'just' a question of finding the best approach and having the resources to implement it.

yet they devoted the resources to getting a chess playing program written, rather than a program to carry out a conversation.
Are you aware of what Watson is? It's not a chess program.

Of course there are doubts in any engineering project, but if they told their managers "I don't know why we're working on this, it's clearly impossible" I'd be surprised.
That's close to how they describe it. The idea was to see how far they could get. They exceeded all expectations.

That's very interesting. In spite of thousands of man years of work trying to get a program that could figure out what a particular phrase meant, Google decided that the only way was brute force sampling. Semantic and grammatical analysis - actual machine understanding - was a dead end. This is in spite of it being a Holy Grail for many, many years. Computers were going to stop trying to understand language, and just pass it on.
It works. That's the criterion evolution uses. There is a place for finesse, and a place for brute force sampling. They'll introduce (if they haven't already) syntax & grammar based parsing to resolve the ambiguities. The brain is an efficient parallel-processing pattern matching machine; it does similar brute-force pattern matching on language input. Do you really think it could get by just by parsing the grammar, syntax, and semantics of what you read? The errors it makes ought to tell you otherwise.
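The brute-force sampling approach can be sketched in miniature: instead of parsing grammar, count how often each candidate phrase occurs in a corpus and prefer the more frequent one (a toy example with a hypothetical three-sentence corpus; real systems use web-scale data and far richer models):

```python
# Toy sketch of brute-force statistical disambiguation: no grammar
# parsing, just raw frequency counts over a corpus.
# (Hypothetical miniature corpus, for illustration only.)
from collections import Counter

corpus = (
    "i want to see the new film . "
    "we went to see a film last night . "
    "she wants to sea the film"  # one rare misuse of "sea"
).split()

# Count every adjacent word pair (bigram) in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def prefer(candidates, context_word):
    """Pick the candidate that most often follows context_word."""
    return max(candidates, key=lambda w: bigrams[(context_word, w)])

# Disambiguating "see" vs "sea" after "to" by frequency alone:
print(prefer(["see", "sea"], "to"))  # → see
```

The point of the sketch is that frequency alone settles the ambiguity; no semantic or grammatical analysis is performed anywhere.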
 
It makes the simulation real. Obviously it's doing something real.

Sorry, I really don't know what you mean by this. How does the involvement of a conscious mind make the simulation real? It's a real program running on real hardware. Suppose that when they test it, they feed either recorded input from trials with a human user, or they feed it input from a hand-assembled sequence, or they feed it a random input sequence, or they feed it input from an AI built for that purpose, or a junior programmer feeds in input from a script, or a junior programmer exercises the interface as he/she sees fit... which of those makes it 'real'?

I just can't see what you mean by 'real' in this context. Please explain.
 
And what makes those states representative, apart from the presence of a conscious mind?
They are representative of some stimulus to be processed. Consciousness is irrelevant at this level. Input signals are translated to output signals.

We've been over all this already. A cockroach brain doesn't need a conscious mind to process sensory input and produce cockroach behaviours, and a microprocessor doesn't need a conscious mind to control a mechanical cockroach.
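The "input signals are translated to output signals" point can be made concrete with a minimal sketch of such a controller (hypothetical sensor and behaviour names, chosen for illustration):

```python
# Minimal sketch of a stimulus-response controller: sensory input is
# mapped to behaviour by a plain lookup of conditions, with no conscious
# mind anywhere in the loop. (Hypothetical sensors/behaviours.)

def cockroach_controller(sensors):
    """Translate sensor readings into a behaviour, purely mechanically."""
    if sensors.get("light", 0) > 0.5:
        return "scurry_to_dark"
    if sensors.get("antenna_contact"):
        return "turn_away"
    return "wander"

print(cockroach_controller({"light": 0.9}))          # → scurry_to_dark
print(cockroach_controller({"antenna_contact": 1}))  # → turn_away
```

Whatever one thinks consciousness is, nothing in a mapping like this requires it: the same input always yields the same output.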
 
It seems that dlorde is using an idiosyncratic definition of consciousness which, like PixyMisa's, does not actually distinguish consciousness from non-conscious activity in the brain.
My post concerned the historic development of certain computer systems in relation to what was deemed to constitute intelligence at the time, and how the lessons learned might relate to the future development of potentially conscious computer systems.

If you somehow read that as providing an 'idiosyncratic definition of consciousness which, ... does not actually distinguish consciousness from non-conscious activity in the brain', then I suggest you urgently seek medication.
 