
Explain consciousness to the layman.

Status
Not open for further replies.
Any particular thought of a brain in a vat cannot be about anything at all, of course.

Well, that is one way to look at it.

The other way is to re-think what your notion of "about" means.

If your worldview depends on us not being in a simulation, and there is no way to prove we are not in a simulation, then your worldview rests on an unverifiable premise.

For example, when westprog was asked about this, his reply was "if we are in a simulation, then we are not really conscious." I don't understand the utility in a viewpoint that places us among "really conscious" or "not really conscious" depending on circumstances beyond our control when it is clear to all of us that we are "conscious" regardless.
 
"external" is relative.
External to our 'I-ness' 'me-ness' 'mine-ness' vs not I-me-mine.

So all you need is a computer that makes that differentiation, either in hard code or in its primary neural net, and additional neural-net training can proceed. IMO verbalization should be the first I/O tackled and trained once the computer's 'I-ness' has been established, with sight, sound, touch, and perhaps smell and taste to follow.
 
Last edited:
It is exactly the same argument. Penrose/Lucas argue that since humans have access to mathematical truth that cannot be gleaned using any algorithm, our brains must not follow an algorithm.

And it fails in exactly the same way -- we don't have intuitive access to mathematical truth, we have access to what we think is mathematical truth, and the latter can certainly be gleaned using an algorithm.

Yes but don't confuse a demonstration/consequence with a proof. How can a computer/algorithm come up with new proofs? You can't have an algorithm for coming up with a proof, only one that provides a demonstration or shows a consequence.

Maths is a broad church. Computers are good (much better than us) for calculating. They're not good at coming up with new proofs.
 
External to our 'I-ness' 'me-ness' 'mine-ness' vs not I-me-mine.

So all you need is a computer that makes that differentiation, either in hard code or in its primary neural net, and additional neural-net training can proceed. IMO verbalization should be the first I/O tackled and trained once the computer's 'I-ness' has been established, with sight, sound, touch, and perhaps smell and taste to follow.
As an interesting addendum ...

http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence

Can a machine have emotions?

AE - or Artificial Emotion - is an underlying aspect of the concept of AI that has generally been granted without being seriously examined either philosophically or ontologically. The evolution of sensation in the animal world is the root of what are called "emotions" or, more nebulously, "feelings". Without a basis in organic neural sensation, any synthetic/artificial/android/mechanical AI construct will be unable to "feel" anything, or register the foundational sensitivity from which "emotions" arise: the mind's registration of changes in the environment -- heat or cold, wind or stillness, noise or quiet, threatening touch as opposed to tickling, ad infinitum.

The simulation of "emotions" can be managed superficially, but any synthetic/artificial/android/mechanical AI construct programmed to falsify a smile or tears or any other "emotion" will have no "feeling", as such, regarding these "behaviors", since there is no nervous system at work and no evolution of instinctual "affects" to ground the pseudo-epiphenomenon arranged by its programmers to mimic human "emotions".

AE is a more dangerous and unsettling aspect of the field of AI than the ongoing uneasy concern with the "threat" inherent in growing machine "intelligence", because AE is a simpler route to co-opting human empathy and gaining a foothold in the world. The human species has been primed for this "imaginary friend" prospect through millennia of children playing with dolls and attributing "emotions" to them in fantasy relationships. Similarly, AI constructs made to seem "personal" or "caring" can bypass the natural skeptical filters of our species, when confronting synthetic "creatures", more easily than AIs trying to simulate "intelligence". Once AI constructs gain more and more "emotional" value to people - as "para-pals", "caregivers", "sexbots", etc. - it will be harder to control their spread through society and to rein in their infiltration of the human realm. Once in place, and once so "valued" affectively, if they then developed anything approaching "intelligence", even at the insect level, their danger as a new competitor in the world would become apparent.

Hans Moravec believes that "robots in general will be quite emotional about being nice people"[59] and describes emotions in terms of the behaviors they cause. Fear is a source of urgency. Empathy is a necessary component of good human computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love."[59] Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."[60]

The question of whether the machine actually feels an emotion, or whether it merely acts as if feeling an emotion is the philosophical question, "can a machine be conscious?" in another form.[38]
 
Yes but don't confuse a demonstration/consequence with a proof. How can a computer/algorithm come up with new proofs? You can't have an algorithm for coming up with a proof, only one that provides a demonstration or shows a consequence.

Maths is a broad church. Computers are good (much better than us) for calculating. They're not good at coming up with new proofs.

The use of computers in mathematics has turned out to be extremely limited. The kind of reasoning that mathematicians do is not especially helped by computerisation.
 
Yes but don't confuse a demonstration/consequence with a proof. How can a computer/algorithm come up with new proofs? You can't have an algorithm for coming up with a proof, only one that provides a demonstration or shows a consequence.

Maths is a broad church. Computers are good (much better than us) for calculating. They're not good at coming up with new proofs.

Nonsense.

I can write an algorithm that is capable of generating the same type of proofs that high school geometry students come up with on tests. I can write it right now. And it would generate proofs just as good as those of most high school students.

I say that because I know for a fact that the sequence of steps I would go through in those classes was algorithmic -- I had an established set of truths that I already knew, I had a goal, and I tried to figure out how to reach that goal using a sequence of those established truths, possibly generating new intermediate truths as I went.

In fact a computer would do exceptionally well at such a task, because it is just searching through a tree of logical steps, and when you get to the goal you are done. A basic shortest-paths algorithm with a suitable "distance" metric would totally do it. And you could do it just like any other path searching task -- start at the beginning and go towards the goal, start at the goal and work backwards, or start at both and work in each direction simultaneously.
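The search described above can be sketched in a few lines. This is only a toy under invented assumptions: the fact names and rules below are placeholders, not a real geometry axiom system, but it shows the idea of chaining forward from established truths toward a goal, recording each step as the proof.

```python
# Toy forward-chaining proof search. The facts and rules below are
# invented placeholders, not a real geometry axiom system.
RULES = [
    (frozenset({"A", "B"}), "C"),   # from A and B, conclude C
    (frozenset({"C"}), "D"),        # from C, conclude D
    (frozenset({"B", "D"}), "E"),   # from B and D, conclude E
]

def find_proof(axioms, goal):
    """Search forward from the known truths toward the goal,
    recording which rule produced each new fact."""
    known = set(axioms)
    proof = []                      # sequence of (premises, conclusion) steps
    changed = True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in RULES:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                proof.append((sorted(premises), conclusion))
                changed = True
    return proof if goal in known else None

for premises, conclusion in find_proof({"A", "B"}, "E"):
    print(premises, "=>", conclusion)
```

Working backwards from the goal, or from both ends at once, is the same search with the rule direction reversed; the "distance" metric mentioned above would just decide which rule to try first.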

When you get into the stuff that grad students and professors come up with, well, that is much more complex, but I fail to see how it is fundamentally different. What evidence do you have that the "intuition" of math experts and geniuses is anything more than such a strong understanding of the "components" you can use to build a proof that it comes almost subconsciously to them? There is no evidence that such people don't generate proofs in fundamentally the same way as high school students.

The fact is, there is zero proof that humans are capable of just instantly coming up with a complex proof through nothing but intuition -- despite what people like Penrose seem to think. I have never bought into that. To the extent that "intuition" plays any role in mathematical thinking, I suspect it is more a way to change direction than something that instantly gets you where you are going.
 
Last edited:
The use of computers in mathematics has turned out to be extremely limited. The kind of reasoning that mathematicians do is not especially helped by computerisation.

Not entirely correct.

The level of reasoning that mathematicians are capable of has not been "computerized" yet.

However a computer can certainly find you a route from point A to point B faster than any person, and if you have evidence that mathematical reasoning is somehow a different process than routing I would love to hear it. So I doubt you will be able to make the same statement in 50 years.
 
Not entirely correct.

The level of reasoning that mathematicians are capable of has not been "computerized" yet.

However a computer can certainly find you a route from point A to point B faster than any person, and if you have evidence that mathematical reasoning is somehow a different process than routing I would love to hear it. So I doubt you will be able to make the same statement in 50 years.

You don't seem to get it, do you?

Maths was invented by humans.

Talk about believing in magic beans.
 
Half correct.

There may be a way to determine that we are brains in vats, at least as far as we can determine anything else about our reality. A technician running the vat lab could certainly try to convince us, by communicating like God from the heavens, or even by hooking our perception up to a camera in the vat lab and saying "see, this is you, just a brain."

But there is no way to confirm that we are not brains in vats.

As a corollary, if we figured out that we were indeed brains in vats, we could not confirm that the vat lab was not also just the imagination of another brain in a vat, and so on and so forth.

In terms of simulations, even if you could "escape" to the external reality, there is no way to be sure that you are at the very top level, i.e. the "true" reality.

Note that this doesn't change anything with regard to what we know of reality, it is just philosophical musing.

You have just given a close analogy to spiritual anthropogenesis, in which the soul is the brain in the vat and what we perceive and experience as our physical body is the simulated world the soul experiences.
 
Last edited:
Nonsense.

I can write an algorithm that is capable of generating the same type of proofs that high school geometry students come up with on tests. I can write it right now. And it would generate proofs just as good as those of most high school students.

I say that because I know for a fact that the sequence of steps I would go through in those classes was algorithmic -- I had an established set of truths that I already knew, I had a goal, and I tried to figure out how to reach that goal using a sequence of those established truths, possibly generating new intermediate truths as I went.

That's because you are using the term 'proof' incorrectly. The proof is in deciding how to generate the algorithm, and the computer doesn't decide that. A computer could not invent a proof that there are infinitely many primes, for example. It's not an autopoietic, evolving system.
 
Not entirely correct.

The level of reasoning that mathematicians are capable of has not been "computerized" yet.

However a computer can certainly find you a route from point A to point B faster than any person, and if you have evidence that mathematical reasoning is somehow a different process than routing I would love to hear it. So I doubt you will be able to make the same statement in 50 years.

Calculation is not mathematics just as a dictionary is not language.
 
The use of computers in mathematics has turned out to be extremely limited. The kind of reasoning that mathematicians do is not especially helped by computerisation.


Indeed. Deep Blue and Dr. Fill may do well on calculating tasks, but not at generating new knowledge. Intelligence is not imagination.
 
That's because you are using the term 'proof' incorrectly. The proof is in deciding how to generate the algorithm, and the computer doesn't decide that. A computer could not invent a proof that there are infinitely many primes, for example. It's not an autopoietic, evolving system.
At least no one has yet proposed that. I'm waiting for it though; come on, Pixy.
 
You don't seem to get it, do you?

Maths was invented by humans.

Talk about believing in magic beans.

Ok, but do you fully understand what that means?

Mathematics is just a language we came up with to describe classes of behavior we observe the entities around us exhibiting.

Regardless of the language, that behavior is the same, and any entity capable of generating symbols to represent equivalence classes of percepts is capable of recognizing it as such.

The fact that in our mathematics a triangle's exterior angles sum to 360° is one and the same fact that when a real entity follows a path that we could imagine was specified by the edges of a triangle, it will arrive back at its original facing -- every time. You don't need to know anything about mathematics to observe this. That is just how reality works, and we certainly didn't invent that.

Of course that is irrelevant to the discussion at hand, because there is no computer capable of connecting those dots like we have. However, there are computers capable of connecting a few very fundamental dots, and I don't see why connecting lots of dots is qualitatively different from connecting a few dots.
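The triangle observation above can even be checked numerically. A minimal sketch, with made-up coordinates: walk a closed triangular path and sum the signed turn at each corner, and for any simple triangle the total turning comes out to 360°.

```python
import math

def total_turning(vertices):
    """Sum of the signed turn angles walking once around a closed polygon, in degrees."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        # heading before and after the turn at vertex (bx, by)
        h1 = math.atan2(by - ay, bx - ax)
        h2 = math.atan2(cy - by, cx - bx)
        # signed turn, wrapped into (-pi, pi]
        total += (h2 - h1 + math.pi) % (2 * math.pi) - math.pi
    return math.degrees(total)

# approximately 360 for any simple triangle traversed counterclockwise
print(total_turning([(0, 0), (4, 0), (1, 3)]))
```

Clockwise traversal gives -360°; the magnitude is what corresponds to "arriving back at your original facing."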
 
That's because you are using the term 'proof' incorrectly. The proof is in deciding how to generate the algorithm, and the computer doesn't decide that. A computer could not invent a proof that there are infinitely many primes, for example. It's not an autopoietic, evolving system.

Well, I am a professional programmer, and I happen to think that algorithm generation is also algorithmic. I can't think of an algorithm I ever generated without using an algorithmic thought process.

Furthermore, I can generate an algorithm that generates algorithms.

So, I don't agree that I am using the term 'proof' incorrectly.

Finally, you are wrong -- there is no reason a program cannot be autopoietic and evolving. In fact, just google "autopoietic computing."

Of course I admit that there is no current program that is both autopoietic and capable of generating something like Euclid's proof of infinite primes, but I honestly don't see why it isn't possible in principle.

In fact this weekend I will meditate on it, and hopefully by Monday I might have something like a high-level description of an algorithm that could do it. Whether or not such an algorithm is feasible to implement is another story.
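For what it's worth, an "algorithm that generates algorithms" can look like this in miniature. The grammar and examples below are invented for illustration: the program brute-forces small arithmetic expressions in x until one reproduces a set of input/output examples. Crude, but it is a program producing a program.

```python
# Toy program synthesis: enumerate small expressions over x until one
# fits all of the given input/output examples. Grammar is illustrative only.
LEAVES = ["x", "1", "2"]
OPS = ["+", "*"]

def expressions(depth):
    """All expression strings with nesting depth at most `depth`."""
    if depth == 0:
        return LEAVES[:]
    smaller = expressions(depth - 1)
    exprs = smaller[:]
    for op in OPS:
        for left in smaller:
            for right in smaller:
                exprs.append(f"({left} {op} {right})")
    return exprs

def synthesize(examples, max_depth=2):
    """Return the first generated expression consistent with every example."""
    for expr in expressions(max_depth):
        if all(eval(expr, {}, {"x": x}) == y for x, y in examples):
            return expr
    return None

# Ask for an algorithm computing f(x) = 2x + 1, given only three examples.
print(synthesize([(0, 1), (1, 3), (2, 5)]))  # finds an expression equivalent to 2*x + 1
```

Real program synthesis prunes the search and uses far richer grammars, but the shape of the idea -- search over candidate programs, test against a specification -- is the same.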
 