Frank Newgent
Philosopher
Not much of a brain if it can't think.
About what? Nothing is external to a brain in a vat. Not to mention what would be BIV's lack of intentionality.
Any particular thought of a brain in a vat cannot be about anything at all, of course.
External to our 'I-ness', 'me-ness', 'mine-ness' vs. not-I-me-mine. "External" is relative.
It is exactly the same argument. Penrose/Lucas argue that since humans have access to mathematical truth that cannot be gleaned using any algorithm, our brains must not follow an algorithm.
And it fails in exactly the same way -- we don't have intuitive access to mathematical truth, we have access to what we think is mathematical truth, and the latter can certainly be gleaned using an algorithm.
As an interesting addendum ... External to our 'I-ness', 'me-ness', 'mine-ness' vs. not-I-me-mine.
So all you need is a computer that differentiates I-me-mine from not-I-me-mine, either as hard code or as the primary neural net, and added neural-net training can proceed from there. IMO verbalization should be the first I/O tackled and trained after the computer's 'I-ness' has been established, with sight, sound, touch, smell(?) and taste(?) to follow.
Can a machine have emotions?
AE, or Artificial Emotion, is an underlying aspect of the concept of AI that has been generally granted without being seriously examined either philosophically or ontologically. The evolution of sensation in the animal world is the root of what are called "emotions" or, more nebulously, "feelings". Without a basis in organic neural sensation, any synthetic/artificial/android/mechanical AI constructs will be unable to "feel" anything, or register the foundation from which "emotions" arise: sensitivity registered in the mind to changes in the environment, such as heat or cold, wind or stillness, noise or quiet, threatening touch as opposed to tickling, ad infinitum.
The simulation of "emotions" can be managed superficially, but any synthetic/artificial/android/mechanical AI constructs programmed to falsify a smile or tears or any other "emotion" will have no "feeling", as such, regarding these "behaviors", since there is no nervous system at work and no evolution of instinctual "affects" to ground the pseudo-epiphenomena arranged by their programmers to mimic human "emotions".
AE is a more dangerous and unsettling aspect of the field of AI than the ongoing uneasy concern with the "threat" inherent in growing machine "intelligence", because AE is a simpler method of co-opting human empathy and gaining a foothold in the world. The human species has been primed for this "imaginary friend" prospect through millennia of children playing with dolls and attributing "emotions" to them in fantasy relationships. Similarly, AI constructs made to seem "personal" or "caring" can bypass the natural skeptical filters of our species, when confronting synthetic "creatures", more easily than AIs trying to simulate "intelligence". Once the AI constructs gain more and more "emotional" value to people, as "para-pals", "caregivers", "sexbots", etc., it will become harder to control their spread throughout society and to rein in their infiltration of the human realm. Once in place, and once so "valued" affectively, if they then developed anything approaching "intelligence", even of the insect level, their danger as a new competitor in the world's manifold would become apparent.
Hans Moravec believes that "robots in general will be quite emotional about being nice people"[59] and describes emotions in terms of the behaviors they cause. Fear is a source of urgency. Empathy is a necessary component of good human computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love."[59] Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."[60]
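Moravec's framing reads naturally as reinforcement learning: an "emotion" is just a scalar signal that channels behavior. Here is a toy sketch of that reading in Python; every name in it is made up for illustration, not taken from any real robotics library:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        risk: float

    class ToyAgent:
        # An agent whose "emotions" are scalar signals that channel behavior.
        def __init__(self):
            self.preference = {}  # action name -> learned value; "thrill" accumulates here

        def act(self, actions, threat_level):
            # "Fear" as a source of urgency: high threat narrows choice to the safest action.
            if threat_level > 0.8:
                return min(actions, key=lambda a: a.risk)
            # Otherwise favor whatever has earned the most positive reinforcement.
            return max(actions, key=lambda a: self.preference.get(a.name, 0.0))

        def reinforce(self, action, reward):
            # "Thrill" as positive reinforcement: nudge the action's value toward the reward.
            v = self.preference.get(action.name, 0.0)
            self.preference[action.name] = v + 0.1 * (reward - v)

    agent = ToyAgent()
    fetch = Action("fetch coffee", risk=0.1)
    agent.reinforce(fetch, reward=1.0)  # the human smiled; "a kind of love", per Moravec

On Moravec's reading, nothing more than this value-channeling is meant by the emotion words; whether that exhausts what we mean by them is exactly what's in dispute.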
The question of whether the machine actually feels an emotion, or whether it merely acts as if it feels an emotion, is the philosophical question "can a machine be conscious?" in another form.[38]
Good. I don't disagree with any of that.
Again, good. Again, I don't disagree with that.
No, I didn't. Did you try to follow the argument between piggy and yy2bggggs about symbols and our perception of our big toe?
If you say so. One of the most painful things I have witnessed in a long time.
Yes, but don't confuse a demonstration/consequence with a proof. How can a computer/algorithm come up with new proofs? You can't have an algorithm for coming up with a proof, only one that provides a demonstration or shows a consequence.
Maths is a broad church. Computers are good (much better than us) for calculating. They're not good at coming up with new proofs.
The use of computers in mathematics has turned out to be extremely limited. The kind of reasoning that mathematicians do is not especially helped by computerisation.
Not entirely correct.
The level of reasoning that mathematicians are capable of has not been "computerized" yet.
However, a computer can certainly find you a route from point A to point B faster than any person, and if you have evidence that mathematical reasoning is somehow a different process from routing, I would love to hear it. So I doubt you will be able to make the same statement in 50 years.
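For what it's worth, the "routing" in that analogy is ordinary graph search. A minimal Dijkstra sketch in Python, with a made-up road graph for illustration:

    import heapq

    def shortest_path(graph, start, goal):
        # graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
        # Classic Dijkstra: repeatedly settle the cheapest unsettled node.
        queue = [(0, start, [start])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, edge_cost in graph.get(node, []):
                if neighbor not in seen:
                    heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
        return None

    # Example: A -> B -> D is cheaper than the direct A -> D edge.
    roads = {"A": [("B", 1), ("D", 10)], "B": [("D", 2)], "D": []}
    print(shortest_path(roads, "A", "D"))  # (3, ['A', 'B', 'D'])

Whether mathematical reasoning is "just" search through a space of derivations, as routing is search through a space of paths, is of course the contested claim.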
Half correct.
There may be a way to determine that we are brains in vats, at least as far as we can determine anything else about our reality. Certainly a technician running the vat lab could try to convince us, by communicating like God from the heavens or whatever, or even by hooking our perception up to a camera in the vat lab and saying "see, this is you, just a brain."
But there is no way to confirm that we are not brains in vats.
As a corollary, if we figured out that we were indeed brains in vats, we could not confirm that the vat lab was not also just the imagination of another brain in a vat, and so on and so forth.
In terms of simulations, even if you could "escape" to the external reality, there is no way to be sure that you are at the very top level, i.e. the "true" reality.
Note that this doesn't change anything with regard to what we know of reality, it is just philosophical musing.
Nonsense.
I can write an algorithm that is capable of generating the same type of proofs that high school geometry students come up with on tests. I can write it right now. And it would generate proofs just as good as those of most high school students.
I say that because I know for a fact that the sequence of steps I would go through in those classes was algorithmic -- I had an established set of truths that I already knew, I had a goal, and I tried to figure out how to reach that goal using a sequence of those established truths, possibly generating new intermediate truths as I went.
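That loop is essentially forward chaining, and a sketch of it fits in a few lines of Python. The facts and the single rule below are toy placeholders, not a real geometry axiom system:

    def prove(facts, rules, goal):
        # Start from known truths, apply rules to derive new intermediate
        # truths, and stop when the goal appears (or nothing new can be derived).
        facts = set(facts)
        proof = []
        changed = True
        while changed and goal not in facts:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in facts and all(p in facts for p in premises):
                    facts.add(conclusion)  # a new intermediate truth
                    proof.append((premises, conclusion))
                    changed = True
        return proof if goal in facts else None

    # Toy example: two givens plus transitivity of equality as the only rule.
    rules = [(("AB=CD", "CD=EF"), "AB=EF")]
    print(prove({"AB=CD", "CD=EF"}, rules, "AB=EF"))

A real system needs heuristics to avoid drowning in useless intermediate truths, but the basic shape is exactly the one described above.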
At least no one has yet proposed that. I'm waiting for it though; come on, Pixy.
That's because you are using the term 'proof' incorrectly. The proof is in deciding how to generate the algorithm. The computer doesn't decide how to generate the algorithm. A computer could not invent a proof of the infinitude of primes, for example. It's not an autopoietic, evolving system.
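The infinite-primes example actually illustrates the demonstration/proof distinction drawn earlier. A computer can run Euclid's construction on any finite list of primes; each run demonstrates the result for that input, while the general theorem still needs the proof. A Python sketch:

    def smallest_prime_factor(n):
        # Trial division; fine for a toy demonstration.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    def prime_outside(primes):
        # Euclid's construction: the product of the listed primes, plus one,
        # is divisible by none of them, so its smallest prime factor is a new prime.
        product = 1
        for p in primes:
            product *= p
        return smallest_prime_factor(product + 1)

    print(prime_outside([2, 3, 5, 7]))  # 211; each run checks one input, not the general claim

Whether inventing the construction in the first place is something only a human could do is the point under dispute.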
You don't seem to get it, do you?
Maths was invented by humans.
Talk about believing in magic beans.