AkuManiMani
That is easy. Start with a system that experiences colors, then swap the source of visual input with something that encodes auditory information. You could do it to a human, if you had a lot of money and were in certain countries.
Synesthesia demonstrates that sensory input can be experienced in many different ways. The central question here is not so much how sensory input is operationally processed [though such questions are interesting] but how and why it is experienced at all. It is perfectly possible for sensory input to be processed without any conscious experience of it whatsoever -- even if that input is utilized to trigger physical responses.
I don't know of any. If you find one, let me know, because "emotion" is easily the most difficult issue when it comes to detailing human consciousness. I am particularly interested in how suffering and happiness arise.
However, emotion is still just a detail. Otherwise, exhibiting emotion would be a requirement for consciousness, and it isn't. At least, you haven't said so yet.
'Emotion' is another kind of subjective experience. Vertigo and nausea are as much physical sensations as they are emotional responses. Yes, in themselves, they are just examples -- a 'detail'. Emotions, and all other 'qualia', are ontologically in the same boat; they all constitute conscious experience. The question is why and how they arise at all.
As wide as the range of things that can be computed.
Computation, in and of itself, is not experience.
The kinds of computation that give rise to each. A tautology, but then subjective experience is nothing but a tautology. System X experiences being system X because it is system X. What else could it be like to be system X?
So, what if "system X" were a rock? Its composition and interactions with other objects are, fundamentally, computational in nature. Would you argue that it subjectively experiences being kicked?
What you experience is generated by reasoning, which means using existing facts about the world to infer new facts. Neural networks are implicit reasoning machines -- facts come in, new facts go out. Your brain is made of neural networks. If you disagree with any of this, feel free to enumerate the types of thought you are capable of that cannot be described in terms of reasoning -- including your precious "qualia."
'QUALIA'...ARE...EXPERIENCES. ANY experience. EVERY experience.
Logical operations and computations go on regardless of whether or not there is any actual experience. Reasoning is a computational process that goes on within the context of conscious experience. There is a distinct qualitative difference between reflexive, unconscious computation, and carrying out those operations consciously. What has yet to be done is to provide a sufficient operational definition of actual conscious experience.
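For what it's worth, here's a minimal sketch of that "facts come in, new facts go out" picture (plain Python; the weights are made up for illustration and nothing here is trained or taken from any real model). A tiny feedforward network derives outputs from inputs through nothing but arithmetic. Every step is mechanical, and nothing in the mechanism requires, or accounts for, any experience of carrying it out.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each output: a weighted sum of the inputs, squashed into (0, 1).
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def infer(facts):
    # Two layers of pure arithmetic; "experience" appears nowhere in the mechanism.
    hidden = layer(facts, [[0.5, -0.3], [0.8, 0.1]], [0.0, -0.2])
    return layer(hidden, [[1.2, -0.7]], [0.1])

print(infer([1.0, 0.0]))  # a new "fact", derived mechanically from the old ones
```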
I don't know the details, because I am not a bat and I haven't looked at a bat's brain code in a debugger.
However, I can confidently say that the bat experiences echolocation the same way you experience anything without being conscious of the experience.
What did your toe feel like when you were driving home from work the other day? Don't remember? Does that mean there was no sensory input from your toe? Or does it mean you experienced sensory input but didn't reason about it and thus weren't actively conscious of it?
[...]
To experience things like a bat, you would have to be a bat. To do so means you would no longer be a human; you would be a bat. Which means you would not be experiencing a completely different species -- you would be the same species.
We don't know what it is about being a bat, human, cricket, or what-have-you, that creates the particular subjective quality of one's experiences. That is one of the central questions of the EMA. The operational descriptions of general logical functions are pretty well understood -- it is the actual experience [the 'whys', the 'hows', and the 'what exactlys'] that is still unknown.
Yes, they are working on a rat brain, I think:
http://www.guardian.co.uk/technology/2007/dec/20/research.it
As Pixy has said, simulated biological neural networks aren't very useful right now, because the kinds of needs we have for AI at the moment are best served by very deterministic systems that we understand fully and that have completely predictable behavior. That is to say, behavior that a human can sit down and predict in a debugger just by looking at some numbers.
This is slowly changing though.
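For contrast, here's a rough sketch of the kind of unit those simulated-brain projects are built from: a leaky integrate-and-fire neuron (plain Python; all the constants are illustrative, not taken from any actual model). It's still fully deterministic, but its behavior lives in continuous state evolving over thousands of timesteps, which is far harder to predict by eyeballing numbers in a debugger than a rule table is.

```python
def simulate(input_current, steps=1000, dt=0.1,
             tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron, forward-Euler integration."""
    v = v_rest
    spike_times = []
    for step in range(steps):
        # Membrane potential leaks toward rest and is driven up by the input.
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_thresh:            # threshold crossed: record a spike, reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

print(simulate(20.0))  # spike times for a constant input of 20.0
```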
Yea, it definitely seems like there wouldn't be much impetus for those kinds of projects unless they could lead to some direct commercial use. If there is a trend to change this, I sure hope it continues.