Granted, we don't currently define emotional or motivational states well enough to operationalize them in computer systems, but is there any legitimate argument why we could not?
Yeah, I don't see why not. Reminds me of neural network modeling in cognitive psych circles (more on this below). The more I read and learn, the more I find myself agreeing with Piggy's view of us as organic machines composed of the Dennettian "tiny robots." I was extremely resistant to this idea in the past -- it somehow made us less "human," for rather silly, New Age reasons. But now I am in awe of human "machines," if you will. Highfalutin ones, but still machine-like in so many ways, though mind-blowingly more complex than brains not sheathed in a neocortex (at least as far as we know).
Ever read Thagard (Hot Thought -- dense, but excellent book)? He cites the example of the neurocomputational model GAGE (named after our pal Phineas), which models decision-making through units representing cognitions distributed across different brain areas (e.g., PFC, amygdala, hippocampus). He also discusses HOTCO (for HOT COherence), which models the emotional inferences made while thinking and reasoning about different things.
This would also involve a lot of different strains of constraint satisfaction programming, which would weave a fantastically complex web. Just for a taste: human motives (e.g., to seek cognitive consistency, as in cognitive dissonance reduction) impose constraints on beliefs about and attitudes toward stimuli and qualia. And that's before you add the undergirding Damasio autobiographical-memory piece, charged with limbic and prefrontal cortical momentum, which is itself in a near-constant state of subtle -- sometimes dramatic -- flux from impinging stimuli. The bundling/chunking of those stimuli is what we view as "experiences," which shape our brains and behaviors and enrich meta-awareness.
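Just to make the constraint satisfaction idea concrete, here's a toy sketch in the general spirit of Thagard's coherence networks: units stand for cognitions, positive weights for mutual support, negative weights for conflict (dissonance), and the network settles by parallel updates. The units, weights, and labels are all made up for illustration -- this is not GAGE or HOTCO themselves, just the flavor of the mechanism.

```python
# Toy parallel constraint satisfaction network, Thagard-flavored.
# All units and weights below are illustrative assumptions.

units = ["belief_A", "belief_B", "attitude_X"]
# Positive weight = mutual support; negative = conflict (dissonance).
weights = {
    ("belief_A", "attitude_X"): 0.5,   # belief_A supports the attitude
    ("belief_B", "attitude_X"): -0.5,  # belief_B conflicts with it
    ("belief_A", "belief_B"): -0.3,    # the two beliefs are inconsistent
}

def weight(u, v):
    return weights.get((u, v)) or weights.get((v, u)) or 0.0

# Start every unit at a small positive activation, then update all
# units in parallel until the network settles into a coherent state.
act = {u: 0.1 for u in units}
for _ in range(200):
    new = {}
    for u in units:
        net = sum(weight(u, v) * act[v] for v in units if v != u)
        # Mild decay plus weighted input, clamped into [-1, 1].
        a = act[u] * 0.95 + net * 0.1
        new[u] = max(-1.0, min(1.0, a))
    act = new

# Units ending up positive are "accepted"; negative ones are "rejected".
accepted = [u for u in units if act[u] > 0]
```

The network ends up accepting belief_A and attitude_X and rejecting belief_B -- the maximally coherent assignment given those constraints. The "motive" for consistency is baked into the update rule itself.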
Frankly, it gives me a headache thinking about the immensity of it.
Should we wish to have a computer that works or thinks like a human, it would probably have to learn like a human -- start with built-in predispositions that can be altered over time by learning through interaction with the environment. I don't see why we could not include, given a proper understanding of how they work, both emotional and motivational systems in a robot that could end up behaving much like a human by learning semantic content as it interacts with the environment.
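A bare-bones sketch of that "predispositions altered by learning" idea: an agent starts with innate preferences over actions and nudges them with reward from a simulated world. Every name and number here is an assumption for illustration -- the point is just that the predisposition is a starting parameter, not a fixed one.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Innate predisposition: the agent starts out favoring "approach".
preferences = {"approach": 0.6, "avoid": 0.4}

def environment(action):
    # Hypothetical world in which approaching usually pays off.
    return 1.0 if action == "approach" and random.random() < 0.8 else 0.0

def choose():
    # Pick an action in proportion to the current preferences.
    return "approach" if random.random() < preferences["approach"] else "avoid"

LEARNING_RATE = 0.05
for _ in range(500):
    action = choose()
    reward = environment(action)
    # Nudge the chosen action's preference toward the reward received,
    # then renormalize so the preferences remain a distribution.
    preferences[action] += LEARNING_RATE * (reward - preferences[action])
    total = sum(preferences.values())
    for a in preferences:
        preferences[a] /= total
```

After a few hundred interactions the innate 60/40 split has shifted well past 70/30 toward "approach" -- the environment reshaped the predisposition rather than replacing it.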
Yes, definitely. The predispositions fit in well with the constraint programming piece. But even more complex would be the reconfiguration, via interactions with the environment, of the very parameters that set the bounds of those other variables. The emotional piece, the autobiographical/"me" stance, and what you refer to as the motivational piece pretty much squash the possibility of "dumb" algorithms of the iterative paramecium-photoreceptor type.
That's possible, but the "recreation" of neural function would probably occur at a higher level of programming, though it would ultimately bottom out in 0, 1, move left, move right.
Neural network modeling already does this to some degree, though I'm not sure about the binary code bit.
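To put a bit of flesh on that: a "neural" unit in these models is just ordinary high-level arithmetic, which the machine does ultimately execute as low-level binary operations. Here's the classic single-unit case -- a perceptron trained on logical AND with the standard perceptron learning rule. It's a textbook illustration, not a model of any real brain area.

```python
# A single artificial neuron: weighted sum, threshold, error-driven
# weight updates. Standard textbook perceptron, trained on AND.

def step(x):
    return 1 if x > 0 else 0

weights = [0.0, 0.0]
bias = 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):  # a few passes over the data suffice here
    for (x1, x2), target in data:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - out
        # Classic perceptron learning rule: adjust toward the target.
        weights[0] += 0.1 * error * x1
        weights[1] += 0.1 * error * x2
        bias += 0.1 * error
```

After training, the unit gets all four AND cases right. Whether the eventual "recreation" happens at this level of abstraction or several layers higher, it all compiles down to the same binary substrate.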
I'm not willing to guess either way, but theoretically it should be realizable on a universal Turing machine. What I'm interested in is exploring the possibility. I don't see why anyone would want to argue that it is impossible. How are we supposed to know until we try? We've barely scratched the surface so far.
Agreed! Exciting stuff!
This is a difficult issue, but I think we are smart enough to figure it out. The quickest way to do it, though, would probably be to recreate cortical columns, as is already being done, along with subcortical structures -- and then let the system learn. I don't think we're smart enough to program the system from scratch.
Would you mind providing a reference or two for this cortical column material? I'm genuinely interested but don't know much about what you are referring to.