And I don't understand why you could not return to discuss them after you've published them and we finally have something to discuss.
That seems highly unlikely.
It does indeed seem highly unlikely that you will publish your results and give us something to discuss.
These are the types of questions that concern me.
Okay, I'll play your game.
Is intelligence an advantageous trait?
At times it may be.
Stupidity may also be advantageous, at times. If it weren't, why do we see so much of it?
If it is then why isn't everything intelligent?
When bacteria rule the world, it makes more sense to ask why isn't everything stupid.
Why were there no dinosaur civilizations?
A lot of dictionaries define "civilization" as something like "The stage of human social development and organization which is considered most advanced." That, by definition, rules out dinosaur civilizations.
That same dictionary goes on to give this example: ‘the Victorians equated the railways with progress and civilization’. For all we know, some of the dinosaurs equated their nests and trails with progress and civilization. Who are we to argue?
Where do abstractions come from?
Depends on the abstraction. Your question may have been too abstract.
What gives abstractions value?
I've been told that the value of something comes from individuals' willingness to exchange or pay for it, but this is the wrong subforum for extended discussion of that.
How do humans understand things?
Poorly.
Why can't computers understand things?
My computer understands many sequences of bits a lot faster and more reliably than I do.
Could humans be super smart someday?
It might depend on the human.
Could we encounter aliens vastly smarter than us?
Sure. Whether we should encounter them is often discussed in the US Politics subforum.
Could a computer suddenly become conscious?
Could a network become conscious?
That might depend on what you mean by "conscious". In the past, I have asked you what you meant by that word, but I never understood your answer. As I recall, that discussion did not turn out well.
How much processing power would be required to duplicate the human mind?
How much memory?
Some people have tried to answer those questions, but I don't think we know enough about the human mind to answer them at this time.
Why do people have emotions?
Partly because their brains, including identifiable regions of their brains, have evolved to create emotions. We believe emotions are evolutionarily advantageous because some people who are abnormally unemotional suffer as a result and may be classified as mentally ill.
Can you build a box with a human personality, like in the movie "Her"?
I have enough trouble just building a box. Besides, I haven't seen the movie. Do you recommend it? Why? (No spoilers, please.)
How does the brain overcome the frame problem?
Ah, the frame problem. I vaguely recall Marvin Minsky going on about that in an AI course I took long ago.
My personal opinion is that the frame problem is primarily relevant to expert systems and applied artificial intelligence. Brainful creatures generally avoid resorting to logic until it becomes obvious that habit, training, prejudice, emotion, and other preferred tools aren't getting the job done. Even then, most people have trouble using logic.
(I'm generalizing from my students, of course, but they've been selected in part for their ability to employ logic, and they've taken courses that are supposed to teach them logic.)
How does the brain solve the binding problem?
I believe most of the experimental research has involved vision, but I don't feel qualified to discuss that research in detail.
Can we build smart and loyal robots?
We've built cell phones, and those cell phones have created loyal owners. If some of those owners are smart, we can classify them as robots and declare victory here.
Could singularity happen?
To a mathematician, that's like asking "Could zero happen?"
If you're talking about what the popular press sometimes refers to as an AI singularity, that's a concept whose value lies primarily in the revenue it generates for the popular press.
Could you upload your mind to a computer?
I doubt it, but I've never tried.
Huh? Oh. It might be related to attention and comprehension.
Is it the same as consciousness?
That might depend on what you mean by "consciousness", in which case the question will remain unanswerable until you explain what you mean by "consciousness".
How does problem solving work in the brain?
For the most part, brains solve problems similar to problems they have solved before, mostly by employing tactics that have worked before.
Most people who ask that question in the context of AI are asking about very artificial sorts of problems. Any good answer to that question will start by noting that those are the sorts of problems people seldom have to solve, often fail to solve, and normally have trouble with even when they do eventually arrive at a solution.
Is Hofstadter's feedback loop model correct?
It's been years since I last talked to him, so I am not familiar with the current details. In general, though, the correct answer to questions of the form "Is model X correct?" is "Not entirely."
Is Integrated Information Theory correct?
Is Dennett's Multiple Drafts model correct?
See above.
What about the Chinese room?
What about Mary's room?
Mary's is not the room that interests me.
I am aware that some philosophers delight in defining problems in such a way that the problem, by definition, has no solution, and then using an obfuscated form of equivocation to convince people they've made some kind of profound contribution to the theory of knowledge or intelligence.