I was thinking how much the brain is more like a dynamic Programmable Logic Array than a standard sequential computer. Of course, sequential computing can emulate a PLA, but one as massively detailed as the human brain would be inefficient to implement that way because of the brain's colossal parallel computing (e.g. Blue Gene).
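To make the emulation point concrete, here is a minimal sketch in Python (all names are illustrative, not from any real library) of a software PLA: an AND plane of product terms feeding an OR plane of outputs. A sequential CPU has to walk the terms one after another, whereas real PLA hardware, like the brain's neurons, evaluates them all at once.

```python
# Minimal sketch of emulating a Programmable Logic Array in software.
# Names and structure here are illustrative assumptions.

def eval_pla(inputs, and_plane, or_plane):
    """Evaluate a PLA. and_plane lists product terms, each a dict
    mapping an input index to its required value (True/False).
    or_plane lists, per output, which product terms it ORs together."""
    # AND plane: a sequential CPU evaluates every term one by one,
    # while PLA hardware computes all of them in parallel.
    terms = [all(inputs[i] == v for i, v in term.items())
             for term in and_plane]
    # OR plane: each output is the OR of its selected product terms.
    return [any(terms[t] for t in out) for out in or_plane]

# Example: a half adder (sum = A XOR B, carry = A AND B).
and_plane = [{0: True, 1: False},   # A AND NOT B
             {0: False, 1: True},   # NOT A AND B
             {0: True, 1: True}]    # A AND B
or_plane = [[0, 1],                 # sum   = term0 OR term1
            [2]]                    # carry = term2

print(eval_pla([True, False], and_plane, or_plane))  # [True, False]
print(eval_pla([True, True], and_plane, or_plane))   # [False, True]
```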
The creative brain is conscious, not logical. It is non-computational at its core, which is why you cannot predict a person's behavior, even though you can model a group's behavior and actions based on averages and statistics.
Why are you even bringing up AI and computational models in a thread about consciousness? Strong AI has no place in science.
If Strong AI ever produced software that was about as intelligent as we are, then it should be able to reprogram and upgrade itself, leading to recursive self-improvement at a nearly exponential rate.
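A toy numerical model makes the "nearly exponential" claim concrete: if each generation of the software improves its capability in proportion to the capability it already has, the recurrence compounds faster and faster. This is purely illustrative, with made-up parameters, not a prediction.

```python
# Toy model of recursive self-improvement (illustrative numbers only).
# Each generation rewrites itself, gaining capability in proportion
# to what it already has: c_{n+1} = c_n * (1 + k * c_n).

def self_improvement(c0=1.0, k=0.05, generations=10):
    c = c0
    for n in range(generations):
        print(f"generation {n}: capability {c:.2f}")
        c = c * (1 + k * c)  # smarter software makes bigger improvements

self_improvement()
# Note the effective growth factor (1 + k*c) itself increases with c,
# so growth eventually outpaces a fixed-rate exponential.
```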
Hyperintelligent software may not necessarily decide to support the continued existence of mankind; it would be extremely difficult to stop and poses a risk to civilization, humans, and planet Earth. Friendly AI models run fundamentally against the laws of natural selection, and so are unlikely to be successful.
Berglas, Anthony (2008), "Artificial Intelligence will Kill our Grandchildren".
Abstract:
There have been many exaggerated claims as to the power of Artificial Intelligence (AI), but there has also been real progress. Computers can drive cars across rough desert tracks, understand speech, and prove complex mathematical theorems.
It is difficult to predict future progress, but if a computer ever became about as good at programming computers as people are, then it could program a copy of itself. This would lead to an exponential rise in intelligence (now often referred to as the Singularity). And evolution suggests that a sufficiently powerful AI would probably destroy humanity.
This paper reviews technical progress in Artificial Intelligence and some philosophical issues and objections. It then describes the danger and proposes a radical solution, namely to limit the production of ever more powerful computers and so try to starve any AI of processing power. This is urgent, as computers are already almost powerful enough to host an artificial intelligence.
The problem is that computers will never be conscious like we are, even if they become more intelligent. They will have no sympathy, empathy, or care for our needs.
In terms of computational theory, an AI researcher need not be a computationalist, because they might believe that computers can do things that brains do non-computationally. Perhaps calling computationalism a theory is not exactly right here; I think "dogma", "working hypothesis", or "working assumption" is more suitable. The evidence for computationalism is not overwhelming, and some even believe it has been refuted by a priori arguments or empirical evidence.
Getting back to your previous comment about the Turing test for consciousness: Turing’s test is not necessarily relevant to the computational theory of consciousness. It doesn’t particularly help develop theoretical proposals, and it gets in the way of thinking about intelligent systems that obviously can’t pass the test. Somewhere in this thicket of possibilities there might be an artificial intelligence with an alien form of consciousness that could pretend to be conscious on our terms while knowing full well that it wasn’t. It could then pass the Turing test by faking it. All this shows is that the Turing test may be good at detecting intelligence while not being so good at detecting consciousness.