
My take on why the study of consciousness may indeed not be so simple

Done already. We have no defined or hypothesized concept of a reasoning capacity greater than a Turing machine.

Which, in layman's terms, means there are no known decisions -- none -- that can be made that a Turing machine cannot also make.

The argument "well, no Turing machine has ever decided which painting it prefers" is irrelevant, because it is mathematically provable that if such a decision can actually be made (and it can, because people make those kinds of decisions every day) then a Turing machine can indeed do it.

Enter Lucas-Penrose -- if you really want consciousness to be magical, then, you need to come up with decisions that humans make that are undecidable by a Turing machine. Both Lucas and Penrose tried (and they thought they found some) but it isn't called the Lucas-Penrose fallacy for nothing. There are simply no such decisions.
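For readers who haven't met the jargon: "undecidable by a Turing machine" has a precise technical meaning, and the classic example is the halting problem. Below is a rough Python sketch of Turing's diagonal argument; the `halts` function is hypothetical and stands in for any candidate decider you care to propose.

```python
# Sketch of the halting-problem diagonal argument (Turing, 1936).
# "Undecidable by a Turing machine" means: no program like `halts` below
# can be correct on every input.  The name `halts` is hypothetical -- it
# stands for any claimed universal halting decider.

def halts(program, argument):
    """Stand-in for a claimed decider: returns True if program(argument)
    would eventually stop.  (Here it just guesses True; any other fixed,
    computable strategy fails in the same way.)"""
    return True

def troublemaker(program):
    """Diagonal construction: do the opposite of what `halts` predicts."""
    if halts(program, program):
        while True:          # loop forever, refuting a True prediction
            pass
    # otherwise halt immediately, refuting a False prediction

# The prediction below is necessarily wrong: if it is True, troublemaker
# loops forever; if it is False, troublemaker halts at once.  Either way
# the decider errs, so no Turing machine decides halting in general.
print(halts(troublemaker, troublemaker))
```

The Lucas-Penrose argument turns on whether human mathematicians can correctly settle Gödel-style questions of this kind that no machine can; the "fallacy" label in the post above refers to the standard objections to that claim.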
 
The way I look at it is that we have people researching both the supernatural and AI. No significant advances in the supernatural have ever been made, while we make advances in the field of AI every day.

There have been great advances in AI, but they aren't the ones that were anticipated back when LISP was emerging and there was a bright future ahead. Now AI is very good at producing firmware for washing machines, but it's still not possible for a computer to carry on a conversation.

I'm very agnostic about all possible solutions, but I expect AI to be a dead end and that a combination of biological research on the brain and physics will find an answer, if an answer can be found - which I regard as uncertain.
 
Done already. We have no defined or hypothesized concept of a reasoning capacity greater than a Turing machine.

There's no such concept as "reasoning capacity" in the definition of a Turing machine.

Information processing and problem solving.
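For reference, here is the usual textbook definition of a Turing machine, sketched in standard notation (details and symbols vary between presentations), showing what the formal object actually contains:

```latex
% A Turing machine is commonly defined as a 7-tuple (Sipser-style notation):
M = (Q,\ \Sigma,\ \Gamma,\ \delta,\ q_0,\ q_{\text{accept}},\ q_{\text{reject}})
% Q        : finite set of states
% \Sigma   : input alphabet
% \Gamma   : tape alphabet, with \Sigma \subseteq \Gamma plus a blank symbol
% \delta   : transition function  Q \times \Gamma \to Q \times \Gamma \times \{L, R\}
% q_0      : start state;   q_{accept}, q_{reject} : halting states
```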




Good thing I didn't, then. "Powerful" is a well-defined and relevant term of art here. If you don't understand the mathematics of Turing completeness, perhaps you shouldn't be attempting "subtle and complex arguments" that by your own admission you don't understand.

I don't fully understand the arguments, but I know an ill-defined and irrelevant term when I hear one. "Powerful" is a term for Dell's advertising agency.
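For anyone following the terminology dispute: in computability theory "power" is conventionally measured by which problems a machine model can decide, and model A is called strictly more powerful than model B if A decides everything B does plus something B cannot. A standard classroom illustration, sketched below in Python for concreteness:

```python
# Illustration of "computational power" as a term of art: the language
# { a^n b^n : n >= 0 } is decidable by a Turing machine (or by this short
# program), but provably not by any finite automaton (pumping lemma).
# That is the precise sense in which one machine model is called "more
# powerful" than another.

def is_anbn(s: str) -> bool:
    """Accept strings of n a's followed by exactly n b's."""
    n = len(s)
    half = n // 2
    return (n % 2 == 0
            and set(s[:half]) <= {"a"}
            and set(s[half:]) <= {"b"})

assert is_anbn("aaabbb") and not is_anbn("aabbb") and not is_anbn("abab")
```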
 
Which, in layman's terms, means there are no known decisions -- none -- that can be made that a Turing machine cannot also make.

And there are no decisions - none - that can't be made with a single coin with recognisably different sides.

The coin also has the advantage that it can make different decisions on the same data, whereas the Turing machine will always give the same answer.
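An aside on how the determinism point is usually handled formally: randomized choice is standardly modelled as a deterministic procedure that takes the coin flips as an extra input. A toy sketch (the function names are mine, purely illustrative):

```python
import random

# Toy contrast between the two kinds of "decision" being discussed.

def deterministic_choice(options):
    """Always returns the same answer for the same data, like a Turing
    machine run on a fixed input: here, simply the alphabetically first."""
    return min(options)

def coin_choice(options, flips):
    """A coin-based chooser, written as a deterministic function of the data
    *plus* the sequence of coin flips.  This is the standard way randomized
    computation is modelled: the randomness is just another input tape."""
    index = sum(bit << i for i, bit in enumerate(flips)) % len(options)
    return options[index]

paintings = ["Guernica", "Starry Night", "The Scream"]
print(deterministic_choice(paintings))                        # same every run
print(coin_choice(paintings, [random.getrandbits(1) for _ in range(8)]))
```

On that view the coin supplies extra input bits rather than a new kind of decision-making; whether that counts as an "advantage" is exactly what the posts below go on to dispute.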
 
The argument "well, no Turing machine has ever decided which painting it prefers" is irrelevant, because it is mathematically provable that if such a decision can actually be made (and it can, because people make those kinds of decisions every day) then a Turing machine can indeed do it.

I'd like to see the reference for that one. If it's mathematically provable, that has to be true.
 
And there are no decisions - none - that can't be made with a single coin with recognisably different sides.

The coin also has the advantage that it can make different decisions on the same data, whereas the Turing machine will always give the same answer.

Yet another post that clearly demonstrates your understanding of the relevant issues.
 
Yet another post that clearly demonstrates your understanding of the relevant issues.

I can generally tell how good the point was by how quickly Rocketdodger gets the personal dig in.

That it's possible to make decisions using anything with multiple states is pretty obvious. I suppose RD will think I said that you can use a coin as a computer, and the central issue will fly by.
 
rocketdodger said:
So it is possible that the instant I am (or you are) experiencing right now is nothing more than fleeting states of some random system in some random universe.
Methinks someone has read Permutation City. Are you the one who recommended it to me?

I would gladly accept a higher probability of solipsism being true if it means I might be able to exist free of this body.
It would get tiresome after a while.

~~ Paul
 
That just said "see Recursion Theory".

Not really, but it might as well have done.

Naturally, the word "decision" doesn't occur in this randomly chosen article. I would have been astonished at this stage if RD had posted something actually relevant.
 
rocketdodger said:
Enter Lucas-Penrose -- if you really want consciousness to be magical, then, you need to come up with decisions that humans make that are undecidable by a Turing machine. Both Lucas and Penrose tried (and they thought they found some) but it isn't called the Lucas-Penrose fallacy for nothing. There are simply no such decisions.

Here's a paper on this issue:

http://www.mth.kcl.ac.uk/~llandau/Homepage/Math/penrose.html

Do you know of a better one?

~~ Paul
 
I have just a little problem with all of this.

People say they have no problem at all with the idea that a program being desk-checked with pencil and paper over a billion years could result in a brief conscious moment just as we experience it - a note from Bird's saxophone, the taste of a peach or some such.
Yes.

Not just that they think it possible, but that they don't even see how the idea might be problematical.
Yes.

That this instant you are experiencing right now could have resulted from people writing down numbers in little boxes on pieces of paper.
Yes.

Well I have got to wonder if they are really serious, or just maintaining a debating point.
Completely serious.

Can the taste of a peach really result from numbers being written down on paper with a pencil?
Not from the numbers being written down, but from the calculations being performed on those numbers.

Church-Turing thesis. The mechanism behind the computation is irrelevant.
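To make "the mechanism is irrelevant" concrete, here is a minimal Turing-machine stepper (a sketch; the example machine below just flips bits). The same transition table could be worked through by hand in boxes on squared paper, and the sequence of configurations - and hence the result - would be identical either way.

```python
# Minimal Turing-machine stepper.  The same transition table could be
# followed by hand with pencil and paper; the computation -- the sequence
# of configurations -- is identical whether silicon or pencil does the
# bookkeeping.

def run_tm(tape, rules, state="q0", head=0, halt="halt"):
    tape = dict(enumerate(tape))                 # sparse tape, blank = "_"
    while state != halt:
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: walk right, flipping 0 <-> 1, halt at the first blank.
flip_rules = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}

print(run_tm("0110", flip_rules))   # -> 1001_
```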
 
But we have not mathematically modelled a working brain and observed animal-like behaviour in the model yet, as far as I know.
We're well on the way. And it's irrelevant in any case. Either you are asserting that brains are magical, or we can model them.

So again, I will wait until we have before I decide that we can.
There is no rational reason to take that position.
 
And there are no decisions - none - that can't be made with a single coin with recognisably different sides.
Westprog, this statement is as wrong as it is possible to be and still form a syntactically valid sentence.

A coin cannot make decisions.

The coin also has the advantage that it can make different decisions on the same data, whereas the Turing machine will always give the same answer.
First, no, that is not true, and second, were it true, that would not be an advantage.
 
