
My take on why the study of consciousness may not be so simple

Not really, but it might as well have done.

Naturally, the word "decision" doesn't occur in this randomly chosen article. I would have been astonished at this stage if RD had posted something actually relevant.

From the article:

The main form of computability studied in recursion theory was introduced by Turing (1936). A set of natural numbers is said to be a computable set (also called a decidable, recursive, or Turing computable set) if there is a Turing machine that, given a number n, halts with output 1 if n is in the set and halts with output 0 if n is not in the set. A function f from the natural numbers to themselves is a recursive or (Turing) computable function if there is a Turing machine that, on input n, halts and returns output f(n). The use of Turing machines here is not necessary; there are many other models of computation that have the same computing power as Turing machines; for example the μ-recursive functions obtained from primitive recursion and the μ operator.
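To illustrate the definition just quoted, here is a minimal Python sketch (not from the article): a total decision procedure for a computable set, and a computable function, both of which halt on every input. The set of even numbers stands in for an arbitrary decidable set.

```python
def decide_even(n: int) -> int:
    """Decision procedure for the set of even naturals:
    halts on every input, outputting 1 for members and 0 otherwise."""
    return 1 if n % 2 == 0 else 0

def square(n: int) -> int:
    """A (total) computable function: halts on every input n with output f(n)."""
    return n * n

print(decide_even(4), decide_even(7))  # 1 0
print(square(5))                       # 25
```

Any model of computation equivalent to Turing machines (e.g. the μ-recursive functions the article mentions) would define the same class of decidable sets.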

The terminology for recursive functions and sets is not completely standardized. The definition in terms of μ-recursive functions as well as a different definition of rekursiv functions by Gödel led to the traditional name recursive for sets and functions computable by a Turing machine. The word decidable stems from the German word Entscheidungsproblem which was used in the original papers of Turing and others. In contemporary use, the term "computable function" has various definitions: according to Cutland (1980), it is a partial recursive function (which can be undefined for some inputs), while according to Soare (1987) it is a total recursive (equivalently, general recursive) function. This article follows the second of these conventions. Soare (1996) gives additional comments about the terminology.

Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing machines that halt on input 0, is a well known example of a noncomputable set. The existence of many noncomputable sets follows from the facts that there are only countably many Turing machines, and thus only countably many computable sets, but there are uncountably many sets of natural numbers.
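The standard diagonal argument behind this can be sketched in Python. The `halts` oracle below is hypothetical (it is precisely what cannot be implemented); the sketch shows why: a program that consults the oracle on its own source and then does the opposite yields a contradiction either way.

```python
def halts(program_source: str, input_value) -> bool:
    """Hypothetical halting oracle -- assumed for the sketch,
    provably not implementable."""
    raise NotImplementedError("no such total computable function exists")

def diagonal(program_source: str) -> None:
    # If halts() existed, running diagonal on its own source would
    # contradict the oracle: it loops forever iff the oracle says it halts,
    # and halts iff the oracle says it loops.
    if halts(program_source, program_source):
        while True:
            pass  # loop forever
    return       # halt immediately
```

Since both answers the oracle could give about `diagonal` run on its own source are wrong, no total computable `halts` can exist.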

Although the Halting problem is not computable, it is possible to simulate program execution and produce an infinite list of the programs that do halt. Thus the halting problem is an example of a recursively enumerable set, which is a set that can be enumerated by a Turing machine (other terms for recursively enumerable include computably enumerable and semidecidable). Equivalently, a set is recursively enumerable if and only if it is the range of some computable function. The recursively enumerable sets, although not decidable in general, have been studied in detail in recursion theory.
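The "simulate execution and list the halters" idea is usually called dovetailing: run each program for a bounded number of steps, keep raising the bound, and emit a program the moment its simulation halts, so no single non-halting program can block the enumeration. A toy Python sketch, with programs modelled (as an illustrative assumption) as pairs of a name and either a step count at which they halt or None for "runs forever":

```python
def enumerate_halters(programs):
    """Dovetail over toy programs given as (name, steps) pairs, where
    steps is the number of steps before halting, or None for a
    non-halter.  Yields halters in the order their simulations finish."""
    total_halters = sum(1 for _, steps in programs if steps is not None)
    emitted = set()
    bound = 0
    while len(emitted) < total_halters:
        bound += 1  # simulate every program for one more step
        for name, steps in programs:
            if steps is not None and steps <= bound and name not in emitted:
                emitted.add(name)
                yield name

progs = [("loop", None), ("quick", 1), ("slow", 3)]
print(list(enumerate_halters(progs)))  # ['quick', 'slow']
```

Note that "loop" never halts and is simply never emitted; the enumeration makes the halting set recursively enumerable without ever deciding membership for the non-halters.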
 
Westprog, this statement is as wrong as it is possible to be and still form a syntactically valid sentence.

A coin cannot make decisions.

If a coin cannot make decisions, then neither can the Turing machine.

First, no, that is not true, and second, were it true, that would not be an advantage.

I think you'll find that a Turing machine is deterministic. I think you'll find that a coin toss isn't.

Of course, it's possible to run Turing machines with different data to obtain a different result.
 
rocketdodger said:
Lol no actually you recommended it to me, and I haven't been able to read it yet.
Now that's funny! My usually crappy memory playing an unusually funny trick. Are you sure you haven't read it? :D

Existing free of this body would get tiresome after a while, just like every other way of living forever.

~~ Paul
 
Robin said:
That this instant you are experiencing right now could have resulted from people writing down numbers in little boxes on pieces of paper.
PixyMisa said:
I'm not sure your experience could result from paper consciousness. I think the paper-entity would know it was a paper-entity.

I suppose the emulators could also emulate a body and the external world for the paper-entity.

~~ Paul
 
westprog said:
I think you'll find that a Turing machine is deterministic. I think you'll find that a coin toss isn't.
But don't you think that a Turing machine can simulate a coin toss to an arbitrary degree of randomness?
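The point can be sketched: a deterministic machine can generate coin flips that look statistically random, e.g. with a simple linear congruential generator (the multiplier and increment below are the widely published Numerical Recipes constants, chosen here purely for illustration).

```python
def lcg_flips(seed: int, n: int) -> list:
    """Deterministic pseudorandom coin flips from a linear congruential
    generator: the same seed always yields the same sequence."""
    state = seed
    flips = []
    for _ in range(n):
        state = (1664525 * state + 1013904223) % 2**32
        flips.append(state >> 31)  # top bit as heads (1) / tails (0)
    return flips

print(lcg_flips(42, 10))
print(lcg_flips(42, 10) == lcg_flips(42, 10))  # True: fully deterministic
```

So the Turing machine's flips are reproducible given the seed, which is exactly the sense in which it differs from a physical coin toss, while still approximating one to any desired statistical degree.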

~~ Paul
 
A very precise, mathematical definition of the word "decidable". Which has little to do with human beings making decisions.

lol.

Alright, let's play this game a different way.

Can you give me any examples of a decision that a human makes that does not satisfy this mathematical definition of decision?
 
I'm not sure your experience could result from paper consciousness. I think the paper-entity would know it was a paper-entity.
The paper entity (in this hypothetical) is a replay of my subjective experience, so it would think what I thought, feel what I felt.

I suppose the emulators could also emulate a body and the external world for the paper-entity.
Just the subjective experiences thereof. I mean, you could in principle simulate the Universe with a large enough piece of paper and a whole lot of pencils...
 
If a coin cannot make decisions, then neither can the Turing machine.
No, Westprog. You don't get to make up random nonsense just to suit your personal biases.

Particularly when you contradict yourself in the very next sentence:

I think you'll find that a Turing machine is deterministic. I think you'll find that a coin toss isn't.
What's that you say? Coin tosses and Turing machines are fundamentally different? I couldn't have put it better myself.

Of course, it's possible to run Turing machines with different data to obtain a different result.
Which does not apply to coin tossing, of course. No matter what the data is, the results follow an identical distribution.
 
Naturally, the word "decision" doesn't occur in this randomly chosen article. I would have been astonished at this stage if RD had posted something actually relevant.
And we'd be astonished if you actually bothered to read any of the articles we provide.
 
There have been great advances in AI, but they aren't the ones that were anticipated back when LISP was emerging and there was a bright future ahead. Now AI is very good at producing firmware for washing machines, but it's still not possible for a computer to carry on a conversation.
Human prediction is fallible. BTW: The Loebner Prize is pushing us toward that very thing. The human brain is in a number of ways the most complex puzzle science has studied. It's going to be a while. That it will take some time isn't proof of anything.

I'm very agnostic about all possible solutions, but I expect AI to be a dead end and that a combination of biological research on the brain and physics will find an answer, if an answer can be found - which I regard as uncertain.
That's cool but bear in mind you've provided no justification for your prediction and the experts in the field aren't scratching their heads and throwing in the towel. IBM has committed significant resources. Perhaps you know something they don't.
 
Sure. And what enables one to comprehend?
You are completely missing the point. I asked you for your definition of "understand" and you defined it in terms of "comprehend".

But if "comprehend" means "understand" then your definition is circular.

So I still don't know what you mean by "understand".
I don't understand the point of your question. If you are not conscious you won't understand anything.

Do you mean is it possible to be conscious of something and not understand it?
No, I mean are you ruling out that I can be conscious of something and not understand what I am conscious of?
What do you mean by "understand"?
Again you are missing the point. I am trying to establish what you mean by "understand".

You have already said that my definition is irrelevant to the argument. I just want to know what definition of "understand" is relevant to the argument.
 
I'm not sure your experience could result from paper consciousness. I think the paper-entity would know it was a paper-entity.

I suppose the emulators could also emulate a body and the external world for the paper-entity.

~~ Paul
That is a given of the argument. The program that is being desk checked is a neuron (or lower) level model of the human brain with all appropriate sense inputs modelled too.

So this brain would know nothing of the pencils and papers that are being used to desk check the algorithm.

So you would have to assert that your experience right at this moment could be the result of billions of numbers written on paper with pencils.

Pixy says yes, without question. Do you say yes, without question?
 
In any case, there's no "self-awareness" involved.
So you keep saying, but I have no idea what you mean by "self-aware".
There's a computer and there's a car. The computer controls the car. The car is no more self-aware than it is when it's controlled by a human being.
OK, let's go through this again.

Is the computer-programmed car aware of its environment?

Can it be aware of another car in its environment?
 
Without a concept of self, you cannot be self aware, obviously. You're just aware.
So you are saying that X can only be aware of Y if X has a concept of Y. Yes?

But you are referring to a robotic car as being "aware" aren't you?

So does a robotic car have a concept of a tree?

Does a robotic car have a concept of another car on the road?
 
We're well on the way.
We won't have long to wait then.
And it's irrelevant in any case. Either you are asserting that brains are magical, or we can model them.
So your definition of "magic" is "unmodellable" is it?

In any case I didn't say it was unmodellable, I just wondered if a computer model of the brain would produce animal like behaviour in the modelled animal.
There is no rational reason to take that position.
So there is no rational reason to wait until the successful completion of an experiment before deciding if it will be successful?
 
We won't have long to wait then.
I expect to see significant results from simulations of animal brains within the next decade. We have the capacity to do the simulations already, but as far as I know this hasn't been applied to behaviour studies for advanced animals.

So your definition of "magic" is "unmodellable" is it?
Pretty much, yes.

In any case I didn't say it was unmodellable, I just wondered if a computer model of the brain would produce animal like behaviour in the modelled animal.
How can it not? Is it a model or not? You just said it was a model. If it's a model, then it produces a model of the behaviour of the system being modelled. That's the entire point of models. If it doesn't do that, it's not a model.

So there is no rational reason to wait until the successful completion of an experiment before deciding if it will be successful?
There is no rational reason to assert that a physical system cannot be modelled.
 
PixyMisa said:
Robin said:
So you are saying that X can only be aware of Y if X has a concept of Y. Yes?
Yes.
But you are referring to a robotic car as being "aware" aren't you?
Yes.
So does a robotic car have a concept of a tree?
Yes.
Does a robotic car have a concept of another car on the road?
Yes.
OK, so explain why you think it can have a concept of one car on the road and not of another car on the road.
 
