
Automatons

Hi

Take any definition of any automaton and you will find that there is no limit imposed on the size of the set of states or the alphabet.
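For concreteness, here is a minimal sketch of the standard textbook object, the deterministic finite automaton as a 5-tuple (Q, Σ, δ, q0, F). The even-'a's machine and the helper name make_dfa below are my own illustrative inventions, not something from this thread:

```python
# A minimal sketch of the textbook DFA 5-tuple (Q, Sigma, delta, q0, F).
# The example machine is illustrative only.

def make_dfa(states, alphabet, delta, start, accepting):
    """Bundle the five components of a DFA into an acceptance test.

    'states' is carried for completeness of the 5-tuple; the transition
    table 'delta' references it implicitly.
    """
    def accepts(word):
        state = start
        for symbol in word:
            if symbol not in alphabet:
                return False  # symbols outside Sigma are rejected
            state = delta[(state, symbol)]
        return state in accepting
    return accepts

# Example: a two-state DFA accepting strings with an even number of 'a's.
even_as = make_dfa(
    states={"even", "odd"},
    alphabet={"a", "b"},
    delta={("even", "a"): "odd", ("odd", "a"): "even",
           ("even", "b"): "even", ("odd", "b"): "odd"},
    start="even",
    accepting={"even"},
)
assert even_as("abba") and not even_as("ab")
```

Note that the definition requires Q and Σ to be finite, but imposes no upper bound on how large either may be.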

... clip ...


I stand corrected.

...since I can construct an infinite number of sentences (infinite complexity) using standard English grammar and only the words "dogs," "bark," and "and" (noun, verb, conjunction), right?

I misunderstood the comment. Sorry.
 
There is no soul and no afterlife.

I know it is hard to accept this when you are young; heck, it's even difficult for some people when they are old.

But this is how it is, or rather, there is no reliable evidence that it is otherwise.

If it is any help, I was terrified when I realized this in my youth, just as you perhaps are terrified. But I'm OK about it now. I suspect you will be too in a decade.

You're attributing a belief to me that I never said I have. There seems to be some confusion here in that some folks believe I'm arguing that we must have souls to have value. There is no evidence for the soul, and I wasn't arguing in favor of one. I just wanted to debate why we choose to go on living as rational beings who are capable of observing our existence as a physical process in a physical universe that doesn't care what we do.

Hi

I've got a question for you guys:

Manufactured machines (in the original sense), as long as they don't consume some essential part of themselves in operation (as thermal batteries and pressure accumulators do), can be turned back on once they've been turned off.

If we turn off a living organism, we can't turn it back on.

Do you think this is a matter of resources consumed (food and oxygen, for instance), or is it a matter of the complexity of the machine and "turned off" being too widely distributed, or some combination of both?

(Please note that I am not asking about a "soul," about which my beliefs seem to be at odds with mainstream Christians and atheists both. I'm interested in the shared concept of the nature of a "machine" here.)

Others gave more complicated answers, but the simple truth is that if we continued to live after death, overpopulation would strike the planet, and we would die from lack of resources. Thus, life evolved to endure for a period of time before making room for the next life.
 
Well, at increased levels of complexity it can develop and defend beliefs about itself, one of which could be that it is not an automaton. There are philosophical routes by which it could view its whole genesis in space and time in a new light and assert that the human understanding of how it came into being is actually limited.

It has also been remarked that certain human traits or phenomena, such as "understanding," cannot be accounted for by reference to simpler machines but can be accounted for in similar machines if more complexity is present.

Nick

That doesn't address my question.

If an automaton is so complex that it develops and defends beliefs about itself, one of which is that it is not an automaton, is it no longer an automaton?

That is like saying that once a curve features more than X direction changes it is no longer a curve, or that once a polygon features more than X vertices it is no longer a polygon. Clearly a stupid thing to say, given the definition of a curve and a polygon.

Yet here we have people saying that humans are not automatons because -- even though we satisfy all the criteria in the definition of automaton -- we are "more complex" than a "real" automaton. What?
 
The term "cell-based automaton" sounds like a straw man version of materialism.

I reject the notion of a soul (due to lack of evidence), but I do not think that my mind is equivalent to a collection of any microscopic units of matter.

Again, the atoms are organized mostly into molecules, which are mostly organized into cells, which are organized into tissues, which are organized into other structures, which are organized into organs. At every level of organization there are emergent properties.

But if your brain is constructed of physical matter, then are you not an automaton going through the motions? What would be the difference between a human brain and an artificially constructed brain? Perceiving your own thoughts and qualia is a function of the brain, probably a result of our evolved ability to empathize with others, which allows us to empathize with ourselves. This self-awareness could be emulated in a complex artificial brain given the technology to build it, which raises the question of when emulation stops being emulation and starts being what we already are.

Calling a human a "cell-based automaton" is no more valid than calling him an atom-based automaton. The stuff that we call mind or consciousness emerges only at higher levels of organization (higher than the cell, certainly).

Atoms, cells--choose any unit of measurement you wish. I used "cell-based automaton" just to reference the fact that we're made of units of matter arranged to behave as a functioning system, like a robot.

Here's a thought experiment for fun. Imagine Willy Wonka's teleporter was real--it took apart our atoms, shot them through a tube, and reconstructed them in another room. Would that person be alive even though they were technically "dead" as they shot through the tube as bits of matter? What is the difference between making a human being out of atoms and making a robot out of atoms? Does that mean the robot is just as alive as we are, or that we are as automatic as the robot?
 
That is not to say that the behaviour of a human mind could necessarily be reproduced by an automaton.

If there is some behaviour of a human mind which an automaton cannot reproduce, doesn't that suggest that human minds are more complex than any automaton?
 
If there is some behaviour of a human mind which an automaton cannot reproduce, doesn't that suggest that human minds are more complex than any automaton?

To me, it only suggests that the era in which the automaton was constructed didn't yet have the technology to reproduce the behavior of the human mind, but that it would only be a matter of time as hardware and software continue to progress.
 
To me, it only suggests that the era in which the automaton was constructed didn't yet have the technology to reproduce the behavior of the human mind, but that it would only be a matter of time as hardware and software continue to progress.

Not in the context of the rest of Robin's post, which relates to the set of all possible automata, not those of a particular era.
 
You are what you are. You can't go back into the past (i.e., the origin of life) and change that. You can, of course, make up all kinds of stories to fool yourself (and others) into believing you're something more. I'd rather believe what I can determine is the truth. Reality as it is, not how I want it to be.

That's my decision.
 
That doesn't address my question.

If an automaton is so complex that it develops and defends beliefs about itself, one of which is that it is not an automaton, is it no longer an automaton?

Well, like I said, with sufficient complexity it could perhaps develop skills good enough to convince its creators that its origins were not truly what they had interpreted them to be. I admit things are getting a bit contrived here! I don't know the precise definition of an automaton, or indeed a human (the relevant definition here).

RD said:
That is like saying that once a curve features more than X direction changes it is no longer a curve, or that once a polygon features more than X vertices it is no longer a polygon. Clearly a stupid thing to say, given the definition of a curve and a polygon.

Undoubtedly the case, but with more complex machines things can change.

RD said:
Yet here we have people saying that humans are not automatons because -- even though we satisfy all the criteria in the definition of automaton -- we are "more complex" than a "real" automaton. What?

Yes, I think they're talking nonsense. I was just discussing the point you raised a bit really.

Nick
 
But if your brain is constructed of physical matter, then are you not an automaton going through the motions?

If this automaton has all the traits of a human, meaning both behaviourally and in terms of its inner world, then I'd say they're the same. I think we're still some way from creating an automaton that can actually feel.

Nick
 
If there is some behaviour of a human mind which an automaton cannot reproduce, doesn't that suggest that human minds are more complex than any automaton?

If complexity is defined in that way, then yes.

But there is no reason to think that there is any behavior of a human mind that an automaton cannot reproduce.
 
If there is some behaviour of a human mind which an automaton cannot reproduce, doesn't that suggest that human minds are more complex than any automaton?
No, it would suggest that some things are not automata.
 
Hi

If complexity is defined in that way, then yes.

But there is no reason to think that there is any behavior of a human mind that an automaton cannot reproduce.


It's not a matter of reproducing a behavior. It's a matter of inventing a new behavior.

Rube Goldberg machines could open bottles and make breakfasts, and were incredibly complex, but all they did was open bottles and make breakfasts. My 'Dogs Bark And' example, above, is capable of infinite complexity, but it doesn't really DO anything but convey that dogs bark.

Complexity in itself doesn't necessarily imply function.
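A toy sketch of the point, assuming the grammar S → "dogs bark" | S "and" S (my reading of the example above, not a formal spec from the thread): the language is infinite, yet every sentence conveys the same thing.

```python
# Toy sketch of the "Dogs Bark And" grammar: assuming
# S -> "dogs bark" | S "and" S, infinitely many grammatical
# sentences exist, yet none adds any new function or meaning.

from itertools import count, islice

def dogs_bark_sentences():
    """Yield ever-longer sentences from the three-word grammar."""
    for n in count(1):
        yield " and ".join(["dogs bark"] * n)

for sentence in islice(dogs_bark_sentences(), 3):
    print(sentence)
# dogs bark
# dogs bark and dogs bark
# dogs bark and dogs bark and dogs bark
```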

I'm still an autonomaton.
 
I'm still an autonomaton.

And yet, you:

1) are composed entirely of parts which we know to be automata

2) cannot demonstrate any behavior which couldn't be demonstrated by automata

So what you are really saying is that an autonomaton is a type of automaton. In particular, the type that thinks it is an autonomaton.
 
But if your brain is constructed of physical matter, then are you not an automaton going through the motions?
But what does that mean?

Again, it sounds like what you're after is something like a question of free-will or perhaps subjective consciousness. How do you measure whatever this is you're talking about?

What would be the difference between a human brain and an artificially constructed brain? Perceiving your own thoughts and qualia is a function of the brain, probably a result of our evolved ability to empathize with others, which allows us to empathize with ourselves. This self-awareness could be emulated in a complex artificial brain given the technology to build it, which raises the question of when emulation stops being emulation and starts being what we already are.
Yes--this is the philosophical "zombie" question. I think the problem is that whatever one thinks might be added by "the soul" is undefined, so the question is really undefined.

I agree the mind is wholly a function of the material. I see no problem with the possibility that a computer or something could be made that has enough layers of complexity that whatever that thing is--subjective consciousness, free-will or whatever it is that is lacking when an "automaton" is "just going through the motions"--could arise.

Atoms, cells--choose any unit of measurement you wish. I used "cell-based automaton" just to reference the fact that we're made of units of matter arranged to behave as a functioning system, like a robot.
And this is the part that to me sounds like a straw man.

It doesn't matter that all regular matter is based on atoms when you're talking about the brain. It also doesn't matter that all life is based on cells when you're talking about the brain. The levels of complexity that are of interest to the question you're positing start much higher. Why not say a "brain-based automaton"?

So if you're building an artificial intelligence (or "automaton"--which really isn't the same thing), for purposes of a Turing test, it doesn't matter if it's silicon or pixels or whatever. What matters is a much higher level of organization.

Here's a thought experiment for fun. Imagine Willy Wonka's teleporter was real--it took apart our atoms, shot them through a tube, and reconstructed them in another room. Would that person be alive even though they were technically "dead" as they shot through the tube as bits of matter? What is the difference between making a human being out of atoms and making a robot out of atoms? Does that mean the robot is just as alive as we are, or that we are as automatic as the robot?
Yes--you're asking the zombie question. (Except you're now getting distracted with "alive" and "dead" which really isn't what you're after at all, is it?)

So here's my response to your question: until you define what's missing without a soul (or what's present with a soul), I don't think anything is missing.

That is, I don't think the soul exists until you provide some positive proof for its existence. These thought experiments and hypothetical zombies (what you're calling automata) do not in any way at all add any evidence pointing to the existence of a soul.
 
If you define an automaton as something capable of autonomous action, but lacking free will, the real question is what do you mean by free will?
Exactly. I've seen this same discussion many times before, but substitute "zombie" for automaton. Or if the question isn't specifically "free will" maybe it's "subjective consciousness".

At any rate, it looks like an attempt to argue in favor of dualism even though there is no evidence to support that position.


If the human mind could be (theoretically) simulated by a computer, then yes, even the simplest cellular automaton could be that complex. Hell, with the power of a universal Turing machine, it could emulate a computer powerful enough to simulate the entire universe, everything and everyone in it included. (Of course, you'd have to build a computer powerful enough to run a cellular automaton large enough to do this first.) :)
Sorry to have gotten on this derail. I was just making the comment that the current capabilities of cellular automata might be described as "incredibly complex" (given the relatively simple rules they're based on), but that this currently isn't in the same ballpark as the complexity of the brain. Then people started ignoring the context (in particular, that I was talking only about the current capability of cellular automata) and taking my comment to say that no automaton (really AI) could ever be made that rivals the complexity of the human brain.

I never said that. I don't believe that.

As you point out here, though, if I had a computer powerful enough to run that sort of cellular automaton, I suspect there would be better ways of making an AI.
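As a concrete reference for "relatively simple rules": Rule 110 is a one-dimensional cellular automaton that has been proved Turing complete, so in principle it can carry out any computation given enough cells and steps. A minimal sketch follows; the grid width and step count are arbitrary choices of mine.

```python
# Minimal sketch of Rule 110, a one-dimensional cellular automaton
# proved Turing complete: very simple local rules, yet (in principle)
# universal computation. Grid width and step count are arbitrary.

RULE = 110  # each 3-cell neighborhood indexes one bit of the number 110

def step(cells):
    """Apply one synchronous update; the edges are padded with zeros."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

cells = [0] * 30 + [1]  # start from a single live cell at the right edge
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```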
 
You're attributing a belief to me that I never said I have. There seems to be some confusion here in that some folks believe I'm arguing that we must have souls to have value. There is no evidence for the soul, and I wasn't arguing in favor of one. I just wanted to debate why we choose to go on living as rational beings who are capable of observing our existence as a physical process in a physical universe that doesn't care what we do.
Sorry, I misunderstood you.

The reason we are able to go on living in a universe that doesn't care about us or anybody else is precisely because we know that this is the only round we get.

There is no afterlife, there is no before-life, there is only the life you now have. It will probably last 80 years. That's it.

So live your 80 years to the fullest. I do.
 
Yes--you're asking the zombie question. (Except you're now getting distracted with "alive" and "dead" which really isn't what you're after at all, is it?)

So here's my response to your question: until you define what's missing without a soul (or what's present with a soul), I don't think anything is missing.

That is, I don't think the soul exists until you provide some positive proof for its existence. These thought experiments and hypothetical zombies (what you're calling automata) do not in any way at all add any evidence pointing to the existence of a soul.

I'm not arguing that there's a soul. I've been asking everyone how they grapple with the fact that they're soulless, physical objects. The discussion deviated from that, which is probably my fault for using terms like "automatons" to frame it.
 
I've been asking everyone how they grapple with the fact that they're soulless, physical objects.
In post #18, after at least a dozen simple, straightforward replies, you were "struggling with the thought of non-existence and non-purpose."

Now that you have had quite a bit of elaboration on those answers, do you still think the word 'grapple' is appropriate?
 
