
Automatons

Calling a human a "cell-based automaton" is no more valid than calling him an atom-based automaton. The stuff that we call mind or consciousness emerges only at higher levels of organization (higher than the cell, certainly).


I thought he was referring to cellular automata, such as the Game of Life, which despite its incredibly simple "laws" of physics is capable of incredibly complex processes, and is even (theoretically) capable of universal computation (i.e. it can be used to emulate a computer, or anything a computer could possibly simulate).
 
I thought he was referring to cellular automata, such as the Game of Life, which despite its incredibly simple "laws" of physics is capable of incredibly complex processes, and is even (theoretically) capable of universal computation (i.e. it can be used to emulate a computer, or anything a computer could possibly simulate).
That's what I said earlier, but it would be an awful stretch to think humans are cellular automata.

I'm pretty sure he was talking about biological cells. He asked something like, if humans don't have souls, are we then nothing more than cell-based automata?

I don't think he's standing behind that*, but to me it sounds an awful lot like some of the straw man arguments I've heard used by believers in dualism (of one form or another) to put down materialism.

By the way, calling cellular automata "incredibly complex" might be fine in the context of unprogrammed graphics (or mathematical models, or whatever), but they are certainly not on par with the incredible complexity of the human mind, right?

*ETA: Actually I think maybe he only meant either humans have a soul or they are merely biological automata--which really isn't the point I've been criticizing. If that's the case, it sounds like maybe he's saying a soul is necessary for free will.
 
By the way, calling cellular automata "incredibly complex" might be fine in the context of unprogrammed graphics (or mathematical models, or whatever), but they are certainly not on par with the incredible complexity of the human mind, right?
Surely automata can be as complex as you want. They can be as complex as a human mind or vastly more complex.
 

I'm pretty sure it'll be impossible to cite anyone saying that conceptually there's no limit to how complex we could possibly make an automaton.

ETA: That doesn't make it false; there's no reason to suspect that we can't keep adding complexity.
 
Surely automata can be as complex as you want. They can be as complex as a human mind or vastly more complex.
Maybe so, but I specifically said "cellular automata". These "cells" aren't something that is analogous to a biological cell. They're just squares (or pixels or whatever) in a grid. The value of a given cell is determined by applying relatively simple rules based on the values of neighboring cells.

They are "incredibly complex" in a very limited sphere. (I'd say a better term is "surprisingly complex".) They are nowhere near as complex as the human mind. Which is why I correctly figured out that that's not what the OP meant by "cell-based" automata.
 
So you are saying at some level of complexity an automaton becomes something more than an automaton?
No. It was a remark specifically made about cellular automata.

Several of you seem to have missed, ignored or completely misunderstood the word "cellular" in the term "cellular automata" (and the context of my reply).
 
I'm pretty sure it'll be impossible to cite anyone saying that conceptually there's no limit to how complex we could possibly make an automaton.

ETA: That doesn't make it false; there's no reason to suspect that we can't keep adding complexity.
You think you can make a "cellular automaton" as complex as a human mind? Or maybe you also missed the term.

Please read Brian-M's post and the entirety of my reply to it. You seem to think that I said automata cannot be made as complex as a human mind. I did not.

We were trying to parse the OP's mention of "cell-based" automata. I thought it might be an attempt to use "cellular automata" but dismissed the idea since it was a bad fit to what he said. I then took it for a straw-man (or reduced to absurdity) version of materialism. He conceded that "cell-based" was arbitrary, which leads me to believe what he really meant was merely "biological automata".

If that's the case, as I mentioned above, it sounds like he's saying either the soul exists or there is no such thing as free will. Otherwise, I'm not sure what he means by "automaton". (I guess he could be saying the mind and consciousness itself doesn't exist without a soul, but that's a pretty flimsy argument.)
 
I was under the impression that "cell-based" was interchangeable with "biological".
Yes. Me too.

The comment of mine referred to in this "complexity" discussion (my reply to Brian) used "cellular automata" which is NOT interchangeable with "cell-based" or "biological".
 
Well, "cellular automation" could fit as an analogy. (You could think of our universe itself as a cellular automation with a very complex set of rules.)

If you define an automaton as something capable of autonomous action, but lacking free will, the real question is: what do you mean by free will?

Free from what, exactly? The laws of cause and effect? External influence? If that's the case, then nobody has free will.

My personal concept of free will has the following requirements...

1. A crude understanding of how your environment/world behaves.
2. Some idea of what actions you are capable of, and the likely results of taking those actions.
3. Desires. (Whether innate/hereditary, environmentally influenced, or intellectually determined.)
4. The capacity to choose your actions based on achieving your desires.

No intangible soul required.
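Purely as an illustration of how requirements 1-4 read as an algorithm (and without claiming this settles anything about free will), here is a toy sketch in Python; all the names and example data are hypothetical:

```python
# A toy agent loop illustrating points 1-4 above: a crude world model,
# a set of available actions with predicted outcomes, a set of desires,
# and a choice rule. Purely illustrative.

def choose_action(world_model, actions, desires):
    """Pick the action whose predicted outcome best satisfies the desires."""
    def score(action):
        predicted = world_model(action)              # 2. likely result of the action
        return sum(weight for desire, weight in desires.items()
                   if desire in predicted)           # 3. how much we want that result
    return max(actions, key=score)                   # 4. choose accordingly

# Hypothetical example data (1. a crude model of how the world behaves):
world_model = lambda action: {"eat": {"fed"}, "sleep": {"rested"},
                              "work": {"paid", "tired"}}[action]
desires = {"fed": 1, "rested": 1, "paid": 3, "tired": -1}
print(choose_action(world_model, ["eat", "sleep", "work"], desires))  # -> "work"
```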


By the way, calling cellular automata "incredibly complex" might be fine in the context of unprogrammed graphics (or mathematical models, or whatever), but they are certainly not on par with the incredible complexity of the human mind, right?


Wikipedia said:
From a theoretical point of view, it is interesting because it has the power of a universal Turing machine: that is, anything that can be computed algorithmically can be computed within Conway's Game of Life.


If the human mind could be (theoretically) simulated by a computer, then yes, even the simplest cellular automaton could be that complex. Hell, with the power of a universal Turing machine, it could emulate a computer powerful enough to simulate the entire universe, everything and everyone in it included. (Of course, you'd have to build a computer powerful enough to run a cellular automaton large enough to do this first.) :)
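As a side note on how little machinery universality needs: Rule 110, a one-dimensional cellular automaton whose entire rule is a lookup table over three-cell neighbourhoods, has also been proved Turing complete. A minimal sketch (Python; the variable names and starting row are my own choices):

```python
# Rule 110: a lookup table mapping each 3-cell neighbourhood (left, centre,
# right) to the centre cell's next state. Despite this tiny rule, Rule 110
# is known to be Turing complete.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Advance one generation of a 1D row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and print a few generations.
row = [0] * 31 + [1] + [0] * 31
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Iterating this from simple starting rows already produces the interacting structures that the universality construction builds on.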
 

Attachments: two Game of Life animations - a lightweight spaceship (Game_of_life_animated_LWSS.gif) and Gosper's glider gun (Gospers_glider_gun.gif).
Yes. Me too.

The comment of mine referred to in this "complexity" discussion (my reply to Brian) used "cellular automata" which is NOT interchangeable with "cell-based" or "biological".

In general, no.

In this context, I think they might be, because cellular automata can be Turing complete. In theory, one should be able to create one that questions its own purpose just as we do.

ETA: Argh, Brian-M beat me to it!
 

I'm pretty sure it'll be impossible to cite anyone saying that there's no limit to how complex we could possibly make an automaton, speaking purely on a conceptual basis; there are rather obvious technological difficulties at the moment.

I'm not sure if I can rephrase that on the little sleep I'm going on and have it remain coherent, so if that didn't do it I'll get around to it later today.
 
Maybe so, but I specifically said "cellular automata". These "cells" aren't something that is analogous to a biological cell.
Well duh.
They're just squares (or pixels or whatever) in a grid. The value of a given cell is determined by applying relatively simple rules based on the values of neighboring cells.

They are "incredibly complex" in a very limited sphere. (I'd say a better term is "surprisingly complex".) They are nowhere near as complex as the human mind.
Again, I know of no upper limit of complexity for cellular automata - the size of the grid is not bounded. You could have 10^1000 squares or more. Nobody knows what behaviour they might be capable of.

Don't forget the human mind also operates according to a set of simpler rules. Different kinds of rules, certainly, but that does not make it more complex by definition.
 
So you are saying at some level of complexity an automaton becomes something more than an automaton?

Well, at increased levels of complexity it can develop and defend beliefs about itself, one of which could be that it is not an automaton. There are philosophical routes by which it could view its whole genesis in space and time in a new light and assert that the human understanding of how it came into being is actually limited.

It has also been remarked that certain human traits or phenomena, such as "understanding," cannot be accounted for by reference to simpler machines but can be accounted for in similar machines if more complexity is present.

Nick
 
Well, at increased levels of complexity it can develop and defend beliefs about itself, one of which could be that it is not an automaton. There are philosophical routes by which it could view its whole genesis in space and time in a new light and assert that the human understanding of how it came into being is actually limited.

Also, it wouldn't be able to perceive or detect the individual cells. It would only be able to detect the stable patterns, such as 'gliders', as some kind of fundamental particles, and speculate about their strange behavior and interactions using abstract mathematics.

Exactly the way we do with quantum physics.
 
Hi
Robin said:
Surely automata can be as complex as you want. They can be as complex as a human mind or vastly more complex.
Cite?
Take any definition of any automaton and you will find that there is no limit imposed on the set of states and alphabets.

Take for example "Discrete Mathematics - Richard Johnsonbaugh" for a finite state machine:

(a) A finite set I of input symbols
(b) A finite set O of output symbols
(c) A finite set S of states
(d) A next-state function f from S x I into S
(e) An output function g from S x I into O
(f) An initial state δ in S
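A minimal rendering of that definition as code might look like the following sketch (Python; the example machine, a running parity checker, is my own and is not from Johnsonbaugh):

```python
# A finite-state machine as the tuple (I, O, S, f, g, initial state) from the
# definition above. The example machine outputs the running parity of the
# 0/1 symbols it has read; it is illustrative only.

I = {0, 1}                      # input symbols
O = {"even", "odd"}             # output symbols
S = {"E", "D"}                  # states: even / odd number of 1s seen so far
f = {("E", 0): "E", ("E", 1): "D",        # next-state function f: S x I -> S
     ("D", 0): "D", ("D", 1): "E"}
g = {("E", 0): "even", ("E", 1): "odd",   # output function g: S x I -> O
     ("D", 0): "odd",  ("D", 1): "even"}
start = "E"                     # initial state

def run(machine_input):
    state, outputs = start, []
    for symbol in machine_input:
        outputs.append(g[(state, symbol)])
        state = f[(state, symbol)]
    return outputs

print(run([1, 0, 1, 1]))  # -> ['odd', 'odd', 'even', 'odd']
```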

There is nothing in this definition to limit the size of I, O or S. See also Chaitin's definition of Omega:
Ω = the sum, over every program p that halts, of 2^(-|p|), where |p| is the length of p in bits.

(Gregory Chaitin - Meta Maths!)

This suggests the set of all possible automata (even of a given type) is infinite.

So an automaton can be as complex as you want. And as long as the human mind is finitely complex, it could be as complex as the human mind.

Now it could be said that the human mind is infinitely complex in terms of the number of possible states. In that case an analog probabilistic automaton could also be infinitely complex in this sense.

That is not to say that the behaviour of a human mind could necessarily be reproduced by an automaton.

(Note - cellular automata are a subset of automata).
 
