Robot consciousness

Thanks for the links.

But if a version of CTM proves accurate, consciousness is still a very high-level function, and neural impulses (the building blocks of all high-level functions) are still ephemeral. Therefore, I don't see how we can get around the conclusion that there must be a floor for impulse speed below which consciousness is unsustainable.
If neurons are working to a set of rules that we can figure out, it should also be possible to construct a machine that can carry out these rules, be it in silicon or pencil/paper. Such a machine would not be bound by the biological/physical constraints on the brain, and therefore computation could be slowed down indefinitely. The machine would be conscious because it is directly equivalent to a conscious brain, but the slow speed of computation and the huge amounts of data involved would make it unlikely that we would ever recognise it as conscious.

Please notice that a CTM would not be necessary for such a machine to work, provided that every neuron of a real brain is included, and that every function of a real neuron is simulated.

However, in the real world, such a simulation is impossible, and a CTM is necessary for us to understand what is going on in a brain, and also in order to cut down the size of the simulation.
 
If neurons are working to a set of rules that we can figure out, it should also be possible to construct a machine that can carry out these rules, be it in silicon or pencil/paper. Such a machine would not be bound by the biological/physical constraints on the brain, and therefore computation could be slowed down indefinitely. The machine would be conscious because it is directly equivalent to a conscious brain, but the slow speed of computation and the huge amounts of data involved would make it unlikely that we would ever recognise it as conscious.

Well, regardless of an external observer's ability to recognize it as conscious, what would be this entity's experience?

However, in the real world, such a simulation is impossible

What other world are we talking about? :)
 
Well, regardless of an external observer's ability to recognize it as conscious, what would be this entity's experience?
How do you describe the experience of consciousness?

What other world are we talking about? :)
There is this theoretical world where billions of neurons can be simulated by pencil and paper ...
 
Well, regardless of an external observer's ability to recognize it as conscious, what would be this entity's experience?
Piggy, in my naive way, that is exactly what I'd like to know. I'll go back to lurking now.
 
Piggy said:
First, we cannot assume that this brain is some sort of TM. That would be jumping the gun.
Agreed, although I'm not sure what more it could do. Heck, even a nondeterministic Turing machine is no more powerful than a TM.

Instead, we must assume it works like a human brain, which may be some sort of TM equivalent, but maybe (I'd say most probably) not.

Well, obviously if we slow down the rate to zero, there's no consciousness. (That's why I kept bringing that up -- not because I thought you were arguing otherwise, but as one end of a continuum.)

Somewhere between natural brain-speed and 0, then, there's a point where consciousness is not sustainable. Is it at 0?

No, it can't be. It must be higher than 0. Why? Because from cases like Marvin's, and more recent research demonstrating that we act on stimuli before we are consciously aware of them, we see that consciousness is a specialized downstream process involving the coordination of highly processed information. Because the phenomenon of conscious awareness requires coordination of coherent aggregate data, and because neural impulses are ephemeral, there must be a point higher than 0 at which coherence is insufficient to maintain conscious awareness.
The impulses are ephemeral because the brain uses chemistry. But is this ephemeral nature required? Perhaps a less ephemeral substrate, such as electronics, could operate arbitrarily slowly.

Thanks for the interesting links.

~~ Paul
 
The impulses are ephemeral because the brain uses chemistry. But is this ephemeral nature required? Perhaps a less ephemeral substrate, such as electronics, could operate arbitrarily slowly.

I wonder if you could build a brain that used non-ephemeral tokens, so that signals worked something like objects on an assembly line, and when enough tokens had arrived the crew then assembled them.
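
Just to make that concrete, here's a rough sketch of such a token-counting unit (purely hypothetical; the batch size and "assembly" step are made up):

```python
# Hypothetical token-based unit: each signal is a durable token that sits
# in a bin until a full batch has arrived, so arbitrarily long gaps
# between arrivals don't matter -- nothing decays while the unit waits.
class TokenUnit:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.bin = []  # tokens wait here indefinitely

    def deliver(self, token):
        """Drop one token on the 'assembly line'; assemble when enough arrive."""
        self.bin.append(token)
        if len(self.bin) >= self.batch_size:
            batch = self.bin[:self.batch_size]
            self.bin = self.bin[self.batch_size:]
            return self.assemble(batch)
        return None  # still waiting for the crew's quota

    def assemble(self, batch):
        # "the crew then assembled them" -- here, just combine the tokens
        return sum(batch)

unit = TokenUnit(batch_size=3)
print([unit.deliver(t) for t in (1, 1, 1)])  # -> [None, None, 3]
```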

That's a really tough question b/c then you get into the issue of how long the "flicker" has to be, and how much and what kind of pre-processed data is necessary for a minimum-length moment of conscious awareness to occur.
 
I think that this debate rests in part on whether you think human-type consciousness (there may or may not be other types) is realizable on non-biological hardware. It could turn out that a human brain is essential for a humanoid mind--which is equivalent to saying that the mind is not a Turing machine.

There is another aspect to the debate which seems to be about what the subjective experience of such an alternately-realized mind would be like, or even whether or not it would have subjective experience (I use this term rather than 'consciousness' because it's slightly more precise).

I believe that subjective experience occurs wherever you find feedback loops. Where you have fantastically complex feedback loops, like in the human mind, you have a rich and colorful subjective experience. Wherever you have simple feedback loops--like pointing a video camera at its own monitor--you have a correspondingly simple subjective experience.

If this story turns out to be the case, then it seems that the pencil-and-paper-and-human mind would experience whatever the human encoded as sensory input, and at whatever speed the human was able to enter it. Animals (and plants, to a degree) experience the world at all kinds of time-scales. The extremely slow speed might be similar to how a redwood tree would experience the world. It might be like a gnat trying to imagine what the world looks like to a relatively slow-moving human.
 
There is this theoretical world where billions of neurons can be simulated by pencil and paper ...

I'm not sure I really want to dive into the "pencil brain" thought experiment again, but if we do, I'd have to ask for a complete description of the hypothetical set-up.

Also, if we're talking about simulating billions of neurons, what sort of simulation do we mean?

Are we talking about a virtual or actual simulation?

For example, let's say I want to simulate the impact of an object on another object. For instance, testing the theory that dislodged foam could have damaged the space shuttle's protective tiles. Can't afford to actually launch a shuttle and intentionally dislodge some foam, so I have to simulate.

I can do it virtually -- that is, run a computer simulation wherein I create a virtual world with virtual objects that have all the right virtual properties and send my virtual foam chunk hurtling into the virtual tile at the right virtual speed to see what happens.

Or I can do an actual simulation, wherein I take some foam and cut it to the right size, take a section of tile and set it up good and steady, then simulate the event by somehow shooting the foam toward the tile at the right speed from some sort of cannon.

In the latter case, if it works, I end up with a busted tile. It's a simulation of an event, but the simulation essentially replays the event in reality, thereby reproducing it. The event happens again.

In the former case, there is no busted tile, and there is no impact event. There are computer parts moving and electrons being excited and heat being produced, etc., and all that happens in such a way as to remind me of an actual event.

If the only observer to the actual simulation were a dog, the event would still have happened. There would still be one more busted tile in the universe.

If the only observer to the virtual simulation were a dog, then there wouldn't even be any simulation, when you get right down to it, just the physical and electronic actions of the machine.

That's why I say roger is wrong to say that I'm somehow introducing dualism into the pencil scenario (as I understand it -- perhaps I was incorrect about the setup he proposed). As I envisioned that scenario, it was inherently dualistic. You actually do have the equivalent of a homunculus, a ghost in the machine -- that is, the man behind the pencil. In the human brain, or in a circuit brain, there is no homunculus.

So if we get into the pencil brain thing, I think it would help to have a complete description of what's going on, and clarity regarding whether this is supposed to be an actual or virtual simulation.
 
Reading back over this, I noticed it's a little disjointed. I'm at work trying to get my thoughts out relatively quickly, so there's that.

Also, I know my idea about subjective experience might seem to do violence to our everyday conceptions of what it is, but since SE is by nature not accessible to outsiders, I don't think it's much of a problem. I have a decent idea what it's like to be you, less of an idea what it's like to be a bat, even less for a worm... Who's to say that there is no "what it's like" to be the stock market, or a video camera-and-monitor, or a heater-and-thermostat?
 
How do you describe the experience of consciousness?

Attempting that can sometimes lead into unnecessary distractions.

I can give examples that might serve the purpose better.

There are some interesting ones that illustrate your earlier point about consciousness being temporally dislocated.

Most of us have had the experience of lazily skimming a book or an article, maybe thinking about what we need to do in the yard that afternoon, then suddenly we realize, "Hey, wait a minute... did he just say....?" We realize, after the fact, that a second or two ago we read something startling or bizarre.

We go back up the page and sure enough, that's what the author wrote.

In our distracted state, our brain was processing the words, but it wasn't moving some of the results of that processing into the areas of the brain that focus our conscious awareness onto them because that module was busy pondering whether to try to fix the bird feeder or just tear the damn thing down and start all over again.

But in the course of that processing -- which involves multiple association tasks -- certain associations were made that caused the brain to flag a particular phrase as more important than the bird feeder, so it was routed into the "be aware of this" module, bumping out the bird feeder.

There's an instance of non-conscious (or co-conscious) processing v. conscious processing.

Another familiar scenario is being at a conference event or a party or some such where lots of conversations are going on. Your mind hears everything in earshot, but you're only consciously aware of the conversation you're involved in.

But the mind is doing triage all the time. If you suddenly remember that you forgot to call your wife half an hour ago like you promised, for a moment that thought will occupy your conscious processing space (as I said, CTM models can be very useful) and you'll tune out the conversation. Chances are, you'll have to say, "Oh, I'm sorry, I just remembered something, I have to call my wife right now," and you will have "missed" the last thing the other person said to you, even though you could not have avoided "hearing" it.

Something similar happens when we hear our name spoken within earshot. It jumps out from the background noise of surrounding conversations and music there at our conference or party. Our brain flags it as important and pushes it into the brain modules that handle conscious awareness.

If it turns out to be the voice of someone you don't know talking about something irrelevant, it's just a blip and you continue your conversation essentially uninterrupted. If it's your wife, or your boss, and she's talking to or about you, you're likely to tune out the conversation for a moment, then have to refocus and say something like "I'm sorry, I couldn't hear you there" to get the other person to repeat what they just said.

So consciousness is the experience of being "aware" of something, whether it's an event happening now, or an idea, or even an event that has recently ended.
 
Also, I know my idea about subjective experience might seem to do violence to our everyday conceptions of what it is, but since SE is by nature not accessible to outsiders, I don't think it's much of a problem. I have a decent idea what it's like to be you, less of an idea what it's like to be a bat, even less for a worm... Who's to say that there is no "what it's like" to be the stock market, or a video camera-and-monitor, or a heater-and-thermostat?
And I thought I was the only non-p-zombie!!!
 
Another interesting case that illustrates conscious v. non-conscious....

I wish I could recall the race, driver, and track, but I can't.

The case involved a driver who avoided smashing into a wreck. By all accounts, he should have smashed into it, since it was on the other side of a banked curve and he was going at a speed that would have made a collision unavoidable had he reacted only when he could see it.

But he slowed down before he could see the wreck.

It was a fascinating case of "instinct". The driver said he had no idea why he slowed down -- there was no smoke, and he hadn't heard the crash.

But on further investigation, it turned out not to be instinct at all. While watching car-cam tapes on replay, the driver spotted the clue. It was the spectators.

Normally, they're looking toward the oncoming drivers and cheering.

On the tape, he could see that they were all looking away and no one was cheering.

In real time on the track, his brain had picked up that clue as he approached the curve and had slapped an enormous red flag on it saying "SLOW DOWN NOW!" Because of the importance of that message, his awareness of the spectators was pushed back out of his "be aware of this" module so quickly that it became effectively subliminal, and his urge to slow down "felt" instinctive.

But upon reviewing the video, he instantly spotted the clue that had activated his "instinct" and recalled what it was that had tipped him off.
 
It's Clever Hans, the race car driver.

That same principle--getting information but being unaware of the vector that brought it--is most likely at work in all supposed cases of ESP--at least the ones where the proponent or 'psychic' really believes in the powers. The vector is inevitably mundane, but surprising, nonetheless.

Your example brings to mind a notion in epistemology that says that knowledge is justified true belief. In the driver's case, he had a belief, it was true, but did he have justification for it? Many would say not until he realized how he acquired the true belief. But that's a tangent conversation...
 
I certainly agree that it is extremely useful to model the mind, and neurons, in that way, and that tremendous strides are being made with that model in what everyone must admit are our early explorations of brain function. But in no way has a broad-based CTM been proven. Not even close.
I'm running out of time to keep up with this thread. So, I'll perhaps unfairly only respond to a bit. However, it is the crux.

Okay, certainly it has not been proven, but it follows from everything we know about physics. Yes, physics. Physics is, as far as we know, computational. Certainly QM is - our predictions and calculations have reached a level of precision that we have never achieved in any other field.

From physics you get to chemistry. Again, chemistry is computational, so far as we can tell. We conclude this in two different ways. First, we observe that we can compute everything that we have seen so far. Second, reductionism. Chemistry reduces to physics, or QM. Put another way, QM in a macro environment is described as chemistry. And, as we know from Turing, any combination of computable elements is also computable.


My field, language, is one area in particular where CTM has not yet provided as robust an explanatory framework as we might hope. (Here's a sample critique from 2006, for example.)
"Explanatory" - I don't want to be one of those people who grasp a word out of context, but I think you probably chose this word well.

QM is not a good explanatory model of chemistry. No one uses QM to do chemistry, except in certain circumstances. There are far better models.

Yet, there is no doubt that chemistry is merely the sum behavior of QM.

Just because we can't right now come up with an easy computational model for language in no way means that language is not computational.

This is where my assertion of dualism comes in. You are saying the brain is chemicals and networks, both of which we have extraordinary evidence are computable, and then you say the sum of the parts is not computable. It just doesn't follow without a dualist element.


You said that my analogy with the daisies was "inapt because there is no programming controlling the swaying". What you forget is that there is no "programming" in the brain, either.
Piggy, here you go again, making assertions about a field you know little about. The network of neurons and the information stored in the neurons is the programming. It's a very basic tenet of information theory. Daisies are so bad an analogy to a computational brain that I'm astonished that you are suggesting that it is in any way a rebuttal to what I am saying.

And recent studies into biological systems and how they behave and evolve give us reason to doubt that purely computational systems could evolve in biological specimens.
You'll have to cite those.


Systems that are very rigid, like transistors, are very bad at absorbing shocks. They are fragile. Biological systems tend to evolve with wiggle-room. They are fuzzy. Like the heart. It can take a good bit of knocking around before it goes into fibrillation...And although it is very useful to model neurons computationally, transistors they are not.
Once again you don't understand computable, and you seize on irrelevant aspects. Computable does not mean deterministic, it does not mean rigid, it does not mean an inability to handle fuzziness. And certainly physical robustness has nothing to do with it. Finally, if neurons are computable, they are computable. We are talking about equivalence, not identity.
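
To illustrate: a unit can be as fuzzy and non-deterministic as you like and still be an ordinary computation. A minimal sketch, with made-up parameters:

```python
import random

# A fuzzy, non-rigid, yet perfectly computable "neuron": its effective
# firing threshold jitters on every evaluation, so no two runs need
# agree -- and it is still just a computation. (Illustrative values only.)
def fuzzy_fire(inputs, base_threshold=10.0, jitter=3.0):
    effective = base_threshold + random.uniform(-jitter, jitter)
    return sum(inputs) >= effective

print(fuzzy_fire([4, 3, 5]))  # may be True or False on different runs
```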



As you probably know, a simple model of a neuron consists of a synapse where the neuron picks up neurotransmitters (NTs) from adjoining neurons. When a sufficient number of NT molecules bombard the neuron, it reaches its threshold and fires, sending a signal down its length and releasing its own NTs into the next synapse. It then re-collects the NT molecules.

We can model this set-up computationally, even writing a simple program with values for n (the number of NT molecules to meet the threshold), an increment and decrement to bring the value of f (fire) from 0 to 1 and back down to 0, etc.
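
Something like this minimal sketch, for instance (the variable names follow the description above; the numbers are illustrative, not real NT counts):

```python
# Idealized threshold neuron: collect NT molecules until the threshold n
# is reached, set f (fire) from 0 to 1, release NTs, then reset f to 0.
class Neuron:
    def __init__(self, n):
        self.n = n          # NT molecules needed to reach threshold
        self.collected = 0  # NT molecules picked up at the synapse
        self.f = 0          # firing state: 0 = resting, 1 = firing

    def receive(self, nt_molecules):
        self.collected += nt_molecules
        if self.collected >= self.n:
            self.f = 1      # threshold reached: fire

    def step(self):
        """One time step: if firing, release NTs into the next synapse."""
        if self.f == 1:
            released = self.collected
            self.collected = 0  # molecules re-collected later
            self.f = 0          # f comes back down to 0
            return released
        return 0
```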

And that's extremely useful.

But it's an idealization.

The biological reality is messier, more fluid, more open, with all sorts of other variables around it, and there's reason to believe that it has to be that way in a real-world evolved biological system.
Still computable.

I wonder - I read a book that I now forget, by a prominent language theorist, using arguments much like this. What a terrible book, because he understood nothing about computability. I wonder if you have been influenced either by him or the field in general. Because you have said nothing that is not computable. "Messy" "fluid" "open" are all ill-defined words in the space of information theory. More importantly, nothing you are describing is uncomputable. The book talked about things like Excel, and how it was exact, created the same result every time, and imagine if your taxes were computed differently each time. Sure, but only because the algorithms chosen were for computing taxes. What a misunderstanding of computing - the same misunderstanding you are showing.

So we cannot be certain, and we have good reason to doubt, that neurons actually are purely computational components, even though it is useful to model them that way at this stage of our investigation of the brain.
Only if you don't understand information theory.

And as we scale up to less granular levels of organization, this same kind of fuzziness persists. In its real-time operations, the brain deals with all sorts of competing associative impulses, and very often makes mistakes by accepting the incorrect one (although even here computational models have proven useful, by describing the accepted association in terms of the number of "hits" -- in other words, the more numerous the associations, the more likely it is that the brain will choose that option, even if those associations have nothing to do with the task at hand).
Associations and mistakes are computable. Trivially so.
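
For instance, the "hits" model described above fits in a few lines -- a toy sketch with made-up data, mistakes included:

```python
from collections import Counter

# Toy "hits" model: the candidate with the most associative hits wins,
# even when those hits have nothing to do with the task at hand --
# a mistake produced by a perfectly ordinary computation.
def choose_association(hits):
    return Counter(hits).most_common(1)[0][0]

print(choose_association(["bird feeder", "phrase", "bird feeder", "wife"]))
# -> 'bird feeder', regardless of what the task actually needed
```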

This is why I have serious doubts that a cog brain would work like a neural brain. Cogs are quite rigid, and not very handy with the kind of threshold-based workings that we see in the brain. Maybe there's a way to reproduce this with cogs, though, I don't know. But I'd have to see it to accept that it's possible. Maybe, but maybe not.
There we go with rigid Excel! Computing is not rigid, except by choice.

Can all the workings of the brain be performed by a TM? Right now, there's no reason to accept the assertion, and some very compelling evidence to make us doubt that it will turn out to be the case.
No, you are presenting personal incredulity, based on a lack of understanding of the field, as 'reason'.

Everything is physics. Physics is computable. Every combination of computable elements is computable. Without a form of dualism, brains have to be computable. Information science 101 and physics 101.

So, after all that (and ignoring the pencil-brain thing, which has turned out be a red herring and now I see why) let's look again at the speed question.
I'm completely uninterested in the speed question, especially when discussed with such a basic misunderstanding of physics and computation.
 
Piggy, as an aside, I think there is a lot of misunderstanding going on because Paul, I and others are referring to concepts that we are very familiar with, and that many books have been written about. When Paul or I say "pencil brain" we know and understand the 100 implications we both mean by that. When Paul says slow the brain down, we understand that we are not talking about the implementation domain, where for a specific implementation you cannot run slower than the impulse speed and duration of the signal. After all, what a boring question to ask - for any given substrate of course there is a speed too slow and a speed too fast. Can you imagine starting a JREF post - can I rev my engine too fast? Well, yes! Duh! Or "Can I run my car engine at 0.000001 rpm" - NO! But we could make an engine to do that, if we wanted.

So from my perspective you are arguing extraordinarily strange things - like comparing a pencil brain to a fart. But then I have at least 10 books by leaders in the field under my belt on this one topic alone, and dozens more on computational theory and the like. I guess I can see where you are coming from if you don't recognize the referents, but on the other hand, recognize we are talking in professional shorthand. Pencil brain for us is a UTM. It's a useful thought experiment because it challenges preconceptions - "how could a pencil think" type feelings. Of course, we aren't saying the pencil thinks, but the system produced by the pencil. It gets right to the crux of the matter.

A physicist might say "acceleration times time is velocity" - in that statement is the assumption that we aren't at relativistic speeds, that we are dealing with macro objects where Heisenberg effects are below our measurement accuracy, all kinds of things that don't need to be explained. A literalist JREFer, fresh from reading a bit of Einstein for the first time, hopping into the conversation, would be sputtering "but relativity states....", etc.

There are genuine misunderstandings about computation in this thread as well, but a lot of the argument is of this nature.
 
Piggy, one more thing.

Let's forget about "pencil brain", since it is introducing so many spurious assumptions. Rest assured that when Paul or I say pencil brain all we mean is a form of computer that is functionally identical to any computer you can think of. Ditto "cog brain". This is based on Turing's work, which is probably one of the most important pieces of mathematics done in the 20th century, and extremely well vetted. If you can think of an objection, rest assured you are misunderstanding something.

So, instead of pencil brain, assume we were always talking about a 'big blue' level of computer on steroids. 10^20 processors running in parallel, with a clock speed 1 trillion times as fast as the current clock speed of big blue, with 10^32 words of memory available to each processor, with a complex, adaptable network connecting the processors that can be completely reconfigured in 1 clock cycle. Heck, assume asynchronous clock timing (each processor running on its own clock) if you want, which makes some computations easier, some harder. Each processor has 32 cores in it, all running in parallel. Equip every processor with a true random number generator. All this in 1 square inch. Etc. I assure you what we meant by 'pencil brain' can do everything and anything this super-big-blue can do, except of course slower. Slap that pencil brain in a relativistic capsule, and it'll keep up with big blue on any computation possible.

Next, assume that super-big-blue is in a robot body, connected to senses as complex as you like. Vision, tactile, heat sensors, whatever.

No homunculus in the machine, no human interpreting results, no "virtual" tiles being broken. If super-big-blue gets inputs about a tile, they came through that vision and tactile system, the actions it takes go back through the robot's body, and the tile "really" gets broken. (It really doesn't matter if this is simulated or real, but since you are hung up on that point, assume a real robot body.)

This is what we have been talking about all along, in the shorthand of 'pencil brain'. But since that is a sticking point for you, try super-big-blue-robot instead without worrying about Turing's math, or 'virtual' vs physical.
 
I'm going to do something I'm not proud of, and slag a book I haven't read. I read the reviews of this book when it came out (such as in the NYT Review of Books, which did several pages on it), and it just didn't thrill me. Admittedly, it touches on what we are talking about here, as he proceeds from the assumption that thought is nonalgorithmic, and thus not implementable by a UTM. Problem is, he does that without a shred of evidence, and goes on to speculate on a QM brain implementation, again without any evidence.

It'd be astonishingly interesting if it turned out the brain was nonalgorithmic, but I can't bring myself to read pure speculation, especially in the face of so much evidence that neurons and the networks they form are computable.

I do like Pinker (I went to see him speak recently, btw), and I'll definitely check out Damasio, whom I've never read.


I haven't read it either, but I do a fair bit of slagging myself. In particular, you can find a good number of professional criticisms of the logic he uses to "show" that human consciousness isn't Turing equivalent. I can explain it myself, but I am sure you will be able to find some yourself if you are interested -- just google "lucas-penrose criticism" lol.

EDIT: You will get more hits if you google "lucas penrose FALLACY" instead, lol. That should tell you something.
 
I'm not sure I really want to dive into the "pencil brain" thought experiment again, but if we do, I'd have to ask for a complete description of the hypothetical set-up.
We were talking about a hypothetical world, right? In this world, it is actually possible to use billions of pencils and sheets of paper, one for each neuron, with each neuron described in terms of when it fires, based on what input, and what output the firing results in.

Any brain input will have to be simulated too, as will any brain output. The simultaneous firing of some neurons can also be simulated.
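
To see what a single pencil-and-paper update step might look like, here it is in program form (the table entries are invented; on paper each row would just be a line in a ledger):

```python
# One neuron's rule, as a lookup table: given which of its two input
# neurons fired on the previous tick, does this neuron fire on this tick?
# (Entries are invented for illustration.)
rule_table = {
    (0, 0): 0,  # neither input fired -> stay silent
    (0, 1): 0,
    (1, 0): 0,
    (1, 1): 1,  # both inputs fired -> fire
}

def update(input_states):
    return rule_table[tuple(input_states)]

# Each simulated millisecond, the clerks look up every neuron's row,
# write down its new state, and move on to the next neuron.
print(update([1, 1]))  # -> 1
```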

This huge paper machine will in principle be able to simulate a brain complete with consciousness, but obviously each simulated millisecond will take a few centuries to finish in real time.

After some millions of years, enough simulation will have been performed for the paper machine to have experienced consciousness, but because this consciousness consists of apparently random firing patterns of neurons, nobody will be able to recognise it.

If anybody wanted to have a conversation with this paper machine, they would have to input the simulated aural signals into the appropriate neurons, and then wait some billions of years before the simulated neurons that govern the simulated speech system fire in the patterns that a normal brain would produce to make speech.

Also, if we're talking about simulating billions of neurons, what sort of simulation do we mean?

Are we talking about a virtual or actual simulation?
I can't believe you are asking this question - or I may have no idea what you are thinking about! Practically anything you do on a computer is a virtual simulation. A pencil/paper simulation could never be an actual simulation of anything that does not involve pencils and paper!

If the only observer to the actual simulation were a dog, the event would still have happened. There would still be one more busted tile in the universe.

If the only observer to the virtual simulation were a dog, then there wouldn't even be any simulation, when you get right down to it, just the physical and electronic actions of the machine.
Exactly. So why did you ask?

That's why I say roger is wrong to say that I'm somehow introducing dualism into the pencil scenario (as I understand it -- perhaps I was incorrect about the setup he proposed). As I envisioned that scenario, it was inherently dualistic. You actually do have the equivalent of a homunculus, a ghost in the machine -- that is, the man behind the pencil. In the human brain, or in a circuit brain, there is no homunculus.
It is essential for the simulation to be able to simulate every single element that is part of the consciousness in a real brain. Neurons are fairly simple as far as we know, and they are governed by simple rules, which is excellent for simulations. However, as long as we do not know for sure how exactly to achieve consciousness, we also cannot be sure that we have got all elements right. For this theoretical paper machine to be certain to work, all neurons will have to be simulated, and if neurons have more sophisticated functions than we know today, these would have to be simulated too. If there are other cells that have a function in consciousness, these too will have to be simulated.

Once we know exactly how to achieve consciousness, ie, we have a working CTM, then we may be able to reduce the number of elements, both in types and quantity, and this is the goal of CTM, because obviously, a paper simulation, or even a super fast complete computer simulation of a brain, is too impractical for us at this stage.

So if we get into the pencil brain thing, I think it would help to have a complete description of what's going on, and clarity regarding whether this is supposed to be an actual or virtual simulation.
We do not have to discuss the pencil brain thing. It really does not matter on what hardware the simulation runs. The important point is really that we are of course talking about a virtual simulation, and all attempts at simulating consciousness or intelligence have been virtual. One day we could probably do actual simulations on biologically simulated brains, but I think that virtual simulations are much easier to implement.

It is essential for our current understanding of consciousness that all elements that are part of consciousness are working according to a fixed set of rules. That means that the brain is a Turing Machine. If it turns out that neurons can change states in unpredictable ways, we might not be able to simulate a brain with paper machines or any other machine.

Interestingly, our brain is in fact changing states unpredictably because of damage from chemicals and cosmic rays, and some of the states that may have been caused randomly in this way could theoretically lead to new insights for that brain. However, as our present understanding goes, consciousness is not dependent on random influence and damage.
 
Yeah, and I use a calculator to do things I can't figure out on my own. And guess what? The calculator was programmed only by someone's conscious input. As is every program.

I can't see what difference there is between a global search heuristic program and any other program. The computers are doing what we consciously tell them to. Nothing more, nothing less.

You are doing what your DNA tells you to do. Nothing more, nothing less.

So... what was your point, again?
 
