
Has consciousness been fully explained?

Status
Not open for further replies.
I see little to distinguish between the words in this philosophical arena:

Implies

1. To involve by logical necessity; entail
 
1) Speaking to people in the 3rd person is bizarre.

2) You neglected to actually defend yourself against my claim regarding your motives. Does this mean your motives are what I claim they are?

It means I won't dignify it with a discussion. If my arguments are sound, then it doesn't matter what my motives are. If my arguments aren't sound, then it doesn't matter what my motives are.

3) I asked you how time dilation fits with your claim about "real" processes having "time dependence," and to this day you still haven't given an answer. Instead you just wave your hand and state that I don't understand relativity. lol

Time dilation and relativity simply have no relevance to whether something which controls a physical form needs to be time dependent. Either the controller and the thing controlled are in the same inertial frame (which is invariably the case) and relativistic effects don't apply, or else the controlling brain would need to allow for any such relativistic effects. It certainly doesn't imply in any way that a control mechanism such as a human brain is not time dependent. It's just a random comment on the nature of time.

I repeat the same point I've made numerous times. The functionality of a Turing Machine is not sufficient to perform real-time control and monitoring. That's a simple fact. It entirely demolishes the PM claim that a TM can do anything that a brain can do. It quite simply can't. And the red herring of the TM abstraction vs. the TM implementation is as irrelevant as the introduction of relativity. If we are modelling what is necessary to perform control and monitoring functions, then the Turing model is not adequate. That means that the essential function of the brain is not a Turing Machine implementation, and any assertion that implementing the appropriate Turing Machine is sufficient to produce consciousness should be viewed in that light.

It's important to note that different control systems have entirely different time dependencies. I've worked with systems that gather data at 50 MHz, and I've worked with systems that check inputs every few seconds. In either case, the timing element is critical - just as with the human brain.
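To make the timing point concrete, here's a minimal sketch of a fixed-rate control loop in Python. The sensor and actuator functions are placeholders, and the 50 Hz rate is illustrative - not taken from any system mentioned above:

```python
import time

SAMPLE_PERIOD = 0.02  # 50 Hz loop; the period is part of the spec, not a detail

def read_sensor():
    # Placeholder standing in for real hardware I/O.
    return 0.0

def actuate(value):
    # Placeholder actuator command.
    pass

def control_loop(cycles):
    """Run a fixed-rate loop; a missed deadline is a fault, not a detail."""
    missed = 0
    next_deadline = time.monotonic() + SAMPLE_PERIOD
    for _ in range(cycles):
        actuate(read_sensor())
        remaining = next_deadline - time.monotonic()
        if remaining < 0:
            missed += 1          # overran the deadline
        else:
            time.sleep(remaining)
        next_deadline += SAMPLE_PERIOD
    return missed
```

The point is that the deadline check is part of the program's correctness condition: the same sequence of sensor reads and actuator commands, delivered late, is a different (broken) behaviour.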

Actually, now I see why you'd rather talk about motives.
 
I see little to distinguish between the words in this philosophical arena:

Implies

1. To involve by logical necessity; entail


OK, I used the wrong word. Suggests, perhaps? Dualism is a likely explanation for someone expressing that sentiment; but not all who do so are dualists.
 
I agree with the argument. If you don't have a definition of SRIP that excludes things you don't consider conscious then it's useless as a definition/explanation of consciousness. The onus is on you to properly define the terms you propose being relevant.

Of course, most people would not consider electronic toasters or computers conscious, so even if you manage a necessary and sufficient definition that includes toasters, computers, certain other machines, and animals with brains, while excluding everything else, you will not have come up with a definition capturing what most people consider consciousness to be.

Just forget about SRIP and consciousness for a second.

Think only about the simpler abstractions that computer science uses. You know, things like state machines, basic operations, the fundamentals of computation, etc.

Can these describe any system in the universe? Perhaps.
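For instance, one of those simple abstractions - a finite state machine - can be written down in a few lines. The turnstile states and inputs here are a textbook illustration, not anything from this thread:

```python
# A minimal deterministic finite-state machine: a set of states, an input
# alphabet, and a transition table. Nothing more is needed.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run_fsm(start, inputs):
    """Feed a sequence of input symbols through the transition table."""
    state = start
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state
```

An abstraction this small places very few constraints on what could instantiate it - which is exactly what makes it so general.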

Now think about some of the more complex abstractions, for example a certain algorithm like Dijkstra's shortest paths or quicksort or whatever. Or, more pertinent, the working of an artificial neural network.

Can these describe any system in the universe? Absolutely not.

They can only describe a very small subset of systems that happen to satisfy the constraints required. The idea that some system in a plain old rock somehow satisfies all the constraints for it to be modeled as an artificial neural network is just wrong.

Do you disagree?

ETA-- let me put it another way.

I can take an abstract description of a certain algorithm and build up any number of actual physical systems that instantiate that algorithm. And in every case, as long as the systems satisfy the necessary constraints, the behavior of the systems is consistent with the predicted behavior from the abstract description.

Likewise, I can look for systems that satisfy the constraints. If I find one, it also will behave in a way consistent with the abstract description.
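A rough sketch of that idea: one abstract specification (a sorted permutation of the input) and two different concrete instantiations, both of which behave exactly as the abstract description predicts. The implementations are illustrative:

```python
# Two different instantiations of the same abstract description.
def quicksort(xs):
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def satisfies_spec(inp, out):
    # The abstract description's prediction: output is the sorted input.
    return sorted(inp) == out
```

Any system satisfying the constraints - whatever its internal workings - produces behaviour consistent with the one abstract description.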

This isn't limited to computer science; it applies to anything. If you find a system or build a system that satisfies the constraints for what we call "running," the system will propel itself forward on the ground. Plain and simple. Otherwise, it would simply not satisfy the constraints.

Likewise, if you build a system or find a system that satisfies the constraints for what we call "spatial and temporal summation" ( or whatever the agreed upon "thing" a neuron does is called ) it will do what a neuron does. Plain and simple. Otherwise, the system wouldn't satisfy the constraints.
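A toy illustration of spatial and temporal summation, assuming a simple leaky-integrator model - the weights, decay, and threshold are made-up values, not a claim about real neurons:

```python
# Spatial summation: weighted inputs combined at one instant.
# Temporal summation: a leaky "membrane potential" carried across time steps.
def simulate_neuron(input_steps, weights, decay=0.5, threshold=1.0):
    potential = 0.0
    spikes = []
    for inputs in input_steps:
        potential *= decay                                        # temporal: old charge leaks away
        potential += sum(w * x for w, x in zip(weights, inputs))  # spatial: sum weighted inputs
        if potential >= threshold:
            spikes.append(True)
            potential = 0.0                                       # reset after firing
        else:
            spikes.append(False)
    return spikes
```

Note how few physical systems would satisfy these constraints unmodified: the system must hold a decaying state across time and combine several simultaneous inputs - a rock does neither.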

The argument that any system can run, or that any system can do what a neuron does, is just wrong. So also is the argument that any system can instantiate all of the complex algorithms of computer science.

That just doesn't happen and, as has been said like 100 times, that is why Westprog is typing on a computer instead of a block of cheese.
 
Time dilation and relativity simply have no relevance to whether something which controls a physical form needs to be time dependent. Either the controller and the thing controlled are in the same inertial frame (which is invariably the case) and relativistic effects don't apply, or else the controlling brain would need to allow for any such relativistic effects. It certainly doesn't imply in any way that a control mechanism such as a human brain is not time dependent. It's just a random comment on the nature of time.

It isn't random.

Relativity only exists because what we call time is actually just the order of causal events. Without events, there is no time -- period. That is all time is -- a sequence of events.

Tell me again -- are "sequence" and "event" part of the "Turing" model? Eh?
 
It isn't random.

Relativity only exists because what we call time is actually just the order of causal events. Without events, there is no time -- period. That is all time is -- a sequence of events.

Tell me again -- are "sequence" and "event" part of the "Turing" model? Eh?

It's not an adequate part. And this can be shown very quickly in the description of a simple real time program, which will not conform to the Turing model.

There's no need for abstruse speculation about relativity. This is something known to people who actually program control systems. Programs have to be able to handle asynchronous events - that is, events where the sequence is not known in advance.
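A minimal sketch of what handling asynchronous events looks like in practice, using a thread-safe queue. The event names are hypothetical:

```python
import queue
import threading

# Events arrive in an order not known in advance; the control program
# must handle whichever arrives next, or notice that nothing arrived in time.
def producer(q, events):
    for name in events:
        q.put(name)

def handle_events(q, expected_count, timeout=1.0):
    handled = []
    while len(handled) < expected_count:
        try:
            handled.append(q.get(timeout=timeout))
        except queue.Empty:
            break   # no event arrived within the deadline; a real system might retry or alarm
    return handled

q = queue.Queue()
t = threading.Thread(target=producer, args=(q, ["sensor_a", "sensor_b"]))
t.start()
t.join()
handled = handle_events(q, 2)
```

The timeout on `get` is the crucial part: the program's behaviour depends not just on which events occur, but on when (or whether) they occur at all.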
 
OK, I used the wrong word. Suggests, perhaps? Dualism is a likely explanation for someone expressing that sentiment; but not all who do so are dualists.
Sounds fair. Thanks. :)

Earlier RD posted

"You have to be careful with "meaning" though because many people will take it out of context -- look how many people in this thread alone want to claim that "meaning" requires consciousness to begin with. Hello, circular logic!

People well versed in the mechanisms of natural selection can see how the meaning the activation of a neuron is objectively defined by how that neuron evolved. People without such imagination ... can't. For them, meaning is inherently linked with consciousness.

I wish there were better words for these things, since "meaning" and "purpose" and all the rest are so nested in our anthropomorphic tendencies."

You had earlier seemed to agree with the concept.

If I parse the sentence I italicized, "People well versed in the mechanisms of natural selection can see how the meaning [of] 'the activation of a neuron' is objectively defined by how that neuron evolved.", does that seem to be the intent?

If my parsing is correct, I'd note that 'natural selection' is a meaningless term unless a lifeform capable of faulty reproduction is available for selection to act on. If so, consciousness (assuming life is conscious) is required for "meaning". Or have I missed the point?
 
What is the argument?

I don't get what the "computationalist" position is other than "consciousness is SRIP" and "a simulation of a conscious entity would actually have consciousness". Neither of these seem to have been supported.
Then you haven't been paying attention. The former is definitive, though possibly incomplete (that's a question of semantics); the latter is established beyond rational dispute through mathematics and mathematical physics.
 
It's not an adequate part. And this can be shown very quickly in the description of a simple real time program, which will not conform to the Turing model.
Mathematical proof, please.

There's no need for abstruse speculation about relativity. This is something known to people who actually program control systems. Programs have to be able to handle asynchronous events - that is, events where the sequence is not known in advance.
Yes, we know that. What you are claiming is that this cannot be mapped to a Universal Turing Machine. You need to actually show this, rather than merely assert it.
 
Simulated water would not be wet in this world and a simulated orange would not be a 'real' orange. Within the simulation, however, the water would have all the properties that water has in this world, and the orange in the simulation would be the same in the simulation as an orange in this world. A simulation of a rolling ball would not be a ball rolling in 'reality' but the 'rolling' would be identical -- only the ball would not be real. Simulated action is still action, as long as it occurs for the same reasons (in other words, as long as it isn't just some code saying 'put a pixel here' and changing the pixels to make it look like a ball is rolling on a screen). The action of rolling or the action of consciousness still occurs in the simulation just as it does in the real world. I don't see a way of separating the two actions (except for where they occur). There is nothing special about consciousness in this regard. The difference, at least to my mind, is between objects and actions. Objects can only be isomorphic, but I think actions should be identical because they only consist in a relation of parts.

You don't see the problem? If simulated "water" isn't really water, and simulated "wet" isn't really wet, then simulated "consciousness" isn't...
 
You don't see the problem? If simulated "water" isn't really water, and simulated "wet" isn't really wet, then simulated "consciousness"...
... is really consciousness.

What's the alternative, Malerin?

Dualism?

Magic?

Or just flat out logical fallacies in your argument?

What do you want to say here?
 
Simulated water would not be wet in this world and a simulated orange would not be a 'real' orange. Within the simulation, however, the water would have all the properties that water has in this world and the orange in the simulation would be the same in the simulation as an orange in this world.

But I think we agreed there is no actual simulation world or any other world outside "this world" that we know of. We can speak of one that is constructed from the abstraction of our own conceptualization. As you said:

"Well, isn't the whole 'simulation' thing a bit of a polite fiction anyway? I am no computer whiz -- I built a couple of them and took one programming course in Pascal many, many years ago -- but isn't programming just a top-down way of getting the electrons in the machine to go where we want them to?"

"[...] The simulation is the process of electrons moving around through gates, but we can talk about that process as "another world" just as we can talk about a simulated orange as an orange. It isn't really an orange; it is really a process, an action. [...]"

So it should be fair to say that if simulated water would not be wet "in this world", that is the same as saying that a simulation of water would technically not be wet or produce wetness and a simulated orange would technically not be an orange or produce an orange.

A simulation of a rolling ball would not be a ball rolling in 'reality' but the 'rolling' would be identical -- only the ball would not be real.

How could the rolling be "real" if the ball isn't real? The computer chips aren't rolling. And the ball can't be rolling since it isn't real.

Would we really say "the simulation of the ball is rolling" as opposed to "the computer is simulating a ball rolling"?

Also, is it really warranted to label consciousness as an action rather than a property? I don't think it's well enough understood at a physical level to make such an assumption.
 
... is really consciousness.

What's the alternative, Malerin?

Dualism?

Magic?

Or just flat out logical fallacies in your argument?

What do you want to say here?

If simulated water isn't really wet then what's the alternative, PixyMisa?

Dualism?

Magic?

To make the case that there is any logical fallacy at play, you need to explain why one is true but not the other.
 
But I think we agreed there is no actual simulation world or any other world outside "this world" that we know of.
I don't know if you agreed to that, but it's certainly not true.

So it should be fair to say that if simulated water would not be wet "in this world", that is the same as saying that a simulation of water would technically not be wet or produce wetness and a simulated orange would technically not be an orange or produce an orange.
Fair? Don't know about fair. Incorrect, yes.

Also, is it really warranted to label consciousness as an action rather than a property?
A process, rather than an action, I'd say. But that's semantic nitpickery.

It's certainly known that it's not a property. I don't see how describing it as a property could even be meaningful.

I don't think it's well enough understood at a physical level to make such an assumption.
Assumption? Minds are brain activity, cornsail. There's no assumption there.
 
If simulated water isn't really wet then what's the alternative, PixyMisa?
Simulated water is really wet in the simulation.

You're making a category error.

To make the case that there is any logical fallacy at play, you need to explain why one is true but not the other.
Already have. Already have dozens - actually, hundreds - of times. No coherent counter-argument has been provided at any point.

You can't interact with simulated water in all the ways you can interact with water in your world.

You can interact with a simulated mind in all the ways you can interact with a mind in your world.
 
Actually, now I see why you'd rather talk about motives.

Yip and Freud also had motives that turned out wrong.

Artificial Intelligence enthusiasts like to characterize their opponents as inventing a problem of consciousness where there needn't be one in order to preserve a special place for people in the universe. They often invoke the shameful history of hostile receptions to Galileo and Darwin in order to dramatize their plight as shunned visionaries. In their view, AI is resisted only because it threatens humanity's desire to be special in the same way the ideas of these hallowed scientists once did. This "spin" on opponents was first invented, with heroic immodesty, by Freud. While Freud was undeniably a decisive, original thinker, his ideas have not held up as well as Darwin's or Galileo's. In retrospect he doesn't seem to have been a particularly objective scientist, if he was a scientist at all. It's hard not to wonder if his self-inflation contributed to his failings.

Machine consciousness believers should take Freud's case as a cautionary tale. Believing in Freud profoundly changed generations of doctors, educators, artists, and parents. Similarly, belief in the possibility of AI is beginning to change present day practices both in areas I have touched on- software engineering, education, and military planning- and in many other fields, including aspects of biology, economics, and social policy. The idea of AI is already changing the world, and it is important for everyone who is influenced by it to realize that its foundations are every bit as subjective and elusive as those of non-believers.
http://www.jaronlanier.com/aichapter.html
 
I don't know if you agreed to that, but it's certainly not true.

How can something exist not in the world? Everything a computer simulation "does" can be described in terms of real physical activity inside the computer. There is no separate world created by the simulation.
 