
Has consciousness been fully explained?

Status
Not open for further replies.
No, it's not... possible? As before, I am at a loss to understand what you are trying to say.

Impossible for you to conjecture that you're in a simulation? But that's exactly what you are doing.

If your post is intended to prove a conclusion you are merely begging the question.

OK, got conjecture mixed up with conclude. Yes, it's possible to conjecture, impossible to conclude.
 
Okay, but that doesn't seem to have anything to do with my question. I'm not talking about a model of computing, just as I'm not typing this post on a keyboard connected to a model of computing. I'm talking about computers -- you know, those boxes with circuit boards covered with chips inside, like the ones you can buy at Staples or the big fancy ones they build for special IBM projects like Watson.

It's been asserted repeatedly in this thread that a computer by itself, no matter how powerful its computing capabilities and no matter how it is programmed, is insufficient to generate consciousness, just as it is insufficient to permit the computer to really walk or control a real power plant or play music. But we know that a computer can do all of those things if it is sufficiently powerful, suitably programmed, and connected to the necessary additional equipment. So my question is, what additional equipment must be attached to a sufficiently powerful and suitably programmed computer to generate consciousness? And if the answer is that we do not or cannot know, then what rationale do we have for assuming that any additional equipment is needed?

Respectfully,
Myriad

It's also been repeatedly said, by the people opposing the Strong AI viewpoint, that there's a huge difference between a computer running a program to process data, and a robot which can operate in real time. In spite of RD's claims that the two models are equivalent, they are, in fact, very different. The kind of program needed to control a robot (and it need not be a humaniform robot) is not at all the same as the kind of program needed to simulate a robot or person.
 
And this....

I have never said any such thing and in fact have maintained precisely the opposite from the get-go.

You're free to continue to claim otherwise, but this is precisely what makes meaningful participation in this thread impossible.

No, your posts continue to be meaningful, and it's quite clear that you've not said anything remotely like what Pixy has claimed. While it might seem frustrating, there is quite a bit of progress being made among the people who haven't made up their minds in advance.
 
One last post because I do need to clear this up....

Your conclusions are incorrect, because driving also requires an apparatus with a proper configuration of parts and the right fuel -- which was the analogy I was making -- but this does not make driving a substance.

Which should have been clear all along.

Sorry, then. It's just that what you claim to be saying and what you seem to be saying are different, from my point of view.

Adios, folks, time to start a biology-based thread and be done with the nonsense.

Hopefully you don't think that only biological 'machines' can be conscious. Otherwise you'll have to justify that. After all, legs can be made of pretty much any substance. There's that word, again.
 

Yes, it has. That's the primary contention of the computational viewpoint. In particular, Pixy and RD have insisted that the Turing model is sufficient to explain consciousness. Pixy has said that the Church-Turing thesis proves that a Turing machine can be conscious. RD has claimed that because events happen sequentially, the Turing analysis is effectively equivalent to a real-time computer model.
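For readers unfamiliar with the model being argued over: a Turing machine is nothing more than a tape, a read/write head, and a state-transition table. A minimal sketch (a hypothetical toy, not anything Pixy or RD posted) that flips every bit of a binary string looks like this:

```python
# Minimal Turing machine: flips every bit on the tape, then halts.
# Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move)
RULES = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("halt", "_", 0),   # '_' is the blank symbol
}

def run(tape_str):
    tape = dict(enumerate(tape_str))   # sparse tape; unwritten cells are blank
    state, head = "flip", 0
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(run("0110"))  # -> 1001
```

Whether such a device (or anything computationally equivalent to it) could be conscious is of course exactly what this thread disputes; the sketch only shows what the formal model itself consists of.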

If someone is contending that performing computations, in and of itself, is sufficient to produce consciousness, then I strongly disagree with that. That is not the same thing as a claim that some kind of artificial consciousness is possible -- a far weaker claim, and one that I wouldn't especially quarrel with.

Misrepresentation of this viewpoint is so routine on this thread that I find it necessary to restate the above almost continuously. Perhaps there should be a FAQ.
 
I was expecting an answer to my post, but it seems dodging was all you could manage.

You seemed to be claiming that what happens "in a simulation" had something to do with relativity. I don't know what that might be. I regard "in the simulation" as being a fiction. I don't think that there are any people "in the simulation" experiencing anything. I don't concern myself with the simulated consciousness of a "person" in a simulation, any more than I concern myself with the feelings of Elizabeth Bennet if I fail to finish Pride And Prejudice.
 
laca said:
No, it's not... possible? As before, I am at a loss to understand what you are trying to say.

Impossible for you to conjecture that you're in a simulation? But that's exactly what you are doing.

If your post is intended to prove a conclusion you are merely begging the question.

OK, got conjecture mixed up with conclude. Yes, it's possible to conjecture, impossible to conclude.


The doctors are reassured now that you find it impossible to conclude you're in a simulation :D
 
Reposting this from the bottom of the previous page for the benefit of rocketdodger.

Frank all of your arguments remind me of something like "a chicken has 2 legs, and an ostrich has 2 legs, so if you add a chicken and an ostrich, you get a horse, since a horse has 4 legs."

This is a result of too much philosophy, I suspect, and not enough fundamentals. Most philosophy breaks down if you try to reduce it to fundamentals -- including yours. You can't just expect well formed sentences to actually make sense in terms of the real world without actually defining and understanding the meaning of the words in those sentences (well, you can, but that would be philosophy ... ).

In particular, it is clear that anyone referencing the external world vs. the simulated world we "might" be in is merely projecting their understanding of simulations within our own frame outward. Kind of like if you didn't have a mirror you would need to look at the back of someone else's head and just assume "the back of my head might look like that."

So no, there is no reference to the external world. Just like looking at someone else's head, and imagining what yours might look like, is not a true reference to the back of your own head. What you have in your mind is merely a reference to what you think might be the back of your head.
 
It's also been repeatedly said, by the people opposing the Strong AI viewpoint, that there's a huge difference between a computer running a program to process data, and a robot which can operate in real time. In spite of RD's claims that the two models are equivalent, they are, in fact, very different. The kind of program needed to control a robot (and it need not be a humaniform robot) is not at all the same as the kind of program needed to simulate a robot or person.

Can you show us a single reference to a "robot that can operate in real time" that is not controlled by "a computer running a program to process data?"

Really -- just a single reference?

Can you even begin to explain how such a thing might be engineered?

Or are you thinking of those magical clockwork steampunk robots from fantasy movies like "Labyrinth?" Didn't you know, westprog? ... those aren't real.
 
Frank all of your arguments remind me of something like "a chicken has 2 legs, and an ostrich has 2 legs, so if you add a chicken and an ostrich, you get a horse, since a horse has 4 legs."

This is a result of too much philosophy, I suspect, and not enough fundamentals. Most philosophy breaks down if you try to reduce it to fundamentals -- including yours. You can't just expect well formed sentences to actually make sense in terms of the real world without actually defining and understanding the meaning of the words in those sentences (well, you can, but that would be philosophy ... ).

In particular, it is clear that anyone referencing the external world vs. the simulated world we "might" be in is merely projecting their understanding of simulations within our own frame outward. Kind of like if you didn't have a mirror you would need to look at the back of someone else's head and just assume "the back of my head might look like that."

So no, there is no reference to the external world. Just like looking at someone else's head, and imagining what yours might look like, is not a true reference to the back of your own head. What you have in your mind is merely a reference to what you think might be the back of your head.


Though you might be a mind reader, I am not.

Not sure what you are trying to say except that there is no reference to the actual external world in your hypothetical simulation then. Pretty much what I suggested you try.
 
Though you might be a mind reader, I am not.

Not sure what you are trying to say except that there is no reference to the actual external world in your hypothetical simulation then. Pretty much what I suggested you try.

No, you specifically said
If you call it a computer program feature you acknowledge that computer program feature within the external world and destroy your own argument.

I am showing you that this option is incorrect -- even if you call it a computer program feature you are not acknowledging anything about the true external world. Even my phrase "true external world" doesn't reference what you seem to think it does.

Thus the argument is not destroyed if we choose to call our coffee cups "computer program features."
 
Yes, it has. That's the primary contention of the computational viewpoint. In particular, Pixy and RD have insisted that the Turing model is sufficient to explain consciousness. Pixy has said that the Church-Turing thesis proves that a Turing machine can be conscious.

That's an odd thing for Pixy to have said, then, because Turing machines don't exist.

You seemed to be claiming that what happens "in a simulation" had something to do with relativity.

No, I was using the example of relativity to illustrate the importance of context.
 
rocketdodger said:
Though you might be a mind reader, I am not.

Not sure what you are trying to say except that there is no reference to the actual external world in your hypothetical simulation then. Pretty much what I suggested you try.

No, you specifically said
If you call it a computer program feature you acknowledge that computer program feature within the external world and destroy your own argument.

I am showing you that this option is incorrect -- even if you call it a computer program feature you are not acknowledging anything about the true external world. Even my phrase "true external world" doesn't reference what you seem to think it does.

Thus the argument is not destroyed if we choose to call our coffee cups "computer program features."


That argument assumes the simulation and external reality to be compatible with reference to external world objects such as coffee cups.

As I said immediately afterward: If you want to make your simulation argument successful it must be incompatible with real external-world propositions.

The question of whether we are in a simulation or the external world necessarily acknowledges the external world. That is a priori knowledge, having nothing to do with the senses.

Since we already know we might be in a simulation or we might be in the external world I ask you: what about that cup of coffee over there... computer program feature of the simulation or an external world cup of coffee?

If you call it a computer program feature you acknowledge that computer program feature within the external world and destroy your own argument. If you call it a cup of coffee you have no argument at all.

Both alternatives take place in the external world.

If you want to make your simulation argument successful it must be incompatible with real external-world propositions. Good luck with that.


My bolding.

Are we still in agreement that your hypothetical simulation must have nothing to do with real external reality?
 
Are we still in agreement that your hypothetical simulation must have nothing to do with real external reality?

No, not at all. That is your contention alone, and it has no basis in actual logic.

The only requirement is that an intelligence limited to the simulation does not have access to information that would enable it to completely determine that it was limited to the simulation.

The simulation itself might have very much to do with external reality. For example, any simulation we run within our own frame is done on hardware in our frame, and so the simulation is at least limited to what our hardware is capable of simulating. And if there is a bug or a glitch, any inhabitants of the simulation would notice it.

But they would not be able to determine with 100% confidence that the happening was due to them being in a simulation.
 
It's also been repeatedly said, by the people opposing the Strong AI viewpoint, that there's a huge difference between a computer running a program to process data, and a robot which can operate in real time. In spite of RD's claims that the two models are equivalent, they are, in fact, very different.
No, they're mathematically identical. You're simply confused.
 
rocketdodger said:
Are we still in agreement that your hypothetical simulation must have nothing to do with real external reality?

No, not at all. That is your contention alone, and it has no basis in actual logic.

The only requirement is that an intelligence limited to the simulation does not have access to information that would enable it to completely determine that it was limited to the simulation.

The simulation itself might have very much to do with external reality. For example, any simulation we run within our own frame is done on hardware in our frame, and so the simulation is at least limited to what our hardware is capable of simulating. And if there is a bug or a glitch, any inhabitants of the simulation would notice it.

But they would not be able to determine with 100% confidence that the happening was due to them being in a simulation.


How does an entity in a simulation obtain the knowledge that it might be in a simulation?
 
That's an odd thing for Pixy to have said, then, because Turing machines don't exist.


I'll let Pixy explain whether he thinks Turing machines actually exist, and if not, how Church-Turing is relevant. But he's said things a lot odder than that.

No, I was using the example of relativity to illustrate the importance of context.
 
I'm just catching up with the thread, so pardon me if any of this has been said already...

I have been trying to figure out just what Piggy's problem with simulating consciousness was - after all, he believes that we could, in principle, create an artificial consciousness. Superficially, it seemed to come down to the semantics of 'simulation' (which is really just a reflection that we don't have consensus about the appropriate language to use in this context). But this led to what I believe is the real issue - the independence of computational processes and their results from the reality or simulation level in which they are run.

As Piggy said, if you simulate a power station, you don't get real power. If you simulate driving, you don't really get from A to B.

Also, if you simulate a nuclear explosion you don't really blow anything up.

This is obviously because the effects are tied to the simulation level - in the simulation (assuming it is detailed and broad enough), the power station makes simulated power that can run simulated appliances, driving gets a simulated driver from sim place A to sim place B, nuclear explosions blow simulated stuff up.

But the US military spends billions on simulating nuclear explosions that don't blow up anything real because, computationally, the behaviour is the same (depending on the accuracy of the simulation). They get information that is relevant to this reality; the computational effects are similar to the measured effects of real nuclear explosions. Computational processes and their results are platform independent; they're independent of the computing 'substrate', be it reality or simulation.

So if consciousness is a computational process, the same rationale should, in principle, apply.

Chess programs with suitable I/O can play humans, or with a suitable environment (management program) they can play each other in a computer. I have software that provides an environment for several chess programs to play a tournament in my computer, and the results (the recorded moves & scores) are almost indistinguishable from the results of a chess tournament played by people in the real world. Is my computer simulating a chess tournament between chess programs or holding a real tournament between chess programs?
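The tournament-manager setup described above can be sketched in miniature. Real chess engines are far too heavy for a few lines, so this toy (entirely hypothetical, using single-pile Nim instead of chess) shows the same structure: two trivial "player" programs compete under a manager program that enforces the rules and records the moves, just as the chess software records moves and scores.

```python
# Two toy "programs" play single-pile Nim (take 1-3 stones per turn;
# whoever takes the last stone wins) under a manager that enforces the
# rules and logs every move, like a chess tournament manager.

def optimal(pile):
    # Winning strategy: leave the opponent a multiple of 4 when possible.
    return pile % 4 or 1

def timid(pile):
    # A weak strategy: always take a single stone.
    return 1

def tournament(pile, players):
    log, turn = [], 0
    while pile > 0:
        name, strategy = players[turn % 2]
        take = strategy(pile)
        assert 1 <= take <= min(3, pile), "illegal move"
        pile -= take
        log.append((name, take, pile))      # (player, stones taken, remaining)
        turn += 1
    winner = log[-1][0]                     # taker of the last stone wins
    return winner, log

winner, log = tournament(10, [("optimal", optimal), ("timid", timid)])
print(winner)  # -> optimal
```

The recorded log is the same kind of artefact as the recorded games of the chess tournament: moves and a result, indistinguishable in form from a match played by people.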

Suppose we were able to replace the tournament manager software with a computational model of a simple environment (e.g. a fish tank), and replace the chess programs with computational models of bodies (e.g. fish) and brains (e.g. modelling the goldfish brain, if not exactly neuron for neuron, then architecturally and functionally); and then add monitoring software to allow us to track what the models do. If those simulated goldfish were to behave in the simulation just like real goldfish because their model brains were performing similar computational functions, could we not say that our model goldfish were thinking in a similar way to real goldfish?
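The architecture just described -- an environment model, body/brain models, and monitoring software -- reduces to a standard simulation loop. Here is a deliberately crude sketch (nothing remotely like a real neural model; every name and number is made up for illustration): each "fish" senses the signed distance to the nearest food, its "brain" maps that sense to a movement, the body acts, and a monitor logs everything.

```python
# A crude agent-environment loop: a 1-D "tank" holds food pellets and
# model fish. Each tick: sense -> brain decides -> body acts -> monitor logs.

def brain(sense):
    # Trivial "brain": swim one step toward the nearest food.
    # sense is the signed distance to that food (0 means on top of it).
    if sense == 0:
        return 0
    return 1 if sense > 0 else -1

def simulate(fish, food, ticks):
    log = []
    for t in range(ticks):
        for i, pos in enumerate(fish):
            if not food:
                break                             # nothing left to seek
            nearest = min(food, key=lambda f: abs(f - pos))
            fish[i] = pos + brain(nearest - pos)  # the body acts
            if fish[i] in food:
                food.remove(fish[i])              # the fish "eats"
            log.append((t, i, fish[i]))           # monitoring record
    return fish, food, log

fish, food, log = simulate(fish=[0, 10], food=[3, 8], ticks=6)
print(food)  # -> [] (both pellets eaten within 6 ticks)
```

The open question of the thread is, of course, whether anything along these lines -- scaled up to a brain-like architecture -- would merely depict seeking behaviour or actually instantiate it.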

And if we were to upgrade the environment model and the body and brain models, so that the brain models were architecturally and functionally similar to human brains, and they interacted with their simulated bodies and environment as if they were conscious because their model brains were performing similar computational functions to real brains, we could say that they appeared to be thinking in a similar way to real humans. Would we then be justified in saying that therefore they really were conscious?

Forgetting the semantics of 'simulation', if a running chess program is really playing chess, is a running brain program really thinking? really conscious?

If not, why not?
 
