Explain consciousness to the layman.

People are machines. Reasoning: Since Pixy can use the same word to describe a person as Pixy can use to describe an automaton, it is quite reasonable to conclude that people and automatons are equivalent, if not identical (are they identical, Pixy?)

But hang on, if there is one thing that has been quite conclusively established, it is that the scientific community currently engaged in the study of ‘people’ is all but unanimously resolved that we neither know what a person is nor how a person works.

…but Pixy does. A person is a machine (‘rocket science’, it's called), therefore there can be no issue in maintaining that a person shares some fundamental fraternity with the very same laptop on which they play Donkey Kong!

Wikipedia: under ‘machine’ we have ten trillion items, one of which is the ‘entity’ known as the human being… therefore every one of those ten trillion items must be the very same thing.

A conclusion of truly epic proportions.
It all ties back to the logical choice of ontology, and Pixy et al. are logical; ergo, life, people, and brains can be nothing but machines. I keep waiting for Pixy to declare a computer program "alive".

The magic beans include the concept of god, which requires either idealism or illogical dualism as a choice of ontology. Damn shame idealism can't be logically defended, although kicking a rock by no means defeated or defeats Berkeley's thesis.
 
Where is the leap from machine into being?

Well, for instance, consider how many sensory neurons we have.

It may seem like a stretch of the imagination to consider a machine with a 100x100 pixel camera, a few mechanical sensors, some touch sensors, etc., as being conscious, even if its computer brain processed the information just like ours does.

But is it such a stretch if you consider a machine with, say, 500,000 x 500,000 pixel camera inputs in a binocular arrangement, millions of mechanical and touch sensors, audio sensors capable of distinguishing thousands of distinct frequencies, chemical sensors that can detect millions of molecules in any substance, including the air around it, etc.?
 
Surely not; that would be like saying that vision is a threshold of complexity. You can't get something into an intelligent computer just by throwing complexity at the problem--you have to actually implement something.

A conscious mind requires specific implementation details to produce a sense of agency. If you could produce an intelligent computer with consciousness, this requirement wouldn't change--it would have to specifically implement the sense of agency before it could have a sense of agency.

It may not be a threshold of complexity in a fundamental sense, but I fully believe it is a threshold in complexity in a pragmatic sense.

The fact is, most people won't accept that a machine is conscious until it operates at a complexity level similar to that of humans. Yes, the smart programmers understand that really it is a few fundamental patterns of information flow, and that if you reduced the level of complexity by orders of magnitude it would be pretty much the same, but that won't convince anyone considering an investment in your android factory.
 
It may not be a threshold of complexity in a fundamental sense, but I fully believe it is a threshold in complexity in a pragmatic sense.
I'm not sure what you're saying. You need to have the thing have agency first, then you need to have the thing have a sense of it. Those are specific features, not merely thresholds. We can have an arbitrarily complex machine that lacks these features. Complexity isn't even the issue--implementation of those features is.
The fact is, most people won't accept that a machine is conscious until it operates at a complexity level similar to that of humans.
But punshhh was speaking specifically of the sense of being. Sure, he's talking about consciousness, but he mentioned that being is his "magic bean", and asked if it was merely a level of complexity. And the way he describes being, it seems to me he's describing agency.

The answer is, no. It's a matter of specific implementation.
 
The human brain is only "inextricably linked" to the pieces of the environment it is "inextricably linked" to; it is isolated from the pieces of the environment it is isolated from. Neutrinos don't have much of an effect on my brain processing, for example.

That's what determines an environment - the things it interacts with.

In nearly any system, there are pieces of the environment that affect it greatly, and pieces that don't have so much of an effect. You talk about this as if the human sensory apparatus glues us into everything going on around us.


I might "talk about this as if" but I said no such thing. What I do say is that the ability to extract information about the environment is a critical aspect of the brain.

Actually, no, that's not "the point" about a computation. The point of a computation is that it is in itself a process. Ideally you would want to isolate the computation from outside processes that may interfere with results, yes. But the isolation isn't the point of it--the things it does are.

I'm quite willing to allow that a computation is a process, and that what the brain does is also a process. My point is that they are very different processes.

Quite the opposite. A computational model is an environment. The thing you isolate the computational system with is indeed irrelevant, but that means that you can ignore that piece. And there's something left--the environment that is part of the computation.

Yes, an entirely closed environment. While the human brain records temperature, wind speed, rain and as much information about its surroundings as possible, a computation is deliberately shielded from all this in order to function.

And that is relevant. Indeed, it is the point. But that is what you are hand waving away as irrelevant.

The simulated brain is interacting with a simulated environment; and, both are processes (if they weren't, there'd be no computation--see above). So there's no "major, significant differences" in this regard. Just as my brain is affected by red photons, the simulated brain can be affected by simulated red photons. And just as my brain isn't so much affected by neutrinos, the simulation can leave out neutrinos and be fairly accurate.

The distinction between "brain" and "environment" is entirely artificial. It's all just part of the computation.

But all this means is that none of the points you raised are valid. It doesn't automatically mean the computational nature of the brain is just to be accepted. It simply means you didn't make any valid points.

Again it comes down to accepting the world of the simulation as being an actual world, with the simulated red photons being equivalent to actual red photons.
 
No, the two are fundamentally different.

This has been repeatedly asserted, but not demonstrated. What is the fundamental difference - a difference that encompasses all possible computer simulations, books and films?
 
Simulations actually run.

What does that mean? Films "run". Do you mean that there has to be a time ordering of the different relationships between the components, rather than some other ordering? That seems quite arbitrary, but in any case, doesn't exclude other representations.
 
I'm not sure what you're saying. You need to have the thing have agency first, then you need to have the thing have a sense of it. Those are specific features, not merely thresholds. We can have an arbitrarily complex machine that lacks these features. Complexity isn't even the issue--implementation of those features is.
But punshhh was speaking specifically of the sense of being. Sure, he's talking about consciousness, but he mentioned that being is his "magic bean", and asked if it was merely a level of complexity. And the way he describes being, it seems to me he's describing agency.

The answer is, no. It's a matter of specific implementation.

But it doesn't matter if you have the specific implementation and nobody believes you because it is so drastically different from what they think of as consciousness.

And the only way you get that is to have 1) the specific implementation with 2) a ton of complexity.

It doesn't matter to anyone, not even me (and this is my profession), if a machine has agency and a sense of its agency yet the machine is so simple as to be useless as a conscious entity. Below some complexity threshold nobody cares if a machine is conscious or not.
 
The distinction between "brain" and "environment" is entirely artificial. It's all just part of the computation.

This isn't true.

The distinction is based on the behavior of the hardware while running the software.

There is certainly a difference in that behavior between the simulated brain and the simulated environment, whether you want to admit it or not. That's why when you play a video game you see actual stuff, rather than random snow on the screen. The differences between those bits of information aren't "artificial."
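Here is a toy sketch of that distinction (all names and dynamics below are invented purely for illustration, not anyone's actual code): the "brain" and the "environment" both live inside one computation, yet they are picked out by what each piece of state does, not by an arbitrary label.

```python
# Toy sketch (invented for illustration): one computation containing two
# behaviourally distinct parts. The "brain" and the "environment" are both
# just data being updated by the same program, but they are distinguished
# by what each part does.

class Environment:
    def __init__(self):
        self.temperature = 20.0

    def step(self):
        # the environment evolves according to its own dynamics
        self.temperature += 0.1
        return {"temperature": self.temperature}


class Brain:
    def __init__(self):
        self.memory = []

    def step(self, percept):
        # the brain's state changes as a function of what it senses,
        # and it produces behaviour in response
        self.memory.append(percept["temperature"])
        return "seek shade" if percept["temperature"] > 25 else "stay put"


env, brain = Environment(), Brain()
for _ in range(100):
    percept = env.step()           # environment produces sensory input
    action = brain.step(percept)   # brain turns it into behaviour
```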
 
That's what determines an environment - the things it interacts with.
Ergo, your point disappears, because a simulated environment would be a bona fide environment.
I might "talk about this as if" but I said no such thing. What I do say is that the ability to extract information about the environment is a critical aspect of the brain.
So a simulated brain in a simulated environment has an environment. Incidentally, that's completely arbitrary as well--a simulated brain could use the environment we're in, or a simulated one; and we ourselves the same. There's no fundamental difference to be found here.
I'm quite willing to allow that a computation is a process, and that what the brain does is also a process. My point is that they are very different processes.
If that's your point, I'm lost. It appears to me you're jumping around. You were just discussing how the brain was inextricably linked to its environment. I had just emphasized that the computational environment is a bona fide environment for a simulated brain to be linked to. And now you're comparing the brain to the computational brain. Are we talking about the environment or the brain?
Yes, an entirely closed environment. While the human brain records temperature, wind speed, rain and as much information about its surroundings as possible, a computation is deliberately shielded from all this in order to function.
But you're just comparing random things to random things. We have two pairs of entities to be concerned about--a human and a simulated human; and, an environment and a simulated environment. We could talk about all four combinations of these; the fact that we can put a simulated human in a simulated environment simply follows from the fact that it is one of the four combinations.

Now, when you use a simulated environment, you would normally like to isolate the effects of the external environment, so that you can ensure that the way the entity interacts with its environment is a result of the simulated environment. This is just as true when you use a human as it is when you use a simulated human.

In the scenario I gave earlier, I suggested being suspended in a sensory deprivation tank with VR goggles, a headset, and a hand control. The sensory deprivation tank is specifically there to isolate me from the environment other than the virtual one.

Now if you want to talk about the depth of environmental discovery humankind has reached, that's an interesting thing, but it has nothing to do with consciousness. The kid who never left his mother's basement is equally conscious.
The distinction between "brain" and "environment" is entirely artificial. It's all just part of the computation.
It can be made arbitrarily distinct. Nothing says you need to use the same platform, the same sort of symbol manipulation, and so on for both sides; or even that you use symbol manipulation at all on one or both sides (analog computation is still in play). But even if it's not distinct, the separation between simulated brain and simulated environment is no more arbitrary than the separation between physical brain and physical environment.
Again it comes down to accepting the world of the simulation as being an actual world, with the simulated red photons being equivalent to actual red photons.
And you're compelled to accept this by definition. If there's no actual world, there is no simulation. If there's no context whereby an actual red photon is the same as a simulated one, then you're not simulating a red photon; if there is a context, then there is an equivalence relation. What exactly is there to "accept"?

And I'm not trying to just prove this by definition. It's perfectly possible to simply not have a simulation, and to simply not have a simulated red photon. Given that you do have a simulation, though, and you do have a simulated red photon, you ipso facto have a real thing and an equivalence.
 
What does that mean? Films "run".
A film runs by revealing a sequence of pictures. The content of those pictures does not have a causal relation established by the presentation of them; the only causal relation within the projection of a film is that you're going to be presented with what is on the next frame at a particular time, regardless of what is on it.

The simulation that runs, on the other hand, is generating the outcome.
Do you mean that there has to be a time ordering of the different relationships between the components, rather than some other ordering?
No, a causal relation.
That seems quite arbitrary, but in any case, doesn't exclude other representations.
It has nothing to do with the representations. It has to do with the way those representations are produced. As I said before, you can produce a film using causal relations like this--it can be a film of a simulation. But in that case, it's the simulation, not the film, that produces the causal relations. The film just shows you whatever is on the next frame.

ETA: Oh, and you did in one post recognize the difference between a simulation and a film. It just has to sink in that the difference here is in fact a critical difference. You seem to be focused on everything but the difference between a simulation and a film that makes the simulation a simulation and the film a film; namely, the causal relations within a simulation that produce an outcome. Those causal relations are relations between real entities, and that's your real environment.

Come on... just digest it already. Everything is there. Get your mind out of representations and validity and other red herrings, and instead just look at what is there and what it is doing. Yeah, that! The actual entities causing things to happen... see it yet? If you don't, keep referring back to why you would dare call a thing a simulation in the first place. It'll sink in eventually, if you let it.

But it's okay if it doesn't sink in too. We'll continually point to those entities, and the fact that they are directly implied to exist by the fact that we have a simulation. And that they aren't there for the film. And you'll keep saying that we haven't demonstrated the difference for the film, even though this is entirely true about the simulation and the film. And we'll just keep going for another few thousand posts.
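To make the contrast above concrete, here is a deliberately trivial sketch (my own construction, not anyone's actual code): the simulation computes each state from the previous one, while the film just reveals frames that already exist, whatever they contain.

```python
# Trivial sketch (invented for illustration): a simulation generates each
# state *from* the previous state -- a causal relation -- whereas a film
# just shows whatever is on the next frame, regardless of its content.

def run_simulation(state, steps):
    frames = []
    for _ in range(steps):
        state = state + 1        # stand-in for dynamics: the next state is
        frames.append(state)     # caused by the current state
    return frames

def play_film(frames):
    for frame in frames:
        print(frame)             # playback only reveals pre-existing frames

# A film *of* a simulation: the causal work happens in run_simulation;
# play_film merely presents the already-fixed result.
play_film(run_simulation(0, 5))
```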
 
But it doesn't matter if you have the specific implementation and nobody believes you because it is so drastically different from what they think of as consciousness.
But that's the thing--it does matter. I'm refuting an argument, not making one. If they want to believe the thing isn't conscious, that's up to them.
It doesn't matter to anyone, not even me (and this is my profession), if a machine has agency and a sense of its agency yet the machine is so simple as to be useless as a conscious entity.
But I'm not trying to convince someone I have a conscious entity--we're talking about what is necessary for now, not what is sufficient. Punshhh is saying that he thinks being is a necessary condition for consciousness. And I'm trying to tell him that X is a necessary condition for his being.
 
What does that mean? Films "run". Do you mean that there has to be a time ordering of the different relationships between the components, rather than some other ordering? That seems quite arbitrary, but in any case, doesn't exclude other representations.

Films only "run" in the sense that individual still pictures are being shown to you in quick succession. No program is being "run" in any sense of the word, and there are no objects or entities on the movie.

Please make a minimum of effort, westprog.
 
What I find most impressive is the idea of having a slew of percept aggregators that sort of grab simpler percepts and put them together into more complex percepts, without any instruction from the core logic. That is a very interesting idea.
Yes, it is the architecture that caught my attention. It's something I'd thought about trying to tackle several times over the years, but I really didn't have the motivation to spend the amount of time and effort I knew it would take to design.

However, much of the rest of it is vague. I don't know if that is because they didn't want to get specific with stuff or if they actually didn't adhere to these novel concepts as tightly as they claim. For example, I would really like to know how they represented their "mission percepts" and furthermore how "mission goals" are chosen and ranked; I suspect it's with straightforward code like the stuff I use every day.
From the little I could glean from the article, and the FPS gaming environment they were 'aiming' at, I suspect their implementations were just a few trivially simple methods/functions. At this stage they're just seeing if the model and architecture hang together and 'work'. It looks to me like this kind of architecture could be extensible enough to support a large number of more complex routines using parallel threads/processors.

In this respect I agree partially with Leumas that machine consciousness similar to ours will probably require an infrastructure built for inference so that all of this stuff is implicit rather than explicit. By that I mean data structures and algorithms that do nothing but support logical inference, and the system itself needs to either learn the rules and which rules are of prime importance, or else it needs to be "loaded" with an already learned instance.
Yes, I see this architecture as a promising framework on which a far more sophisticated intelligent agent could be built. I would be surprised if recognisable consciousness arose without specific attention to providing such an agent with the additional features believed to underlie higher-level consciousness (e.g. internal model of self, narrative generator, etc.).
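For what it's worth, here is how I picture the aggregator idea; the names, thresholds, and data structures below are purely my own guesses, not the article's implementation.

```python
# Speculative sketch of the "percept aggregator" idea -- names and
# structure are my own guesses, not the article's actual code.
from dataclasses import dataclass

@dataclass
class Percept:
    label: str
    confidence: float

def edges_plus_motion(percepts):
    # one aggregator: edges + motion observed together yield a richer
    # "moving object" percept, with no involvement from the core logic
    labels = {p.label for p in percepts}
    if {"edges", "motion"} <= labels:
        return [Percept("moving_object", 0.8)]
    return []

AGGREGATORS = [edges_plus_motion]

def perceive(raw_percepts):
    percepts = list(raw_percepts)
    for aggregate in AGGREGATORS:        # aggregators fire on their own
        percepts += aggregate(percepts)
    return percepts

def core_logic(percepts):
    # the decision layer only ever consumes the aggregated percept pool
    return "evade" if any(p.label == "moving_object" for p in percepts) else "idle"

print(core_logic(perceive([Percept("edges", 0.9), Percept("motion", 0.7)])))
```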
 
You don't get anything new by adding the observation that you can't in practice speed up or slow down a human brain except relativistically.
Of course, this is being done all the time with human consciousness, albeit for very small temporal differentials, but even at noticeably relativistic speeds, a consciousness would have no trouble interacting with its local environment (its co-moving frame), despite appearing to be clocked at a very much slower or faster rate relative to a distant observer. I don't see this as being fundamentally different from a fast or slow-clocked artificial consciousness interacting with a similarly clocked virtual environment.
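As a toy way of seeing why the clock rate shouldn't matter from the inside (entirely my own illustration, not anything from this thread): if the agent and its simulated environment advance by the same simulated timestep, nothing inside the loop depends on how long each step takes in wall-clock time.

```python
# Toy illustration (my own): the agent and its simulated environment
# advance by the same simulated timestep, so the wall-clock speed of the
# host machine is invisible from inside the simulation.
import time

def run(wall_clock_delay, steps=5, dt=0.01):
    sim_time = agent_clock = 0.0
    for _ in range(steps):
        sim_time += dt                 # environment advances by dt
        agent_clock += dt              # agent's clock advances by the same dt
        time.sleep(wall_clock_delay)   # slow host or fast host -- irrelevant inside
    return sim_time, agent_clock

# Whether the host dawdles or races, agent and environment stay in lockstep.
print(run(0.0))
print(run(0.1))
```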
 
"approximate a living mind".

...If I ignore the alive bit then a mind is ...
If you ignore 'the alive bit', you're not answering the question. If you can't explain what you mean, why say it?

"with a sense of being and experience in the physical world".

Well, if I put being to one side then I do mean consciously self-aware.
See above.

Being is my magic bean and is where I feel inclined to consider the possibility of alternative ontologies to physical matter materialism.
Magic beans == woo.
 