Explain consciousness to the layman.

Status
Not open for further replies.
But a real, human level intelligence, implemented in code? In a manner we could understand, like... like printing out a blueprint? That would be amazing. No other word fits. Just amazing.

The logic of The Selfish Gene is that when genes are replicators rather than genotypes, we get a different moral landscape than the one we might expect - one that includes empathy and altruism. I don't think anyone has thought through the sort of moral landscape that might arise if genes are no longer the replicator. It's quite possible that humans might, in our unfailing blind arrogance, not only create machines with artificial intelligence, but also with artificial stupidity.
 
post #4472 rocketdodger

I missed this one yesterday - think you must have been editing it. Interesting info. However, the reason I bring up the issue of paradoxes is a theory that binary bits of 0 and 1 miss the oscillatory potential of four-value bits, in particular the paradox value i. Qubits are possibly closer to this, but I'm not sure if they really represent the 0, 1, -1, i values that may exist in human cognition. I could be wrong.
Yes. For that you'll need two bits.
 
…yeah of course Scott…science explains everything…except what this universe is, where it comes from, what you are, or where you come from. Trivial stuff really.

Is this a thread about spirituality / mysticism / revelations etc. etc? Clearly not! (did I not point that out...VERY CLEARLY!). Another typical skeptic who just can’t stomach such words without experiencing a paroxysm of offense. You have now successfully joined the ranks of those whose intelligence I have not the slightest respect for. Don’t bother replying because I most certainly will not be. Adios.

I feel your pain, and I'm impressed by the straw man you've created of me. He sounds like a total butthead.

Whether or not there's a metaphysical component to consciousness is an important part of how it's explained to laymen. I just happen to believe there's no metaphysical component and that there's no evidence for it. When I invited you to offer specific evidence, you responded with a tantrum, an attack on my intelligence, and intent to ignore me. Typical woo.

Here's a thought on consciousness and the brain:

Assume our minds are metaphysical and can be separated and sent to heaven.

There is a specific module of the brain that is responsible for recognizing animals. When that module is damaged, such as by cancer or injury, people lose the ability to identify animals, even though they retain the ability to identify anything non-animal. Show them an animal and they have no clue what it is, unless they notice that, say, it has whiskers and orange and black stripes, and reason that it just MIGHT be a tiger.

If we die and go to heaven, we no longer have the benefit of that physical animal-identification module, left behind in our dead gray matter. So I'm just wondering -- will we be unable to identify animals in heaven? The question, I think, illuminates issues about the nature of consciousness.

Of course, a disembodied mind would lose an uncountable number of useful brain modules -- those wet neural networks of cellular machinery. We might not even be conscious after our brains die. Wouldn't that be a bummer!
 
I am conscious that this thread is getting a tad personalised. If you can't make your argument without attacking the character of another Member, don't post.
Replying to this modbox in thread will be off topic  Posted By: Darat
 
This is a problem AI programmers will have to face. I for one don't think the specialty of humanity hinges on artificial consciousness not being able to be created. Maybe it is our specialness that allows us to be smart enough to create it. So the proposition that "Because we are special beings created by God we will never be able to create artificial consciousness." or that "If we are able to create artificial consciousness then we are blocks of dirt and not special." will never hold any meaning for me. The public needs to see this.

Also the days of consciousness being a complete mystery are over. So this "excuse" for not being able to create it holds no weight. Biological psychology has a lot to say about the operational definition of consciousness.

"One might use definitions that rely on operations in order to avoid the troubles associated with attempting to define things in terms of some intrinsic essence. ... An example of an operational definition might be defining the weight of an object in terms of the numbers that appear when that object is placed on a weighing scale. The weight then, is whatever results from following the (weight) measurement procedure, which should be repeatable by anyone. This is in contrast to Operationalization that uses theoretical definitions." http://en.wikipedia.org/wiki/Operational_definition

So the essence of consciousness may remain unknown while we still come up with a useful operational definition for purposes of creating it in another substrate.

"The nature of consciousness is like that of a software process. It is immaterial but possible to be generated by any physical substrate able to perform the required computation." -- Leading AI expert Raul Arrabales Moreno

Look at how biological psychology operationally defines consciousness, and all they have to say about it:

"Consciousness is difficult to define, but for practical purposes researchers use this operational definition: If a cooperative person reports the presence of one stimulus and cannot report the presence of a second stimulus, then he or she was conscious of the first and not of the second. ... By this definition consciousness is almost synonymous with attention." -- Biological Psychology by James W. Kalat
http://books.google.com/books?id=Zl...=operational definition consciousness&f=false
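Kalat's definition is essentially a behavioural test, which can be rendered as a toy predicate (purely illustrative; the function name and the "report" set are hypothetical, not from the book):

```python
def conscious_of(reported_stimuli, stimulus):
    """Operationally: the subject is conscious of a stimulus
    exactly when they can report its presence."""
    return stimulus in reported_stimuli

reports = {"tone"}                      # subject reports hearing a tone
print(conscious_of(reports, "tone"))    # True  -> conscious of the tone
print(conscious_of(reports, "flash"))   # False -> not conscious of the flash
```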

So that's what the layman should probably be told, and keep in mind.

Ooh look! A definition of consciousness for the layman! Yay! And it only took four thousand five hundred and thirty eight posts!

I bet Data would have got us there sooner, like pushing potatoes through a sponge...
 
Your suggestion that the majority recognize the difficulties of instantiating human identity into AI can only be regarded as at best naive. There would appear to be an enormous degree of not just ignorance, but outright misinformation amongst the general population.
Perhaps I should have been clearer; I was talking about people involved in or with an interest in this technology. If you're concerned about ignorance and misinformation of hoi polloi, the answer is to provide them with information and education. Have a word with the educational establishment and the media.

Reducing human identity to intelligible / manageable AI components encourages a distorted impression of human nature…especially amongst the less educated or gullible (and in the popular media [where a great many derive their information]…which typically reduces everything to the lowest common denominator and simplest possible explanation / description).
For some reason this reminds me of the early church railing against science, and its implicit assumption of superiority as guardians 'protecting the souls of the ignorant and gullible from this dangerous information'. I think there is no 'reduction' of human identity going on, and that you underestimate the public. Naturally we, as a society, should attempt to educate the ignorant and protect the gullible.

... When / if the AGI time-bomb ever explodes, human beings had better be very clear about who and what they are, or those predicting a catastrophe may very well be right.
Most people involved would agree there are potential risks with powerful technologies, and there have already been numerous discussions, meetings, seminars, books, and articles about the rewards and risks of AI, and how we can maximize the one while minimizing the other.

The risks of man's creations turning malign are high in the public imagination, and always have been - the hubris theme is a popular one in fiction, from golems to Frankenstein, from Big Brother and Terminator to grey goo. A Henny Penny (Chicken Little) arm-waving hysteria is not a mature or sensible way to deal with these issues.
 
The logic of The Selfish Gene is that when genes are replicators rather than genotypes, we get a different moral landscape than the one we might expect - one that includes empathy and altruism. I don't think anyone has thought through the sort of moral landscape that might arise if genes are no longer the replicator. It's quite possible that humans might, in our unfailing blind arrogance, not only create machines with artificial intelligence, but also with artificial stupidity.
So we should not look up at the sky, lest it fall on us.
 
And what makes the computationalists lash out* at the people who hurt them? That's easy. It's the fear that maybe there is something special, something unique, something unprecedented about life and humanity. Something not reproduced anywhere else. It's never so much about evidence and science - it's about a need and a belief that human beings can't be special.

*And check this and other threads to see who's making the personal attacks, who's coming up with the ad homs, who's constantly looking for an agenda.

I have no issue with humans being special as long as it has nothing to do with skydaddy.

In fact, let's be perfectly clear -- the history of this debate has been ME trying to argue that humans ( and life and computers ) are special compared to other stuff because of how we behave, and only how we behave, while YOU have consistently rejected such arguments, trying to push the idea that humans and life can ONLY be special in a certain way, a way that you can't currently define.

Do you deny that your position is basically that the behavior of humans and bowls of soup is effectively the same from an objective standpoint, and that only intelligent subjective based metrics ( I.E. those of another human, or a God ) can be used to differentiate the two sets of behaviors?
 
How does AI human identity instantiation trivialize humanity (intentionally or not)?

What you don't seem to realize is that this is an utterly pointless question at our current stage.

When humans stop raping, killing, stealing, torturing, mutilating ... and any other nasty verb ... each other, I.E. trivializing the humanity of each other, then maybe it will be important to consider what strong A.I. will mean.

The fact of the matter is, people are evolved monkeys ( whether you like it or not ) and we act like it -- if we don't like someone, if they are not in our monkeysphere, we don't see them as human "like us" anyway. We just call them human because the rational part of our brain knows they are the same species as us, based on the metric that we can mate with them and have children.

So all this "but we know someone is conscious like us because they are a human being" is utter garbage. I don't know that the Amazon Indian people, who shoot arrows at monkeys in the trees and put spikes through their nose, are conscious like me. In fact I would have more confidence that a robot could be conscious like me than I do that those other humans are conscious like me, yet I know they are humans. I certainly have more confidence that you are conscious like me than an Amazon Indian is conscious like me, because you speak the same language as I do and our cultural behavior is probably much more similar.

How can one explain such a sentiment, if all humans respect all other humans as equivalent conscious entities? You can't, because we don't.
 
I would suggest that anyone who is convinced that an encounter with a human level machine intelligence (or as it’s also known… AGI…artificial general intelligence) would be anything but fundamentally disturbing is, quite simply, lying to themselves.

It depends.

I can see that a person, having grown to full adulthood and never seen their own image in a photo, mirror, reflection from water, whatever, might be fundamentally disturbed ( at least transiently ) when they did look in a mirror for the first time.

However one that grows up around photos, mirrors, etc, and is used to seeing that, isn't so traumatized when they look in the mirror for the first time after becoming self aware. It is just another "oh, that thing I have been seeing in the mirror all this time is me, I'll be darned."

Likewise, I don't expect to be disturbed when I finally run into an AGI. Certainly not if I am the one that developed it! Because for years I have been slowly realizing that actually I am just a biological neural network embodied in a mass of chemical machinery, and by now I am very comfortable with the idea.

In fact I find it extremely enlightening to know exactly what I am ( or at least think I know ), I imagine this is how religious people feel when they are similarly enlightened. So if anything coming face to face with an AGI will just confirm that this knowledge I had of myself is indeed correct, and I bet that will feel pretty good.

If anything, I will just be jealous, because an AGI could potentially modify itself, whereas I cannot !!

However, someone who is religious, and thinks they are special because of a spirit that skydaddy gave them, is likely to be very disturbed, I agree.
 
Explanation?

You can represent negative and imaginary numbers in binary code, but it takes two bits to do it. 01 would be positive one, and 11 would be negative one (assuming sign-and-magnitude notation). You can do the same thing with imaginary numbers, with 01 being real and 11 being i.

You can even represent positive and negative imaginary numbers by using three bits.
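The flag-bit scheme above reads naturally as a tiny decoder (an illustrative sketch of my own; the function names are made up):

```python
# High bit is a sign flag, low bit the magnitude (sign-and-magnitude).
def decode_signed(bits):
    sign, mag = bits
    return -mag if sign else mag

assert decode_signed((0, 1)) == 1    # 01 -> +1
assert decode_signed((1, 1)) == -1   # 11 -> -1

# Same trick with an "imaginary" flag instead of a sign flag.
def decode_complex(bits):
    imag, mag = bits
    return mag * 1j if imag else mag

assert decode_complex((0, 1)) == 1   # 01 -> real 1
assert decode_complex((1, 1)) == 1j  # 11 -> i

# Three bits cover signed imaginary values: (sign, imag, magnitude).
def decode3(bits):
    sign, imag, mag = bits
    v = mag * 1j if imag else mag
    return -v if sign else v

assert decode3((1, 1, 1)) == -1j     # 111 -> -i
```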
 
This

http://www.scholarpedia.org/article/Attractor_network

should remove any latent doubts regarding the "no objective definition" nonsense propagated by piggy, westprog, and others.

One thing I didn't know is that associative memory is subtly different from content-addressable memory, even though many sources treat them as equivalent: using an attractor network to make a logical association between an input and a different output is associative memory, while using such a network to find the stored output closest to a given input is content-addressable memory.
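As a concrete illustration of the content-addressable case, here is a minimal Hopfield-style attractor network (a toy sketch of my own, not taken from the Scholarpedia article): a corrupted input state relaxes to the closest stored pattern.

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product learning; patterns are rows of +/-1 values."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / len(patterns)

def recall(w, state, max_steps=10):
    """Synchronously update until the state stops changing (an attractor)."""
    for _ in range(max_steps):
        new = np.sign(w @ state)
        new[new == 0] = 1          # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Two orthogonal 8-unit patterns serve as the network's stored "memories".
stored = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                   [1, -1, 1, -1, 1, -1, 1, -1]])
w = train(stored)

noisy = stored[0].copy()
noisy[0] = -1                      # corrupt one unit
print(recall(w, noisy))            # relaxes back to the first stored pattern
```

Feed it a partial or corrupted cue and it settles to the nearest stored pattern: content-addressable memory. A hetero-associative variant would instead learn to map each input pattern to a different output pattern.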

I am currently in the midst of reading this:

http://www.aisb.org.uk/publications/proceedings/aisb05/7_MachConsc_Final.pdf

which is seven years old yet contains information I didn't even know we humans knew.

Basically, all the non-computationalists on this thread are just so wrong it isn't even funny anymore.
 
This

http://www.scholarpedia.org/article/Attractor_network

should remove any latent doubts regarding the "no objective definition" nonsense propagated by piggy, westprog, and others.
...
I am currently in the midst of reading this:

http://www.aisb.org.uk/publications/proceedings/aisb05/7_MachConsc_Final.pdf...
Interesting links, thanks. For those who may be interested, the 'Attractor Network' article is part of the Encyclopedia of Computational Intelligence.
 
Caution and concern are not the same as feeling disturbed.

In other news, "we don't know what a human being is" is complete nonsense. We know exactly what humans are. We don't know everything about humans, but we do know what humans are, in the same way that we don't know everything that's in the ocean but we know what the ocean is.

It's a fine distinction, but it's an important one for the purpose of this discussion, so I figured someone should point it out.


Talk about déjà vu. True to form as ever Argent. Is it just me, or is there some kind of contradiction between the statement ‘we know exactly what we are’ and the statement ‘we don’t know everything about humans’. So is it ‘complete nonsense’, (like the last time you accused me of being…what was it…’completely ignorant’…or something)…or is it just partial nonsense. I guess you haven’t been following this thread too long. If you go back a couple dozen pages you’ll arrive at a quote that appears to represent the current consensus in the international cog sci community (remember Geraint Rees …it comes directly from his bunch). I’ll summarize it for you: ‘ We don’t know what consciousness is and we don’t know how the brain creates it.’ Pretty much their words exactly. If you want to believe otherwise, go right ahead. Maybe Pixy will be supporting you again, maybe not (hell, Pixy even believes in free will now…don’t you Pixy…or was that ‘maybe’ just maybe?).

So we have the words ‘don’t know what it is and don’t know how it’s created’. Sure doesn’t seem to be a lot of ambiguity there. Of course there is a great deal understood about various aspects of people and how they work…but there also seems to be a great deal that isn’t. The summary of the situation would appear to be exactly as Chomsky described it: “Our understanding is thin and likely to remain so.” If anything, it would seem that it’s the computationalists (and related perspectives) who are the neurotic ones (…”humans aren’t special…we know what we are…“…just does not seem to be [for lack of a better word] true).

But useful just the same. A clear example of the contradiction explicit in that question I asked (which, as yet, remains unanswered [just a note Scott….I said I’ve no interest in your BS and I meant it…if you want to waste your time responding to my posts…go right ahead…I don’t bother anymore with juvenile skeptics who can’t handle a simple mystical reference without becoming apoplectic]).

On the one hand, the international cog sci community is unequivocally clear that ‘we do not know what consciousness is or how it is created’ (…come to think of it, it’s a bit of a paradox for the thing that is making the statement to attempt to describe ‘its’ own ignorance of ‘its’ own ability to admit ‘its’ own ignorance of the ability to admit ‘its’ own ignorance….or something like that).

On the other hand…this seems to flatly contradict our everyday experience. Like ‘…whaddya mean, I don’t know what I am!…I don’t feel like I don’t know what I am…quite the opposite…’ So we don’t know what we are…but we do know what we are. What’s the explanation for this?...and how might it be relevant to any attempt to satisfy the OP?


Quite the contrary, I'm having trouble thinking of many things I'd find more absolutely fascinating. Alien life would be one. Intelligent alien life, certainly. But a real, human level intelligence, implemented in code? In a manner we could understand, like... like printing out a blueprint? That would be amazing. No other word fits. Just amazing.

I'm sorry the prospect scares you. I truly am.


Scares me?!?!…it doesn’t ‘scare’ me. It’s academic until it isn’t. What is amazing is just how amazingly simplistic the perspectives on this issue often are (it reminds me of the debates that often surround such issues as euthanasia). The uncertainty level goes right off the charts…on every available social and psychological metric.

‘In a manner “we” would understand’?!?!?!? Which ‘we’ are you talking about? You…me…Justin Bieber…Bill Gates…the president…? What if it occurred as a result of some unexpected anomaly (has science ever progressed in such ways before…no, of course not). All of a sudden there exists this ‘thing’ that has the capacity to conclude that it has the capacity to reach conclusions of its own. What if “we” don’t understand it (hell, we don’t even remotely understand ourselves, so why this blind-faith automatic assumption that some fictional ‘we’ would ‘understand’ something on that order of sophistication)?

It could (and almost inevitably would) create its own paradigms of behavior, which would mean…what? You function as a result of massive intuitive assumptions about the coherence and robustness of your conceptual framework. If those frameworks are suddenly challenged by a fundamentally different one, which will prevail (just how disorienting can disorienting be)? You may suddenly discover just how fragile you actually are when another paradigm asserts its own conditions of being (maybe ‘they’ would decide that people like you wouldn’t be allowed anywhere near ‘it’ for that very reason…would it still be amazing then?).

But all of this is rampant speculation. But no, it isn’t. This is what we are…what the cog sci community is currently attempting to adjudicate the reality of…and there does exist…on some perhaps distant horizon…an HLMI (and it’s currently being reverse-engineered through AI, so it’s hardly irrelevant). What ‘it’ will do, or be, nobody knows…partly because nobody (?) is yet clear about what we can do, or be.

Look at how biological psychology operationally defines consciousness, and all they have to say about it:

So that's what the layman should probably be told, and keep in mind.


The impression I get is that the cog sci community has a lot of different perspectives on the issues of what it’s dealing with. One point is…what actually is it that the public needs to see…and who is going to make these decisions?

These are not unlike the ethical / moral issues that the bio-medical community faces regarding questions like conception (how much choice should there be?), suffering, life-extending practices, euthanasia, etc. etc. From what I understand, many medical schools include mandatory courses on ethics in their curriculums. I wonder if the same can be said of computer science? From what I can see, this is not the case. Should it be?


I have no issue with humans being special as long as it has nothing to do with skydaddy.

In fact, let's be perfectly clear -- the history of this debate has been ME trying to argue that humans ( and life and computers ) are special compared to other stuff because of how we behave, and only how we behave, while YOU have consistently rejected such arguments, trying to push the idea that humans and life can ONLY be special in a certain way, a way that you can't currently define.


But the simple and unavoidable fact seems to be that we cannot, in fact, define whatever it is that makes us special. You can argue the details (behavior etc.) until you’re blue in the face, but the issues are unresolved. To whatever degree anything can be indisputable, that most certainly is. When there are issues of this magnitude that exist to such a degree of uncertainty, dismissing possibilities for no other reason than you find them distasteful is, at the very least, premature.

As for your certainty that you would be uber-cool when facing down an AGI for the first time….to me that simply betrays an admission of deceit. Have a look at my final comment (somewhere way down below). How can you possibly claim to know exactly what you are….when the blatant scientific consensus is that there does not exist an understanding of what you are. Either you’re lying, or you’re deluded, or you know something the international cog sci community does not know (or [gasp], you’re betraying religious behavior). The fact that you find your condition ‘enlightening’ is telling though. Enlightening. Why do you suppose people prefer that condition…even if they’re wrong about what they’re enlightened about (as you must unavoidably be)? I hate to say it Dodger…but you’re beginning to sound positively religious.

Ooh look! A definition of consciousness for the layman! Yay! And it only took four thousand five hundred and thirty eight posts!


I wonder if it takes a degree in biological psychology to conclude that if a person says they see a tree then they are experiencing the seeing of a tree?


Perhaps I should have been clearer; I was talking about people involved in or with an interest in this technology. If you're concerned about ignorance and misinformation of hoi polloi, the answer is to provide them with information and education. Have a word with the educational establishment and the media.


For some reason this reminds me of the early church railing against science, and its implicit assumption of superiority as guardians 'protecting the souls of the ignorant and gullible from this dangerous information'. I think there is no 'reduction' of human identity going on, and that you underestimate the public. Naturally we, as a society, should attempt to educate the ignorant and protect the gullible.


Most people involved would agree there are potential risks with powerful technologies, and there have already been numerous discussions, meetings, seminars, books, and articles about the rewards and risks of AI, and how we can maximize the one while minimizing the other.

The risks of man's creations turning malign are high in the public imagination, and always have been - the hubris theme is a popular one in fiction, from golems to Frankenstein, from Big Brother and Terminator to grey goo. A Henny Penny (Chicken Little) arm-waving hysteria is not a mature or sensible way to deal with these issues.


…so what is?... (and I don’t quite see my position as arm-waving hysteria…when I advocate smashing computers and burning AI researchers at the stake…that’s arm-waving hysteria)

Quite obviously….the ‘hoi polloi’, as you call them, have a significantly distorted understanding of what AI is (as is apparent from the blatant disparity between the results of those surveys). There are some interesting papers about the cognitive biases inherent in this issue (and what may explain them). The results of those surveys quite clearly illustrate exactly the point I was making…the simplification / trivialization / commodification of human nature results in distortions of understanding (individual and collective). “Human intelligence…nothing to it…we’ll be seeing R2D2 at Sears in my lifetime (but why can’t I understand why my teenage son is so depressed????)!“ I’m certainly not laying the blame at the feet of AI but it can encourage a trend.

Your confidence in the capacity of the general public to accurately adjudicate the situation is touching but there is little doubt that if another poll were taken tomorrow based on the assumption of the imminent introduction of full AGI (‘Data’ with all the trimmings….@ WalMart for $49.95)… in all likelihood the results would be overwhelmingly positive (of course…you might also have to deal with an equally irrational backlash response…depends which way the winds are blowing). Should the ‘general public’ decide such issues (they certainly don’t seem too knowledgeable if the results of those surveys are any example)? If not…who? Who has the authority, understanding, insight, or wisdom (aka: humanity)….or are any of these even necessary?

As I pointed out earlier…the bio-medical community includes moral / ethical considerations in many of its activities…and many countries now practice legislative oversight of medical procedures…especially those involving reproductive issues (precisely because there are fundamental social, moral, and ethical issues involved). Reproductive issues…isn’t AGI a reproductive issue? Should the same practice be considered for AI, or should we just let the obviously well-educated general public decide?

It’s actually amusing to see how many here are convinced that mine is an alarmist position…chicken little and all that. But ask for some kind of coherent explanation of why…and we get…faith. Things will work out, nothing bad will happen, people know what they’re doing, the world is a good place, it’s not complicated, etc. etc. Impressive.

…and in closing (because all good things must come to an end)…just speaking hypothetically of course…if anyone were to encounter a full AGI (I would include myself amongst the ranks of those in that first survey who are convinced that AGI is not possible…but for the sake of argument…) you would be encountering a ‘thing’ that actually knows what it means to be a human being (that’s what AGI means…complete human level intelligence). How many human beings do you know that can conclusively say they unconditionally know what it means to be a human being (nobody here can even provide a simple description of consciousness…the very thing that we all are!)? Thought so. Now tell me you wouldn’t find it disturbing (wait for it….).
 
Explanation?
What you are describing - 0, 1, -1, i - is known as four-valued logic. It's not as widely used in computing as three-valued logic, but it's dead easy to implement, if sometimes difficult to fully wrap your head around.

While there are exactly 16 distinct two-input binary logical functions, there are 19,683 for three-valued logic and 4,294,967,296 for four-valued logic. With two-valued logic it's easy to list them all and assign them meaningful names, but with three and four-valued logic you just can't.

But the point is, binary computers can and do work with many-valued logic; they just use multiple bits to represent a single logical state. For four-valued logic, you need two bits.
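Those counts follow from a simple formula: a two-input gate over n logic values has n x n possible input pairs, each of which can map to any of the n outputs, giving n ** (n ** 2) distinct functions. A quick sanity check (my own sketch):

```python
def num_two_input_functions(n):
    """Number of distinct two-input gates in n-valued logic: n ** (n * n)."""
    return n ** (n * n)

print(num_two_input_functions(2))  # 16
print(num_two_input_functions(3))  # 19683
print(num_two_input_functions(4))  # 4294967296
```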
 
Talk about déjà vu. True to form as ever Argent.

Same to you.

Is it just me, or is there some kind of contradiction between the statement ‘we know exactly what we are’ and the statement ‘we don’t know everything about humans’.

It's just you.

I’ll summarize it for you: ‘ We don’t know what consciousness is and we don’t know how the brain creates it.’ Pretty much their words exactly.

And I'll summarize it for you: one quote, taken without any context whatsoever, and from several years ago, does not "accurately depict the scientific consensus". You have just been given several links, in fact, which explain pretty much exactly what it is.

But the simple and unavoidable fact seems to be that we cannot, in fact, define whatever it is that makes us special.

No one has shown that there is anything that makes us special.
 
And I'll summarize it for you: one quote, taken without any context whatsoever, and from several years ago, does not "accurately depict the scientific consensus". You have just been given several links, in fact, which explain pretty much exactly what it is.


So for old times sake I’ll indulge you. Haven’t changed a bit. I guess you really didn’t bother to check the thread. That quote was reviewed barely four weeks ago by quite a range of currently active cognitive scientists…including a few of its authors. Every one of them fully agreed that it was still not only entirely accurate but that it does, in fact, represent the consensus of the cog sci community. Do I have the emails? Yes. Am I going to show them to you? No. My need for your approval ranks somewhere below my desire to experience dementia.

…next question

No one has shown that there is anything that makes us special.


Scott Huettel (professor of neuroscience and director of the institute for brain science…Duke University): "The human brain is the most complex object in the known universe."
Dan Dennett (..etc. etc.): "Consciousness is the last remaining mystery."

…but apart from that, nothing special. You also seem to contradict what RocketDodger has had to say about the issue…but I’ll let you take that up with him.

In future Argent, just assume I have not the slightest interest in your opinion or your conclusions. I would rather have sex with my rug. Reply if you want. I won’t be.
 
So for old times sake I’ll indulge you. Haven’t changed a bit. I guess you really didn’t bother to check the thread. That quote was reviewed barely four weeks ago by quite a range of currently active cognitive scientists…including a few of its authors. Every one of them fully agreed that it was still not only entirely accurate but that it does, in fact, represent the consensus of the cog sci community. Do I have the emails? Yes. Am I going to show them to you? No. My need for your approval ranks somewhere below my desire to experience dementia.

…next question




Scott Huettel (professor of neuroscience and director of the institute for brain science…Duke University): "The human brain is the most complex object in the known universe."
Dan Dennett (..etc. etc.): "Consciousness is the last remaining mystery."

…but apart from that, nothing special. You also seem to contradict what RocketDodger has had to say about the issue…but I’ll let you take that up with him.

In future Argent, just assume I have not the slightest interest in your opinion or your conclusions. I would rather have sex with my rug. Reply if you want. I won’t be.

Says the human brain.
 