
Explain consciousness to the layman.

First of all, just because code is written using branch instructions does not imply it is a gigantic "if/then" block. Conceptually, the operation of your neurons is the same as an "if/then" statement, yet multiple neurons acting together certainly produce results that you would not consider a gigantic "if/then" block.
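To make that concrete, here is a minimal Python sketch ( the weights are ones I picked by hand, purely for illustration ): each "neuron" below is literally a single if/then threshold, yet three of them wired together compute XOR, which no single threshold unit can.

```python
# Each "neuron" is literally an if/then: fire iff the weighted sum of its
# inputs plus a bias crosses zero.
def neuron(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Three such units wired together compute XOR -- something no single
# threshold unit can do, so the whole is more than one big if/then.
def xor(a, b):
    h1 = neuron([1, 1], -0.5, [a, b])        # fires if a OR b
    h2 = neuron([1, 1], -1.5, [a, b])        # fires if a AND b
    return neuron([1, -1], -0.5, [h1, h2])   # fires if OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))   # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```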
Nor did I mention, explicitly or implicitly, a 'gigantic "if/then" block'. Parallel processing is possible and not a new concept.

So, a trained artificial neural network is NOT a look-up table or if/then block or a set of database lookups. Information flows through an artificial neural network exactly like it does through a biological one -- if you tried to look at an individual neuron and make sense of it, you would have no chance. There isn't some variable in the neural network code that corresponds to anything you could make sense of at the level of the entire system.

Furthermore, you can't train neural networks by programming them. You program the code for the nodes ( neurons ), throw a bunch of them together, and then supply the system with repeated instances of something it is supposed to react to ( more or less ). What the system does when it "learns" is beyond your understanding -- the edge weights and behavior of the individual nodes are now out of anyone's hands. At this point it is genuinely no longer just "programmed" because the programmer has literally no idea whatsoever what any of the edge weights or individual node behavior means.
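A toy illustration of that point, assuming nothing beyond numpy and a made-up two-layer network ( the architecture, learning rate, and iteration count are arbitrary choices of mine; a different random seed may need more iterations ): the loop below just exposes the network to the same examples over and over, and what comes out is a pile of edge weights nobody chose.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # repeated inputs...
y = np.array([[0], [1], [1], [0]], float)              # ...and the desired reactions

W1 = rng.normal(size=(2, 4))    # random initial edge weights
W2 = rng.normal(size=(4, 1))

for _ in range(5000):                       # "training" = repeated exposure
    h = np.tanh(X @ W1)                     # hidden layer
    out = 1 / (1 + np.exp(-(h @ W2)))       # output layer
    g2 = (out - y) * out * (1 - out)        # error gradient at the output
    g1 = (g2 @ W2.T) * (1 - h ** 2)         # error gradient at the hidden layer
    W2 -= 0.5 * h.T @ g2                    # nudge the weights slightly...
    W1 -= 0.5 * X.T @ g1                    # ...toward less error

print(out.round(2))   # close to [[0],[1],[1],[0]] -- it "learned" the task
print(W1)             # but these learned numbers mean nothing to anyone
```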
Thanks. IIRC neural nets need to be trained by repeated data inputs to set specific variable values. Whether by node or by multiple nodes I don't know.

I don't know that Watson makes use of neural networks, though. I was just elaborating on one issue I saw with your language.

I can't find much information on the exact software architecture used by Watson, but based on some diagrams ( like the one wikipedia has ) for the high level flow I can guess how I myself would have programmed it to function if my boss told me to use that flow. However it will take some time to get it all together, gimme a bit.
No problem. I'm happy with what you are providing here. Others might appreciate more from you though.

"multiple drafts" refers to the idea that instead of an "if/then" type information flow, you have sub-modules that add their little "piece" to a "draft" representing either an incoming percept or an outgoing action ( which can be totally "imagined," meaning the percept doesn't come from external reality and the action doesn't go to external reality but rather come from/go to somewhere internal to the system ) that is then "evaluated" by the "core" module of the system, which in terms of consciousness is our conscious train of thought. This "core" does something with the "draft," adds "feedback" to the draft, and another iteration is performed. Thus this paradigm treats consciousness as a "core" that is "aware" of only the "drafts" being given to it, constantly churning over them, and spitting them back into the lower levels of the system with some feedback.
Ok, that makes sense to me. Thanks again.

"global workspace" refers to the idea that instead of an "if/then" information flow, you have sub-modules with access to a shared workspace of information, and when they see something that "interests" them they act on the information and then spit a modified version of it back into the workspace. There is also a "core" module, which again is the "conscious awareness" part, that monitors the shared workspace for anything it finds interesting. When such a thing is there, it becomes aware of it, does something to it, and spits it back into the shared workspace.

Note that in both of these models the "core" isn't necessarily something "core" to the functioning of the system, it could also just be whatever is "consciously aware" of stuff. It is obvious that we humans can do many complex tasks without really being conscious of them, or certainly that the actions in those tasks are not front and center in our conscious train of thought.

The fascinating thing about both of these models is that 1) they work really well with neural networks and 2) they align very well with the way human thought seems to occur.

For example, in the global workspace model, the following information flow makes sense ( a toy code sketch follows the list ):
0) someone turns around and there is a person there, so photons from that face hit the retina of the observer
1) percepts come in from the world and get placed in the global workspace after being filtered through our visual processing system
2) sub-modules look at the individual percepts and see if they can "aggregate" them into more complex percepts; if they can, they spit those more complex "aggregated" percepts into the workspace for other modules to then use.
3) at some point a sequence of sub-modules has built up a piece of information that the core module might recognize as "a person." Since this is sitting there in the global workspace the core module eventually recognizes that information.
4) the core module reacts to the "face" information somehow, perhaps by injecting information into the shared workspace that the body should respond to the face
5) sub-modules look at this "should respond" action and decompose it into possible actions, like speaking, waving a hand, whatever, and inject information about those smaller actions back into the shared workspace
6) maybe the core sees the possible responses and weights one higher than others, or maybe this is done by some other sub-module, in any case, a winning action is selected and put back into the shared workspace
7) this winning action is further decomposed into atomic actions by other sub-modules and the information is sent to the body, where it reacts properly ( by speaking or waving a hand or whatever ).
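Here is a toy code version of those steps; every module name and trigger rule is invented purely to make the flow concrete.

```python
# A shared "global workspace" that sub-modules read from and write to.
workspace = {"percepts from retina"}            # steps 0-1: filtered percepts

def aggregator(ws):                             # steps 2-3: build up "a person"
    if "percepts from retina" in ws:
        ws.add("a person")

def core(ws):                                   # step 4: the consciously aware part
    if "a person" in ws:
        ws.add("should respond")

def decomposer(ws):                             # step 5: propose concrete actions
    if "should respond" in ws:
        ws.update({"wave hand", "say hello"})

def selector(ws):                               # step 6: pick a winning action
    if "say hello" in ws:
        ws.add("winner: say hello")

for _ in range(4):                              # let information churn around
    for module in (selector, decomposer, core, aggregator):
        module(workspace)

print(workspace)   # step 7 would decompose "winner: say hello" into body commands
```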

Now think about just how close this is to our mechanism of operation -- when you see a person, do you see the individual parts of their body? No, of course not, your conscious awareness just recognizes them as a person in one fell swoop. Where did that aggregation come from? And when you respond to seeing a person, do you consciously decide what shapes your larynx should make as you vocalize? Do you decide how to move your arm to wave? Of course not, you just sort of "do" it, and sub-levels of your brain that you don't normally pay attention to decompose those high level actions into the atomic steps that your body should take to satisfy them. Do you even decide to "respond" when someone says "hello" or is it sort of automatic? This model accounts for why all of those things happen the way they do.

Finally, "associative mapping" refers to associative memory, which is formally known as content-addressable memory. This is memory that instead of addressing it by address, like in computer memory, you give it some piece of "content" and it "gives back" the thing it remembers that is "closest" to the input. Not surprisingly, neural networks use content-addressable memory.

Some fascinating things about such neural networks ( you can google "hopfield network" since that is the most famous kind ): the closer the "input" is to the actual "memory," the faster the system will converge on that memory ( sounds like your memory, doesn't it? ). The more "memories" the system has been trained on, the longer it takes to converge ( sounds like your memory, doesn't it? ). And finally, if the system is loaded with memories past some threshold number ( which is related to how many nodes the network has ), there is a higher chance of either converging on the wrong memory or not converging at all ( sounds like your memory, doesn't it? ).
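If you want to see that behavior yourself, here is a minimal Hopfield sketch; the 64-node size, the three random patterns, and the amount of corruption are all arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
memories = rng.choice([-1, 1], size=(3, N))       # three stored "memories"

W = sum(np.outer(m, m) for m in memories) / N     # Hebbian weight matrix
np.fill_diagonal(W, 0)                            # no self-connections

state = memories[0].copy()
state[:15] *= -1                                  # corrupt part of the "input"

for _ in range(10):                               # iterate until it settles
    state = np.sign(W @ state)
    state[state == 0] = 1                         # break ties consistently

print((state == memories[0]).mean())  # ~1.0: it converged on the stored memory
```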

One final note, which you hopefully find extremely fascinating, is that you can use content-addressable memory to perform a sort of implicit logical inference by varying the input. For example, when someone asks you "name some things that are red," what is the process your mind uses? I won't poison the question, but what you can do with content-addressable memory is first just throw "red" at it and see what comes back, and then start throwing multiple things at it, like "red vehicle" or "red fruit" or "red painting," and see what comes back, and then go even further with things like "red Rothko painting" or "red sunset." When you put stuff like that in, the system might spit back out memories such as "I saw that red Rothko painting at the Dallas Museum of Art" or "the prettiest sunset I ever saw was when I got that flat tire near El Paso," because those things are actual memories in the system.

I say that is "implicit inference" because logically the state "red car" does have a connection to "that red Audi I saw yesterday," and you can easily answer logical questions like "have you ever seen a green Audi?" by just coming up with zero memories of such a thing. No, I have never seen a green Audi -- and I didn't even need to parse the sentence like a computer such as Watson would need to parse it; I can rely on my associative memory to give me the logically correct answer.
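A crude sketch of that cue-composition idea, using word overlap as a stand-in for "closeness" ( the stored strings are just the made-up memories from above; real associative recall is obviously far richer than bag-of-words matching ):

```python
memories = [
    "that red audi I saw yesterday",
    "I saw that red rothko painting at the dallas museum of art",
    "the prettiest sunset I ever saw was when I got that flat tire near el paso",
]

def recall(cue):
    # score every memory by how many cue words it shares, return the closest
    words = set(cue.lower().split())
    score, best = max((len(words & set(m.split())), m) for m in memories)
    return best if score > 0 else None   # nothing comes back -> a logical "no"

print(recall("red painting"))     # the rothko memory
print(recall("red vehicle"))      # the audi memory (it matches on "red")
print(recall("purple elephant"))  # None -- zero memories IS the answer "no"
```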
Thanks again for helping me out. I plead wrongness on my table-lookup idea.

As an aside, a TV show on brain function implied strongly that only a few neurons were needed to store face images, and that's why face recognition is often accomplished very rapidly by humans.
 
Exactly! A perfect simulation of the brain is not the brain. It's a computer. No one has offered an explanation of why it wouldn't be conscious without resorting to metaphysics.

I'm still waiting for !kaggen to explain why she denies the ribosome is a machine.

Because that's not how it works. Somebody needs to explain why something that doesn't do what the brain does will be conscious like the brain. It's not up to anybody to come up with reasons why something shouldn't be conscious. It's up to the people promoting the idea to show why this particular simulation produces the actual thing itself.

Using the ill-defined nature of consciousness as a way to work around the problem is unconvincing. Not fully comprehending a phenomenon is not an opportunity to reproduce it - it's an obstacle to reproducing it.
 
No it doesn't, and it isn't. You just keep digging that hole.



No you didn't. AGAIN you've completely lost track of the conversation. This is something very common with you, ever since your first few posts on this forum.

I've noticed that you're very quick to claim that when people depart from the destination you think appropriate for the discussion, they are "missing the point," they are confused, they are losing track. Often this is because they fail to fall in line and accept what you think are compelling arguments.

For some reason, you are determined to chase up this particular comment about a particular way of arguing. It's clearly a diversion from the main thread, but since the main thread is just spinning its wheels and will do so forever (as was predicted on page one) we might as well argue about the way of arguing about the arguments.
 
Exactly what is it about machines or computers that makes them incapable of experiencing consciousness or feelings?

So - why is this an argument from ignorance? (Sorry for Mr Scott, who's probably long forgotten this post, but if it keeps getting dragged up...)

Firstly - isn't it just a question? Yes, it is - but it's a rhetorical question. The implication is clearly that there isn't anything about computers that makes them incapable of experiencing. Mr Scott isn't asking for information - he's forcefully putting forward a viewpoint.

What is the viewpoint? That it's up to the people who disagree with him to come up with proof that his idea is wrong.

It's certainly wrong in terms of burden of proof, but it also relies on our lack of knowledge about consciousness in order to imply that there's nothing to stop computers having consciousness or feelings. Since we don't know how human beings get consciousness or feelings, we can rely on that ignorance to surmise that computers might get it too. How do they get it? We dunno - BUT PROVE ME WRONG.
 
What is more exciting than machines doing things that humans can already do, however, is enabling humans to do things that at the moment only machines can do. If you can give a machine the capability to compute the best route from A to B in a nanosecond, speak every single known language, see X-rays, detect lying, carbon date a fossil, why not give that capability to a human? That's where Artificial Intelligence will/could really be powerful imho. The human/AI hybrid.


What is most interesting about giving humans whatever additional capabilities it may be possible to give them (in whatever ways it may be possible to do so)….is what kind of human being results. We saw fictional portrayals of this process in movies like The Matrix where various skills were ‘downloaded’ into human beings (not exactly a hybrid but integration of a kind). The actors then did their best to portray how they would feel upon now being a person with significantly enhanced capabilities (they didn’t seem too impressed…but it was just a movie). Recently we had the movie ‘Limitless’ where the character ingested some kind of drug and acquired significantly enhanced abilities (perception, languages, insight, logic, etc.). He seemed a bit more excited by his new-found skills (if I could do what he could do…I’d be…[…and a strange thing happened on the way to the zoo…a word called truth met a word called freedom and someone opened their eyes…]). Whether the abilities are acquired via temporary integration (Matrix), some kind of permanent hybrid (….Star Trek…the Borg [not a particularly appealing example]), or via some manner of chemical enhancement (Limitless)…it is the psychology (aka: the only truth we actually have access to) that is actually relevant.

However compelling it may be….in reality, it all seems not even remotely speculative. Human beings obviously have the capacity for significantly enhanced abilities in some way, shape, or form (as is evidenced by the abilities of savants, etc.). Whether AI integration might somehow ‘trigger’ these abilities or simply augment them can be nothing more than science fiction at this point.
 
As an aside, a TV show on brain function implied strongly that only a few neurons were needed to store face images, and that's why face recognition is often accomplished very rapidly by humans.

Yes, this is correct. More research has been done on animal vision systems than all other parts of the brain combined, so you can bank on it.

I will elaborate tomorrow.
 
The "magic bean" you keep referring to in the case of the PROGRAMMED computer is THE PROGRAMMER.....

You make a salient point. Because what the machine has, perhaps, is our minds 'outsourced'. Of course, it may well be that future computers write programmes for a new generation of machine, but that is just the initial human intelligent input, once removed. It's quite possible to see that future AI machines might move towards a self-sustaining evolution of intelligent machines, but their 'abiogenesis' (for want of a machine equivalent) has started with humans. The issue then is where our 'programme' came from. Which leads to a different way of looking at it...

We remove the distinction between human evolution and computer evolution. By making them part of the same linked chain, you see that the initial 'programming' (for humans) came from the universe via abiogenesis and the well-known (though still not fully understood) processes. The programming via humans for computers then just becomes an extension of this. The one leads to the other. As fish crawled out of the ocean, humanoids (or possibly other 'oids) crawl out of humans.
 
post #4472 rocketdodger

I missed this one yesterday - think you must have been editing it. Interesting info. However, the reason I bring up the issue of paradoxes is a theory that the binary bits 0 and 1 miss the oscillatory potential of four-value bits -- specifically, the paradox value i. Qubits are possibly closer to this, but I'm not sure if they really represent the 0, 1, -1, i values that maybe exist in human cognition. I could be wrong. In terms of form influencing function, however, this could be a significant difference.
 
…it is the psychology (aka: the only truth we actually have access to) that is actually relevant.

Ah, but given human psychology is a result of physical processes, we could change this also. Behaviour is driven by neurohormones (as rewards). For example, the attachment to life, empathy, pain avoidance. We assume these are enduring conditions. I'd say they are (currently), what makes us human. We fear those without all of them intact (psychopaths). But if you start understanding them, there is no reason why you couldn't change them. I have to admit, I find the whole concept unsettling, though fascinating.
 
The "magic bean" you keep referring to in the case of the PROGRAMMED computer is THE PROGRAMMER..... the programmer is the REASON (which you equivocally call magic bean) the programmed computer appears to be "intelligent" just like Punch and Judy appear to be moving and thus fooling SIMPLE CHILDISH minds into thinking that they are animate.

The REASON (not magic anything) that a programmed computer might FOOL some wishful thinkers into believing that it is intelligent is because there is IN FACT A REAL INTELLIGENCE behind it ......THE PROGRAMMER....the programmer is remote controlling (over time) the machine just like Punch is controlled.

The programmed computer appears to be intelligent due to the time-remote-control of the programmer who was clever enough to fool lesser minds into believing that his program is animate.

No, it's different. I'm a computer programmer and I know computers can figure out things on their own.

Does a neuron in a brain understand what its incoming spikes mean, or what its outgoing spikes mean? Does it know it's communicating a color, or a sound, or a face?

Three possibilities:

1) The brain was designed by a creator who understands everything that the brain will be processing, like a computer programmer or the person who prepared the lookup table for the Chinese room.

2) The brain evolved naturally to be able to process and learn to "understand" what it inputs from the environment, even though individual neurons and neural networks don't really understand what is being processed (like the guy in the Chinese Room).

3) The brain evolved naturally and acquired a metaphysical attribute of "understanding" which a man-made machine could never achieve.

I don't mean to straw man you or pigeon hole you, but if you could communicate your position in that kind of clear form like I have in 1,2,3 I'm sure everyone here would appreciate it.
 
You make a salient point. Because what the machine has, perhaps, is our minds 'outsourced'. Of course, it may well be that future computers write programmes for a new generation of machine, but that is just the initial human intelligent input, once removed. It's quite possible to see that future AI machines might move towards a self-sustaining evolution of intelligent machines, but their 'abiogenesis' (for want of a machine equivalent) has started with humans. The issue then is where our 'programme' came from. Which leads to a different way of looking at it...

We remove the distinction between human evolution and computer evolution. By making them part of the same linked chain, you see that the initial 'programming' (for humans) came from the universe via abiogenesis and the well-known (though still not fully understood) processes. The programming via humans for computers then just becomes an extension of this. The one leads to the other. As fish crawled out of the ocean, humanoids (or possibly other 'oids) crawl out of humans.

And gods crawled out of humanoids/AI, ad infinitum...
 
Ah, but given human psychology is a result of physical processes, we could change this also. Behaviour is driven by neurohormones (as rewards). For example, the attachment to life, empathy, pain avoidance. We assume these are enduring conditions. I'd say they are (currently), what makes us human. We fear those without all of them intact (psychopaths). But if you start understanding them, there is no reason why you couldn't change them. I have to admit, I find the whole concept unsettling, though fascinating.

In a sense the human body confines us; it is our cage or comfort zone. If it were removed and our minds retained their sentience and the physical world were as plastic as the world of our imagination, it would be an interesting if scary scenario, while being greatly enhanced/accelerated. Best wait until we're more highly evolved.
 
The REASON (not magic anything) that a programmed computer might FOOL some wishful thinkers into believing that it is intelligent is because there is IN FACT A REAL INTELLIGENCE behind it ......THE PROGRAMMER....the programmer is remote controlling (over time) the machine just like Punch is controlled.
False. I can program a computer to perform tasks that I don't know how to perform myself. I can program a computer to perform tasks appropriate for situations I did not foresee.
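For instance, a trivial sketch ( the equation is just one I made up, and I genuinely don't know its root offhand ):

```python
# Solve x**5 + x = 100 by bisection. I specified HOW to search,
# but I never knew -- and never typed in -- the answer itself.
def f(x):
    return x ** 5 + x - 100

lo, hi = 0.0, 10.0              # I only know the root lies somewhere in here
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid

print(lo)   # ~2.4996, a value that appears nowhere in the program
```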
 
False. I can program a computer to perform tasks that I don't know how to perform myself. I can program a computer to perform tasks appropriate for situations I did not foresee.


And the ability of the computer to do these things is directly related to the ability of the programmer to write a program that enables the computer to so operate. No capable programmer…no capable computer.

Is this rocket science or am I missing something?

Can you write a program that would enable a computer to determine an accurate definition for the word ‘consciousness’? That would definitely be something that you would not foresee!
 
And the ability of the computer to do these things is directly related to the ability of the programmer to write a program that enables the computer to so operate. No capable programmer…no capable computer.

Is this rocket science or am I missing something?

Well, that's not necessarily the case. Programmers built a computer that beat human world champions at chess, and another that beat the world champions of Jeopardy. It WAS at the level of "rocket science." We are even designing computers that evolve to do things they were not programmed to do. I found this link about one in only a few seconds. The canard "computers can only do what they are programmed to do" was doomed a long time ago.
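A minimal sketch of that "evolve" idea, in the spirit of the classic weasel program ( the target string and population sizes are arbitrary choices of mine ): I specify only a fitness measure and a mutation rule; no line of code spells out the sequence of changes that reaches the result.

```python
import random

random.seed(0)
TARGET = "conscious"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):                      # how many characters match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):                       # change one random character
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# start from pure noise: 100 random strings
pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

gen = 0
while max(map(fitness, pop)) < len(TARGET):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(80)]
    gen += 1

print(gen, max(pop, key=fitness))    # variation + selection finds "conscious"
```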

You must think no computer could ever be conscious. Ever. Explain exactly why.
 
Well, that's not necessarily the case. Programmers built a computer that beat human world champions at chess, and another that beat the world champions of Jeopardy. It WAS at the level of "rocket science." We are even designing computers that evolve to do things they were not programmed to do. I found this link about one in only a few seconds. The canard "computers can only do what they are programmed to do" was doomed a long time ago.

You must think no computer could ever be conscious. Ever. Explain exactly why.


I thought I would highlight a few relevant words / phrases.

The question is, quite simply, incoherent. We do not know what consciousness is, so how on earth can anyone decide if a computer can be it? You may as well ask me if a computer can be God? This nonsense about speculating ‘in principle’ is just that, nonsense. This is a unique case. All the evidence indicates that this is true. Human beings are, in fact, very special…and consciousness is, in fact, a unique problem....both in ways we are still very far from understanding...which, I think, is part of the issue with many of you. You (I mean, generally speaking, 'your' side of this debate) simply cannot digest a situation where you have to admit all-but-complete scientific ignorance about something that is individually and collectively so fundamental, significant....AND...personal.

I think I presented this rhetorical question before: How do you feel about being something that you have to admit you do not understand (and unless you actually know something the global cog sci community does not, that is...in fact...the case)? It's kind of a trick question, but not entirely irrelevant.

The only thing greater than our ignorance is our ignorance of our ignorance.
 
I don't really see an issue with the quality of the intelligence of an AI being dependent on the intelligence, skills and abilities of the people that develop it; it seems trivially obvious. The quality of human intelligence is partially dependent on inherited traits. So what?
 
I thought I would highlight a few relevant words / phrases.

The question is, quite simply, incoherent. We do not know what consciousness is, so how on earth can anyone decide if a computer can be it? You may as well ask me if a computer can be God? This nonsense about speculating ‘in principle’ is just that, nonsense. This is a unique case. All the evidence indicates that this is true. Human beings are, in fact, very special…and consciousness is, in fact, a unique problem....both in ways we are still very far from understanding...which, I think, is part of the issue with many of you. You (I mean, generally speaking, 'your' side of this debate) simply cannot digest a situation where you have to admit all-but-complete scientific ignorance about something that is individually and collectively so fundamental, significant....AND...personal.

I think I presented this rhetorical question before: How do you feel about being something that you have to admit you do not understand (and unless you actually know something the global cog sci community does not, that is...in fact...the case)? It's kind of a trick question, but not entirely irrelevant.

The only thing greater than our ignorance is our ignorance of our ignorance.

Science progresses. It is figuring out more and more stuff that seemed mysterious before, every day. We know a lot about how consciousness works already. Much more than you suspect.

If you think there is a metaphysical, incomputable aspect to it, come out of the closet and say so (and explain why you are so sure).
 
I don't really see an issue with the quality of the intelligence of an AI being dependent on the intelligence, skills and abilities of the people that develop it; it seems trivially obvious. The quality of human intelligence is partially dependent on inherited traits. So what?

We don't consider books as being intelligent in and of themselves, but they are certainly capable of amplifying human intelligence. Using technology to amplify human capacities is - actually, that's what technology is.
 