
Explain consciousness to the layman.

Ok, I misspoke then.

However, I have to wonder why, if you are an A.I. programmer, you ever believed that a syntactic chatbot understood anything.

The minute I learned about true reasoning in the first A.I. course I took, I knew that a chatbot was just smoke and mirrors.

You have jumped to an unwarranted conclusion. Who says I ever believed chatbots had true cognitive thought? I didn't know how they were programmed but was shocked by their claims of "understanding English" and giving machines "the power of thought". I didn't believe it but needed a way to prove it, even to non-programming simpletons. Thus I came up with the Syntactic BS Detector Test. I did this via the field of semiotics, which I picked up from Searle's Chinese Room argument. It appears many programmers (and the general public) have failed to do this and are being fooled by syntactic-only machines.
 
But if these syntactic-semantic machines fail the Syntactic BS Detector Test as badly as the modern chatbots (the modern-day examples of "thinking machines") do, then they also are not truly thinking, i.e., they don't understand a damn word of what you are saying.

I don't understand why you consider chatbots as the flagship example of thinking machines.

Have you watched the Discovery Channel in, oh I dunno, the last 20 years?

You have jumped to an unwarranted conclusion (again). Who said I consider chatbots as the flagship example of thinking machines? It is the hackers' outlandish claims, which lead simpletons to believe that these are true "thinking machines," that got my attention. This makes them an accessible target that must be looked into. That's all.

You need to stop trying to "win," and so jumping to unwarranted conclusions to bolster your side. That's not what analysis is about. Thank you.
 
All of these show that thinking like a human requires being a human. Now they're going to try to map that in broad brush strokes, but it goes all the way down to the micro level, until you get to the fact that to think like a human you have to be a human. The system is interconnected and interdependent. Data is form.

Well be careful with your wording -- to think "like" a human only requires being "like" a human.

To think "exactly like" a human requires being "exactly like" a human.

The interesting question is this: given that no two humans think exactly alike, is it possible to get a machine to think "as similar to me as another human thinks like me?" Maybe, maybe not.

However I don't doubt that a machine like Data certainly exhibits many of the same subjective experiences as most humans.
 
You need to stop trying to "win," and so jumping to unwarranted conclusions to bolster your side. That's not what analysis is about. Thank you.

Sorry if it came off that way, that isn't my intention.

Please realize that the history of this debate, especially on this forum, is mired in misdirection caused by people repeatedly treating decades old obsolete technology as "the best of the best" so to speak, and making similarly obsolete arguments based on this false information.

So I am very sensitive to that, I didn't mean to jump on you.
 
Er...then it's not a brain simulation, is it? It's a brain. So instead of AI you have a human brain with some robotic limbs. Behaving in the plastic way that human brains do to input. So what?

The "so what" is that, given the history of this thread, you are far closer to the position of people like Pixy and myself than you are to the others.

Because we just spent 50 pages arguing about why a sufficiently detailed simulation of a brain, hooked up to a physical human body, would indeed produce consciousness just like a biological brain.

Turns out, a number of people wholeheartedly disagree. A number of people contend that anything running in a computer just will never be conscious, even if it is a particle scale simulation of the brain.

Meaning, it is good to hear your position on that scenario. It sounds like your primary argument is that human-like consciousness requires human-like information processing? Which I don't think anyone disagrees with, even Pixy.
 
I have an auto-immune dis-ease called Ankylosing Spondylitis.
Had it since I was 20.
I am now 42.

What I have learnt from being specifically dis-eased with an auto-immune dis-ease is that my thoughts are profoundly affected by my feeling pain in my joints.
Dis-ease is really the feeling of uniqueness. The feeling and the corresponding thought that you have is something others around you don't have. In the set of all your feelings there are percepts which are unique to your brain.

All doctors are doing is giving you the corresponding thought to what you feel. The only "thing" is what you feel. Feeling is our direct relationship with reality. It is the only monism. All knowledge is empirical.

Kant had things back to front. The senses don't lie, the thoughts do. There is only sense phenomena and these we experience directly all the time. Thoughts are simply what we invent to differentiate these phenomena. There is no such "thing" as a "horse". There are only real sense phenomena which we experience and we invented a word "horse" to summarize these sense phenomena. Phenomena = feelings = percepts.

The scientific method is simply a way of improving our naming skills. It's not like the names we predict are what we actually find. No, all scientific knowledge is right till it's wrong. Simply because a name contains a set of percepts does not mean it contains all the percepts. If we experience new percepts which make our set distinct then we find a new name for this new set.

I have the thoughts I have because of the dis-ease I feel. I identify myself by the thoughts I have had and continue to have.
Everyone experiences some dis-ease in life. Some chronic, some acute. Even the thought "I have no feelings" is a type of dis-ease which will affect one's thoughts.

Our characters are the feelings-corresponding thoughts acted out/ willed.
Our willing corresponds to a set of thoughts each of which correspond to a set of feelings.
How our thoughts form sets of willing is what psychologists study.
The problem is that these thoughts correspond to sets of percepts. The approach of ignoring certain percepts which have shaped our thoughts is medieval. If we are to have health we need to tackle the percepts which are the cause of the dis-ease.
Health can only originate through the senses. When I take my anti-inflammatories they help stop the sense of pain so I can move from the set of uniqueness towards the set of the collective. I feel better being like others again. Being at-ease again. I get a chance to realize what I want. I realize that the percepts I had, noise for instance, were making me dis-eased and I need to avoid them in future. This shapes my thoughts and willful action about doing this in the future so that I won't have those percepts again which made me feel dis-eased.

This relationship between my percepts, concepts and action is what makes me me (i.e., my "I", my consciousness).
I require dis-ease to establish this distinction. No dis-ease no distinction.

Computers that feel no dis-ease will not have consciousness by definition of consciousness being what makes them feel distinct.

Consciousness is either distinct or else it's meaningless, right Pixy?

But ... none of this is inconsistent with anything we are talking about. At least, nothing that I am talking about.

I fully understand, and embrace, the idea that for an A.I. to be conscious like a human it will require the same sense of embodiment, emotion, and feeling as a human. This is non-controversial, especially in the research of today, since as westprog is so fond of putting it the classical approaches to A.I. have failed to generate anything near what they hoped to.

What you need to realize, and I hope you might take the time to learn about it since I find it utterly fascinating, is that such a sense of embodiment, emotion, and feeling, possibly ( and from the viewpoint of people like me, I would say "likely" ) arises from processing information about the world in a certain way. There has been good research on this for the last 10 years, and the more papers I read the more sense it makes to me.
 

That wiki article is incorrect in stating that the theory of embodied cognition is at odds with computationalism.

That is simply false. Anyone can see it is false by just reading any of the proceedings from the myriad of symposiums on machine consciousness in the last 10 years, all of which contain a great many papers that deal explicitly with embodiment.

True, AI is embracing embodiment. However, the logic of that acceptance is that form does relate to function. You may well get it to a level that is considered 'good enough' (subjectively) to perform whatever function. However, there will always be a distinction. It just depends how obvious that will be. Hey, even some humans can fool other humans, otherwise lying wouldn't work. And it's machines that can reveal an incorrect human reading (e.g. polygraph). Studies of psychopathy have found that psychopaths learn to mimic the emotional signals of others in order to manipulate, even though they don't actually feel the emotions they are mimicking. So I'm not disputing it would be possible to make a machine that could fool others into thinking it's human.

What is more exciting than machines doing things that humans can already do, however, is enabling humans to be able to do things that at the moment only machines can do. If you can give a machine the capability to compute the best route from a to b in a nanosecond, speak every single known language, see X-rays, detect lying, carbon date a fossil, why not give that capability to a human? That's where Artificial Intelligence will/could really be powerful imho. The human/AI hybrid.
 
If my statements 'table look-up', 'if/then statements', 'database lookups' incorrectly describe what Watson does either by hard code, by trained neural nets, or some newer techniques, please provide a 2 or 3 sentence statement of how it works. I've had no luck with what I can find out about it in terms I and many others can understand.

First of all, just because code is written using branch instructions does not imply it is a gigantic "if/then" block. Conceptually, the operation of your neurons is the same as an "if/then" statement, yet multiple neurons acting together certainly produce results that you would not consider a gigantic "if/then" block.

So, a trained artificial neural network is NOT a look-up table or if/then block or a set of database lookups. Information flows through an artificial neural network exactly like it does through a biological one -- if you tried to look at an individual neuron and make sense of it, you would have no chance. There isn't some variable in the neural network code that corresponds to anything you could make sense of at the level of the entire system.

Furthermore, you can't train neural networks by programming them. You program the code for the nodes ( neurons ), throw a bunch of them together, and then supply the system with repeated instances of something it is supposed to react to ( more or less ). What the system does when it "learns" is beyond your understanding -- the edge weights and behavior of the individual nodes are now out of anyone's hands. At this point it is genuinely no longer just "programmed" because the programmer has literally no idea whatsoever what any of the edge weights or individual node behavior means.
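
To make that concrete, here is a minimal sketch (plain Python/NumPy, purely my own toy example, nothing to do with Watson's actual code) of a tiny network learning XOR just by being shown the examples over and over. After training it gets the right answers, but the "program" it has learned is two arrays of real numbers -- there is no "if/then" entry or table row anyone could point at.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic example of something no single rule on the raw inputs
# can compute, but a tiny trained network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    # append a constant-1 column so each layer learns a bias weight too
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(size=(3, 4))   # 2 inputs + bias -> 4 hidden units
W2 = rng.normal(size=(5, 1))   # 4 hidden units + bias -> 1 output

for _ in range(20000):
    h = sigmoid(add_bias(X) @ W1)
    out = sigmoid(add_bias(h) @ W2)
    # backpropagation: push the error back through the weights and nudge them
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)
    W2 -= 0.5 * add_bias(h).T @ d_out
    W1 -= 0.5 * add_bias(X).T @ d_h

print(np.round(out, 2).ravel())   # typically settles close to [0, 1, 1, 0]
print(W1)                         # the learned "program": opaque real numbers
```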

I don't know that Watson makes use of neural networks, though. I was just elaborating on one issue I saw with your language.

I can't find much information on the exact software architecture used by Watson, but based on some diagrams ( like the one wikipedia has ) for the high level flow I can guess how I myself would have programmed it to function if my boss told me to use that flow. However it will take some time to get it all together, gimme a bit.

This quote for example I find meaningless "there are systems like Watson that use more abstract elements of consciousness models of thinking in their processing (multiple drafts, global workspace, associative mapping, etc), and implementations of the CERA-CRANIUM cognitive architecture" courtesy of dlorde. What are my words lacking that don't cover this?

"multiple drafts" refers to the idea that instead of an "if/then" type information flow, you have sub-modules that add their little "piece" to a "draft" representing either an incoming percept or an outgoing action ( which can be totally "imagined," meaning the percept doesn't come from external reality and the action doesn't go to external reality but rather come from/go to somewhere internal to the system ) that is then "evaluated" by the "core" module of the system, which in terms of consciousness is our conscious train of thought. This "core" does something with the "draft," adds "feedback" to the draft, and another iteration is performed. Thus this paradigm treats consciousness as a "core" that is "aware" of only the "drafts" being given to it, constantly churning over them, and spitting them back into the lower levels of the system with some feedback.

"global workspace" refers to the idea that instead of an "if/then" information flow, you have sub-modules with access to a shared workspace of information, and when they see something that "interests" them they act on the information and then spit a modified version of it back into the workspace. There is also a "core" module, which again is the "conscious awareness" part, that monitors the shared workspace for anything it finds interesting. When such a thing is there, it becomes aware of it, does something to it, and spits it back into the shared workspace.

Note that in both of these models the "core" isn't necessarily something "core" to the functioning of the system, it could also just be whatever is "consciously aware" of stuff. It is obvious that we humans can do many complex tasks without really being conscious of them, or certainly that the actions in those tasks are not front and center in our conscious train of thought.

The fascinating thing about both of these models is that 1) they work really well with neural networks and 2) they align very well with the way human thought seems to occur.

For example, in the global workspace model, the following information flow makes sense:
0) someone turns around and there is a person there, so photons from that face hit the retina of the observer
1) percepts come in from the world and get placed in the global workspace after being filtered through our visual processing system
2) sub-modules look at the individual percepts and see if they can "aggregate" them into more complex percepts, if they can, they spit those more complex "aggregated" percepts into the workspace for other modules to then use.
3) at some point a sequence of sub-modules has built up a piece of information that the core module might recognize as "a person." Since this is sitting there in the global workspace the core module eventually recognizes that information.
4) the core module reacts to the "face" information somehow, perhaps by injecting information into the shared workspace that the body should respond to the face
5) sub-modules look at this "should respond" action and decompose it into possible actions, like speaking, waving a hand, whatever, and inject information about those smaller actions back into the shared workspace
6) maybe the core sees the possible responses and weights one higher than others, or maybe this is done by some other sub-module, in any case, a winning action is selected and put back into the shared workspace
7) this winning action is further decomposed into atomic actions by other sub-modules and the information is sent to the body, where it reacts properly ( by speaking or waving a hand or whatever ).

Now think about just how close this is to our mechanism of operation -- when you see a person, do you see the individual parts of their body? No, of course not, your conscious awareness just recognizes them as a person in one fell swoop. Where did that aggregation come from? And when you respond to seeing a person, do you consciously decide what shapes your larynx should make as you vocalize? Do you decide how to move your arm to wave? Of course not, you just sort of "do" it, and some sub-levels of your brain that you don't normally pay attention to decompose those high level actions into the atomic steps that your body should take to satisfy them. Do you even decide to "respond" when someone says "hello" or is it sort of automatic? This model accounts for why all of those things happen the way they do.
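
For anyone who wants to see the shape of that flow in code, here is a toy sketch in Python (my own simplification with made-up module names -- not the CERA-CRANIUM architecture or anything from a paper): sub-modules watch a shared workspace, act on items that interest them, and post results back, while a "core" module does the same for the high-level items.

```python
# Toy "global workspace" following steps 0-7 above. Names are illustrative.

def post(ws, item):
    # a module only posts an item once; duplicates are ignored
    if item not in ws:
        ws.append(item)

def aggregator(ws):                                       # steps 2-3
    if "raw retinal percepts" in ws:
        post(ws, "aggregated percept: a person")

def core(ws):                                             # steps 4 and 6
    if "aggregated percept: a person" in ws:
        post(ws, "intention: respond to the person")
    if "candidate action: wave" in ws:
        post(ws, "selected action: wave")

def action_decomposer(ws):                                # steps 5 and 7
    if "intention: respond to the person" in ws:
        post(ws, "candidate action: wave")
        post(ws, "candidate action: say hello")
    if "selected action: wave" in ws:
        post(ws, "motor command: raise arm, oscillate hand")

workspace = ["raw retinal percepts"]                      # step 1

for _ in range(4):                                        # let the modules churn
    for module in (aggregator, core, action_decomposer):
        module(workspace)

print(workspace[-1])   # -> "motor command: raise arm, oscillate hand"
```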

Finally, "associative mapping" refers to associative memory, which is formally known as content-addressable memory. This is memory that instead of addressing it by address, like in computer memory, you give it some piece of "content" and it "gives back" the thing it remembers that is "closest" to the input. Not surprisingly, neural networks use content-addressable memory.

Some fascinating things about such neural networks ( you can google "hopfield network" since that is the most famous kind ) -- the closer the "input" is to the actual "memory" the faster the system will converge on that memory ( sounds like your memory, doesn't it ? ) and the more "memories" the system has been trained on the longer it takes to converge ( sounds like your memory, doesn't it ? ) and finally if the system is loaded with memories past some threshold number ( which is related to how many nodes the network has ) there is a higher chance of either converging on the wrong memory or not converging at all ( sounds like your memory, doesn't it ? ).

One final note, which you hopefully find extremely fascinating, is that you can use content-addressable memory to perform a sort of implicit logical inference by varying the input. For example, when someone asks you "name some things that are red," what is the process your mind uses? I won't poison the question, but what you can do with content-addressable memory is first just throw "red" at it and see what comes back, and then start throwing multiple things at it, like "red vehicle" or "red fruit" or "red painting" and see what comes back, and then go even further with things like "red rothko painting" or "red sunset," and when you put stuff like that in, the system might spit back out memories such as "I saw that red rothko painting at the Dallas museum of art" or "the prettiest sunset I ever saw was when I got that flat tire near El Paso" because those things are actual memories in the system.

I say that is "implicit inference" because logically the state "red car" does have a connection to "that red audi I saw yesterday," and you can easily answer logical questions like "have you ever seen a green audi" by just coming up with zero memories of such a thing. No, I have never seen a green audi -- and I didn't even need to parse the sentence like a computer such as Watson would need to parse it, I can rely on my associative memory to give me the logically correct answer.
 
A computer knows nothing of information because information is meaning that is reliant on subjective sense-based interpretation.

Interpretation is just more information. You are not saying anything that precludes computers from "caring" or "knowing" about the information.

So one human brain-form can't function identically to another human brain-form, but a computer brain-form can function identically to a human brain-form...

No, but not functioning precisely like a brain doesn't mean one can't get the same results with something else. Otherwise two humans couldn't possibly think the same way or reach the same conclusions.

For instance, cars and bikes don't work the same but they can both get you to your office just fine.
 
This quote for example I find meaningless "there are systems like Watson that use more abstract elements of consciousness models of thinking in their processing (multiple drafts, global workspace, associative mapping, etc), and implementations of the CERA-CRANIUM cognitive architecture" courtesy of dlorde. What are my words lacking that don't cover this?
This article covers models of consciousness quite well. Because these are models and deal with relatively high-level abstractions, they can be considered independently of implementation details. Table look-ups, if-then operations, and database look-ups are instances of relatively low-level abstractions such as associative mapping and switching, which are also involved in human cognition using different implementations. Describing functionality in terms of abstractions allows flexibility of implementation.
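
A small illustration of that last point (my own example, not from the article): the same high-level abstraction, an associative mapping, implemented two different ways behind the same interface -- an exact-key hash table and a nearest-match scan. Callers depend only on the abstraction, so the implementation underneath is free to vary.

```python
# Two implementations of one "associative mapping" abstraction (illustrative).

class DictMapping:
    """Exact-key associative mapping backed by a hash table."""
    def __init__(self, pairs):
        self._table = dict(pairs)
    def lookup(self, key):
        return self._table.get(key)

class NearestMatchMapping:
    """Content-addressable mapping: returns the value whose key is closest."""
    def __init__(self, pairs):
        self._pairs = list(pairs)
    def lookup(self, key):
        # closest key by number of differing characters (a crude distance)
        def distance(a, b):
            return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
        best_key, value = min(self._pairs, key=lambda kv: distance(kv[0], key))
        return value

pairs = [("red fruit", "apple"), ("red vehicle", "fire truck")]
for mapping in (DictMapping(pairs), NearestMatchMapping(pairs)):
    print(mapping.lookup("red fruit"))               # both answer "apple"
print(NearestMatchMapping(pairs).lookup("red frut"))  # typo still finds "apple"
```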
 
Actually, I was cherry picking an animation of a ribosome in action that was a compromise between a schematic representation and an ultra-realistic one, that would show how machine-like it functioned. I picked one that was good visually. Of course, it made me smile that the narrator called it a machine and compared its reading of mRNA to reading a computer tape.

I learned about the amazing ribosome machine from the BBC program "The Cell" which has a really nice ribosome animation sequence. In part 3 it talks about how Craig Venter led a team to create a synthetic living cell.

Google "cellular machinery" and you get 25 million hits. A tsunami of that size requires no cherry picking.

Here's a thought experiment about animations like this:

Start with the ribosome animation. Flesh it out so it's as operational as those in a living cell, such that it inputs a complete emulated mRNA and outputs a complete protein. Add all the other machinery to the animation to complete a cell (pick a nerve cell). Do this with enough nerve cells to complete a full brain simulation. Add the following physical (not emulated) peripherals: five senses for input, and a robot body for output. Now explain specifically what's missing that would make it not conscious.

Bringing up Venter is very relevant in highlighting the language usage issue.

Anything that comes out of Venter's lab requires careful reading.
The first thing to do is ignore the media headline language, which usually uses terms people are familiar with to glamorize the work. Claiming to have "created" a cell, for instance.

Let's look at the press release.

Researchers at the J. Craig Venter Institute (JCVI), a not-for-profit genomic research organization, published results today describing the successful construction of the first self-replicating, synthetic bacterial cell. The team synthesized the 1.08 million base pair chromosome of a modified Mycoplasma mycoides genome. The synthetic cell is called Mycoplasma mycoides JCVI-syn1.0 and is the proof of principle that genomes can be designed in the computer, chemically made in the laboratory and transplanted into a recipient cell to produce a new self-replicating cell controlled only by the synthetic genome.
http://www.jcvi.org/cms/press/press...ucted-by-j-craig-venter-institute-researcher/

What you will notice is that they did not "create" a cell anymore than a plant breeder "creates" a new strain of apples. They also did not "create" synthetic DNA anymore than an animal breeder "creates" a cow with more milk.

This is what they said they did.

The JCVI team employed a three stage process using their previously described yeast assembly system to build the genome using the 1,078 cassettes. The first stage involved taking 10 cassettes of DNA at a time to build 110, 10,000 bp segments. In the second stage, these 10,000 bp segments are taken 10 at a time to produce eleven, 100,000 bp segments. In the final step, all 11, 100 kb segments were assembled into the complete synthetic genome in yeast cells and grown as a yeast artificial chromosome.

The complete synthetic M. mycoides genome was isolated from the yeast cell and transplanted into Mycoplasma capricolum recipient cells that have had the genes for its restriction enzyme removed. The synthetic genome DNA was transcribed into messenger RNA, which in turn was translated into new proteins. The M. capricolum genome was either destroyed by M. mycoides restriction enzymes or was lost during cell replication. After two days viable M. mycoides cells, which contained only synthetic DNA, were clearly visible on petri dishes containing bacterial growth medium.

Notice how they conveniently change the language from referring to the receiving cells "Mycoplasma capricolum" to referring to the "synthetic" cells "M. mycoides" as the process proceeds. The only thing they "created" was the name M. mycoides.

No yeast DNA, no "synthetic DNA".
No M. capricolum cell, no "synthetic cell".

They did not "create" a synthetic cell; they did not even "create" synthetic DNA.

They invented a new name M. mycoides by defining a certain sequence of base pairs as M. mycoides.

The fact is that yeast cells create DNA and M. capricolum cells create new cells.

There are no such things as synthetic DNA or synthetic cells.
Just DNA and cells.

We know how they are "created": it's called biology.
If you want to pretend you built an assembly line of machines that create DNA and cells that's a fine metaphor for selling your ideas to people that understand machines, factories and commerce.
However it has nothing to do with biology.

Why is it that those most interested in an artificial "the real thing" haven't a clue as to what "the real thing" actually is?
They appear to think that calling "the real thing" by what's familiar to them makes it more real than "the real thing". No, there is no way around the school of hard knocks. Starting with generalizations is not the way we learn anything. Anyone who actually creates knows the devil is in the detail.
 
OTOH the Chinese Room argument can be applied to a Chinese brain - none of the neurons that make up the system understand Chinese, but the system as a whole is said to. Some rebuttals of the Chinese Room are covered here.



Thanks for that link....have you read it? I suggest you read it again, intently this time, to see why it is not a rebuttal but in fact more of an affirmation.
 
What is more exciting than machines doing things that humans can already do, however, is enabling humans to be able to do things that at the moment only machines can do. If you can give a machine the capability to compute the best route from a to b in a nanosecond, speak every single known language, see X-rays, detect lying, carbon date a fossil, why not give that capability to a human? That's where Artificial Intelligence will/could really be powerful imho. The human/AI hybrid.

I completely agree but I think you can't have one without the other.

For example, how would we interface an arithmetic "chip" with the rest of our brain, such that we could use it by just thinking about some complex arithmetic operation and having the result instantly, like with simple math?

Meaning, I can currently just "say" that 5 X 6 = 30 because it is memorized. I can also just "say" that 5 + 30 = 35, because it is simple. How would I get a chip to tell me 2343535 x 23562762 = ?? in the same thought?

To do this will require a full understanding of just how we think, or at least part of how we think, which implies an understanding of how to get machines to think in the same way. I don't see how else to figure out what the brain/computer interface would be like.
 
We're still talking about a simulation. If you are arguing that a sufficiently detailed simulation of a human is a human, then that's a curious choice of semantics, but not necessarily something I'd disagree with.

So this is a computer program.

Exactly! A perfect simulation of the brain is not the brain. It's a computer. No one has offered an explanation of why it wouldn't be conscious without resorting to metaphysics.

I'm still waiting for !kaggen to explain why she denies the ribosome is a machine.
 
Interpretation is just more information. You are not saying anything that precludes computers from "caring" or "knowing" about the information.



No, but not functioning precisely like a brain doesn't mean one can't get the same results with something else. Otherwise two humans couldn't possibly think the same way or reach the same conclusions.

For instance, cars and bikes don't work the same but they can both get you to your office just fine.


Transport from a to b is just one purpose. But a bicycle and a car fulfil this purpose in significantly different ways. And their form limits, or enables, additional functions they may perform - shelter / exercise / speed/ haulage / and so on.

So if you get machine A to perform function 1 and also get machine B to perform function 1 they might both do it. But now you find machine A also has the capability of performing function 2 - and the difference in form of machine B means it can't perform function 2. So you have to see if it is possible to make an adaptation to machine B to get it to perform function 2. A human is multifunctional. Add more and more layers of complexity (and various functions that each form shift enables/limits) and you'll find that you are having to evolve the entire form of machine A and machine B towards each other if you want them to perform all of the same functions. Until they are identical you will ALWAYS find there is a function (or functions) that one machine performs and the other can't. And, significantly, that each time you use a different form in machine B to machine A to complete the same function, you are introducing difficulties for aligning the further functions.

Now, you may argue that humans vary in abilities from each other too. And this is true. But there are basic underlying forms (and therefore functions) between humans. There must be, otherwise we wouldn't be able to use the taxonomic label 'human'. Just as a bicycle will always be a bicycle and a car a car.

Quite why anyone would want to artificially make a machine identical to a human I'm not sure :D. We already have a machine identical to a human. Rather too many versions for that matter. I guess it might have something to do with curiosity and the urge to be able to fix the ones that break.
 