rocketdodger
Philosopher
I'm stuck on: "This statement is false" - true or false?
This might help:
http://en.wikipedia.org/wiki/Liar_paradox
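If it helps to see the problem mechanically, here's a toy sketch of mine (not from the article): treat the sentence as asserting the negation of whatever truth value you assume for it, and neither assumption comes out consistent.

```python
# "This statement is false": model the sentence as the negation of the
# truth value we assume for it, then check whether the assumption holds up.

def liar(assumed: bool) -> bool:
    """What the sentence asserts, given an assumed truth value for itself."""
    return not assumed

for assumed in (True, False):
    asserted = liar(assumed)
    print(f"assume {assumed}: the sentence then asserts {asserted} "
          f"-> consistent? {asserted == assumed}")
# Both assumptions print "consistent? False", which is the paradox.
```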
I'm stuck on: "This statement is false" - true or false?
Ok, I misspoke then.
However, I have to wonder why, if you are an A.I. programmer, you ever believed that a syntactic chatbot understood anything.
The minute I learned about true reasoning in the first A.I. course I took, I knew that a chatbot was just smoke and mirrors.
But if these syntactic-semantic machines fail the Syntactic BS Detector Test as badly as the modern chatbots do (the modern-day examples of "thinking machines"), then they also are not truly thinking, i.e., they don't understand a damn word of what you are saying.
I don't understand why you consider chatbots as the flagship example of thinking machines.
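Just so we're talking about the same thing, here is a toy sketch of the kind of purely syntactic keyword/template matching I mean by "smoke and mirrors". Every rule and canned response below is made up for illustration; it's not modelled on any particular product, and nothing in it represents meaning.

```python
# A toy ELIZA-style responder: pure surface pattern matching, no model of
# meaning anywhere. The keywords and templates are invented for illustration.

import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
    (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return the first canned template whose keyword pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am stuck on a paradox"))   # Why do you say you are stuck on a paradox?
print(respond("My mother asked about it"))  # Tell me more about your mother.
```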
Have you watched the Discovery Channel in, oh I dunno, the last 20 years?
All of these show that thinking like a human requires being a human. Now they're going to try to map that in broad brush strokes, but it goes all the way down to the micro level, until you get to the fact that to think like a human you have to be a human. The system is interconnected and interdependent. Data is form.
You need to stop trying to "win" and stop jumping to unwarranted conclusions to bolster your side. That's not what analysis is about. Thank you.
Er...then it's not a brain simulation, is it. It's a brain. So instead of AI you have a human brain with some robotic limbs. Behaving in the plastic way that human brains do to input. So what?
I have an auto-immune dis-ease called Ankylosing Spondylitis.
Had it since I was 20.
I am now 42.
What I have learnt from being specifically dis-eased with an auto-immune dis-ease is that my thoughts are profoundly affected by my feeling pain in my joints.
Dis-ease is really the feeling of uniqueness. The feeling and the corresponding thought that you have is something others around you don't have. In the set of all your feelings there are percepts which are unique to your brain.
All doctors are doing is giving you the thought that corresponds to what you feel. The only "thing" is what you feel. Feeling is our direct relationship with reality. It is the only monism. All knowledge is empirical.
Kant had things back to front. The senses don't lie, the thoughts do. There is only sense phenomena and these we experience directly all the time. Thoughts are simply what we invent to differentiate these phenomena. There is no such "thing" as a "horse". There are only real sense phenomena which we experience and we invented a word "horse" to summarize these sense phenomena. Phenomena = feelings = percepts.
The scientific method is simply a way of improving our naming skills. It's not like the names we predict are what we actually find. No, all scientific knowledge is right till it's wrong. Simply because a name contains a set of percepts does not mean it contains all the percepts. If we experience new percepts which make our set distinct then we find a new name for this new set.
I have the thoughts I have because of the dis-ease I feel. I identify myself by the thoughts I have had and continue to have.
Everyone experiences some dis-ease in life. Some chronic, some acute. Even the thought that I have "no feelings" is a type of dis-ease which will affect one's thoughts.
Our characters are the feeling-corresponding thoughts acted out/willed.
Our willing corresponds to a set of thoughts each of which correspond to a set of feelings.
How our thoughts form sets of willing is what psychologists study.
The problem is that these thoughts correspond to sets of percepts. The approach of ignoring certain percepts which have shaped our thoughts is medieval. If we are to have health we need to tackle the percepts which are the cause of the dis-ease.
Health can only originate through the senses. When I take my anti-inflammatories they help stop the sense of pain so I can move from the set of uniqueness towards the set of the collective. I feel better being like others again. Being at-ease again. I get a chance to realize what I want. I realize that the percepts I had, noise for instance, were making me dis-eased and I need to avoid them in future. This shapes my thoughts and willful action about doing this in the future so that I won't have those percepts again which made me feel dis-eased.
This relationship between my percepts, concepts and action is what makes me me (i.e. my "I", my consciousness).
I require dis-ease to establish this distinction. No dis-ease no distinction.
Computers that feel no dis-ease will not have consciousness by definition of consciousness being what makes them feel distinct.
Consciousness is either distinct or else it's meaningless, right Pixy?
That wiki article is incorrect in stating that the theory of embodied cognition is at odds with computationalism.
That is simply false. Anyone can see it is false by just reading any of the proceedings from the myriad symposia on machine consciousness in the last 10 years, all of which contain a great many papers that deal explicitly with embodiment.
If my terms 'table look-up', 'if/then statements', 'database lookups' incorrectly describe what Watson does, whether by hard code, by trained neural nets, or by some newer techniques, please provide a 2 or 3 sentence statement of how it works. I've had no luck with what I can find out about it in terms I and many others can understand.
This quote for example I find meaningless: "there are systems like Watson that use more abstract elements of consciousness models of thinking in their processing (multiple drafts, global workspace, associative mapping, etc), and implementations of the CERA-CRANIUM cognitive architecture" courtesy of dlorde. What are my words lacking such that they don't cover this?
Because there is nothing beyond material, physical reality.
A computer knows nothing of information because information is meaning that is reliant on subjective sense-based interpretation.
So one human brain-form can't function identically to another human brain-form, but a computer brain-form can function identically to a human brain-form...
This article covers models of consciousness quite well. Because these are models and deal with relatively high-level abstractions, they can be considered independently of implementation details. Table look-ups, if-then operations, and database look-ups are instances of relatively low-level abstractions such as associative mapping and switching, which are also involved in human cognition using different implementations. Describing functionality in terms of abstractions allows flexibility of implementation.
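For example, here is a minimal sketch (my own illustration, not a description of how Watson is actually coded) of a single abstraction, an associative mapping from cue to answer, realized by two different low-level mechanisms. The cue/answer pairs are invented; the point is only that the abstraction leaves the implementation open.

```python
# Two implementations of the same abstraction: an associative mapping.
# The cue/answer pairs are made up purely for illustration.

def lookup_via_table(cue: str) -> str:
    """Database/table-style look-up."""
    table = {"largest planet": "Jupiter", "fastest land animal": "cheetah"}
    return table.get(cue, "unknown")

def lookup_via_branches(cue: str) -> str:
    """If/then-style switching: same mapping, different mechanism."""
    if cue == "largest planet":
        return "Jupiter"
    elif cue == "fastest land animal":
        return "cheetah"
    return "unknown"

for cue in ("largest planet", "fastest land animal", "deepest ocean"):
    assert lookup_via_table(cue) == lookup_via_branches(cue)
print("Same abstract mapping, two interchangeable implementations.")
```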
Actually, I was cherry picking an animation of a ribosome in action that was a compromise between a schematic representation and an ultra-realistic one, that would show how machine-like it functioned. I picked one that was good visually. Of course, it made me smile that the narrator called it a machine and compared its reading of mRNA to reading a computer tape.
I learned about the amazing ribosome machine from the BBC program "The Cell", which has a really nice ribosome animation sequence. In part 3 it talks about how Craig Venter led a team to create a synthetic living cell.
Google "cellular machinery" and you get 25 million hits. A tsunami of that size requires no cherry picking.
Here's a thought experiment about animations like this:
Start with the ribosome animation. Flesh it out so it's as operational as those in a living cell, such that it inputs a complete emulated mRNA and outputs a complete protein. Add all the other machinery to the animation to complete a cell (pick a nerve cell). Do this with enough nerve cells to complete a full brain simulation. Add the following physical (not emulated) peripherals: five senses for input, and a robot body for output. Now explain specifically what's missing that would make it not conscious.
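In architectural terms the thought experiment is just a closed sensorimotor loop around the simulation. Here's a bare skeleton of that loop; every name in it (SimulatedBrain, read_senses, drive_robot) is a placeholder I've invented for the sketch, not a real API, and the "brain" step is an empty stand-in for the full emulation.

```python
# Skeleton of the thought experiment: a simulated brain in the middle,
# physical sensors and a robot body at the edges. All names are placeholders.

class SimulatedBrain:
    """Stand-in for a full cell-level brain emulation."""
    def __init__(self):
        self.state = {}  # whatever internal state the emulation carries

    def step(self, sensory_input: dict) -> dict:
        # In the thought experiment this faithfully emulates every nerve cell;
        # here it is just a placeholder transformation.
        self.state["last_input"] = sensory_input
        return {"motor_command": "idle"}

def read_senses() -> dict:
    # Placeholder for the five physical sensor channels.
    return {"vision": None, "hearing": None, "touch": None,
            "smell": None, "taste": None}

def drive_robot(motor_output: dict) -> None:
    # Placeholder for sending commands to the robot body.
    pass

brain = SimulatedBrain()
for _ in range(3):  # in the thought experiment this loop runs continuously
    drive_robot(brain.step(read_senses()))
```

The question in the thought experiment is then: with physical senses feeding in and a physical body acting out, what specifically is missing from the middle box that would make it not conscious?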
http://www.jcvi.org/cms/press/press...ucted-by-j-craig-venter-institute-researcher/
Researchers at the J. Craig Venter Institute (JCVI), a not-for-profit genomic research organization, published results today describing the successful construction of the first self-replicating, synthetic bacterial cell. The team synthesized the 1.08 million base pair chromosome of a modified Mycoplasma mycoides genome. The synthetic cell is called Mycoplasma mycoides JCVI-syn1.0 and is the proof of principle that genomes can be designed in the computer, chemically made in the laboratory and transplanted into a recipient cell to produce a new self-replicating cell controlled only by the synthetic genome.
The JCVI team employed a three stage process using their previously described yeast assembly system to build the genome from the 1,078 cassettes. The first stage involved taking 10 cassettes of DNA at a time to build 110 segments of 10,000 bp. In the second stage, these 10,000 bp segments were taken 10 at a time to produce eleven 100,000 bp segments. In the final step, all eleven 100 kb segments were assembled into the complete synthetic genome in yeast cells and grown as a yeast artificial chromosome.
The complete synthetic M. mycoides genome was isolated from the yeast cell and transplanted into Mycoplasma capricolum recipient cells that had had the genes for their restriction enzyme removed. The synthetic genome DNA was transcribed into messenger RNA, which in turn was translated into new proteins. The M. capricolum genome was either destroyed by M. mycoides restriction enzymes or was lost during cell replication. After two days viable M. mycoides cells, which contained only synthetic DNA, were clearly visible on petri dishes containing bacterial growth medium.
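The staged assembly the release describes is essentially hierarchical grouping: join roughly ten pieces at a time, stage by stage, until one sequence remains. Here is a toy sketch of nothing more than that bookkeeping; the fragments are fake strings, and real cassette sizes, overlaps and the yeast recombination machinery are not modelled.

```python
# Toy version of the staged "10 pieces at a time" assembly described above.
# Only the hierarchical bookkeeping is modelled; the biology is not.

def assemble_in_stages(fragments, group_size=10):
    """Repeatedly join fragments group_size at a time until one remains."""
    stage = 0
    while len(fragments) > 1:
        fragments = ["".join(fragments[i:i + group_size])
                     for i in range(0, len(fragments), group_size)]
        stage += 1
        print(f"stage {stage}: {len(fragments)} assemblies")
    return fragments[0]

cassettes = [f"<cassette {i}>" for i in range(1, 1079)]  # 1,078 pieces, as in the release
genome = assemble_in_stages(cassettes)
# Prints 108, 11, 2, 1; the release reports 110 and 11 because the real
# groupings were not perfectly even, but the hierarchical principle is the same.
```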
Notice how they conveniently change the language from referring to the receiving cells "Mycoplasma capricolum" to referring to the "synthetic" cells "M. mycoides" as the process proceeds. The only thing they "created" was the name M. mycoides.
No yeast DNA, no "synthetic DNA".
No M. capricolum cell, no "synthetic cell".
They did not "create" a synthetic cell; they did not even "create" synthetic DNA.
They invented a new name M. mycoides by defining a certain sequence of base pairs as M. mycoides.
The fact is that yeast cells create DNA and M. capricolum create new cells.
There are no such things as synthetic DNA or synthetic cells.
Just DNA and cells.
We know how they are "created": it's called biology.
If you want to pretend you built an assembly line of machines that create DNA and cells that's a fine metaphor for selling your ideas to people that understand machines, factories and commerce.
However it has nothing to do with biology.
Why is it that those most interested in artificial "the real thing" haven't a clue as to what "the real thing" actually is?
They appear to think that calling "the real thing" by what's familiar to them makes it more real than "the real thing". No, there is no way around the school of hard knocks. Starting with generalizations is not the way we learn anything. Anyone who actually creates knows the devil is in the detail.
OTOH the Chinese Room argument can be applied to a Chinese brain - none of the neurons that make up the system understand Chinese, but the system as a whole is said to. Some rebuttals of the Chinese Room are covered here.
What is more exciting than machines doing things that humans can already do, however, is enabling humans to be able to do things that at the moment only machines can do. If you can give a machine the capability to compute the best route from a to b in a nanosecond, speak every single known language, see X-rays, detect lying, carbon date a fossil, why not give that capability to a human? That's where Artificial Intelligence will/could really be powerful imho. The human/AI hybrid.
We're still talking about a simulation. If you are arguing that a sufficiently detailed simulation of a human is a human, then that's a curious choice of semantics, but not necessarily something I'd disagree with.
So this is a computer program.
Interpretation is just more information. You are not saying anything that precludes computers from "caring" or "knowing" about the information.
No, but not functioning precisely like a brain doesn't mean one can't get the same results with something else. Otherwise two humans couldn't possibly think the same way or reach the same conclusions.
For instance, cars and bikes don't work the same but they can both get you to your office just fine.