ELIZA's Granddaughter -- Ellie

Gord_in_Toronto

Remember ELIZA? (I do but I'm older than you. ;))

She has a granddaughter: http://www.theatlantic.com/technolo...rapy-from-a-computerized-psychologist/371552/

Ellie is an avatar, a virtual therapist developed at USC with funding from DARPA, the Defense Department's advanced research center. And "people love interacting with her," says Louis-Philippe Morency, a research assistant professor at USC's Institute for Creative Technologies. Morency has been working with Ellie—part of the university's SimSensei Project—for several years now. In that time, he has helped to build a program capable of reading and responding to human emotion in real time. And capable, more to the point, of offering those responses via a human-like animation.

"People love interacting with her" is quite interesting, as that's what was said about ELIZA too. The Wikipedia article is worth reading to see how those early reactions presaged our current man-computer interactions.

I wonder if Ellie's advice is any better? :cool:
 
I wonder if Ellie's advice is any better? :cool:

As I recall, Eliza didn't offer any advice. The computer-generated statements were along the lines of...

Ahh... Please continue. [General response to a statement.]
What do you think? [General response to a question.]
But you are not sure you <X>? [In response to "I think <X>".]
You are being a bit negative. [In response to "No".]

... and so on.

Actually, I'm cheating a little. Instead of typing these responses from memory, I visited an Eliza chatbot just now to get them right.

http://nlp-addiction.com/eliza
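
For the curious, the mechanism behind those canned replies is just keyword matching against a small table of templates, plus a crude first-/second-person swap. Here's a toy sketch in Python (my own guess at the general technique; the rule set is made up and has nothing to do with Weizenbaum's actual script):

import random
import re

# Toy ELIZA-style rules, loosely imitating the canned replies listed above.
# Each entry is (pattern, response template); "{0}" gets filled with whatever
# the pattern captured, after a crude first-/second-person swap ("reflection").
RULES = [
    (re.compile(r"\bi think i (.+)", re.I), "But you are not sure you {0}?"),
    (re.compile(r"^no\b", re.I),            "You are being a bit negative."),
    (re.compile(r"\?\s*$"),                 "What do you think?"),
]
DEFAULTS = ["Ahh... Please continue.", "I see.", "Tell me more..."]
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment):
    # Strip trailing punctuation, then swap first and second person words.
    words = fragment.strip(" .!?").lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return random.choice(DEFAULTS)

print(respond("I think I need a vacation."))  # But you are not sure you need a vacation?
print(respond("No."))                         # You are being a bit negative.
print(respond("What should I do?"))           # What do you think?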
 
Remember ELIZA? (I do but I'm older than you. ;))

A version of Eliza comes with Mac OS X. To try it, open a terminal window, type "emacs", and press Enter. Then press Escape followed by 'x', type "doctor", and press Enter.

(Similar instructions work for Linux, which is what I actually use, but I suspect there are more OS X users here.)
 
As I recall, Eliza didn't offer any advice.

From the example on the Wiki page:

I have a terrible headache.

You should see a doctor of medicine. I am a psychiatrist.

Now that seems like pretty damn good advice. :cool:

Table lookup or not. :p
 
From the example on the Wiki page:

I don't get that response. Maybe there are several different versions of Eliza?

Here's a conversation I just tried now...
> Hello. My name is Eliza. How may I help you?
* I have a terrible headache.
> Ahh. Please continue...
* My penguin ate my neighbor's dog.
> I see.
* Then a dragon crushed my car.
> Say, do you have any psychological problems?
* I'm told that it was an imaginary dragon.
> Do you believe it is normal to be told it were an imaginary dragon?
* I don't know, but it left scorch-marks on the side of my garage.
> Why don't you know, but it left scorch-marks on the side of your garage?

From: http://nlp-addiction.com/eliza

Here's a conversation with a different Eliza...
ELIZA: How do you do. Please tell me your problem.
YOU: I have a terrible headache.
ELIZA: That's quite interesting.
YOU: My penguin ate my neighbors dog.
ELIZA: Does that suggest anything else which belongs to you ?
YOU: A dragon crushed my car.
ELIZA: Your car ?
YOU: Yes, my car.
ELIZA: You seem to be quite positive.
YOU: The experience wasn't a positive one.
ELIZA: Earlier you talked about your car.
YOU: An imaginary dragon crushed it.
ELIZA: But your neighbors dog.
YOU: No, my penguin ate that.
ELIZA: Why not ?

http://www.masswerk.at/elizabot
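
That "Earlier you talked about your car" line is the bit I find most ELIZA-ish: phrases following "my ..." get stashed away and replayed later, when nothing else matches. A toy sketch of the idea (assumptions of mine, not the masswerk.at code):

import re
from collections import deque

# Toy version of ELIZA's "memory" trick: anything following "my ..." gets
# queued up and replayed later, when no other rule fires.
memory = deque()
MY_PATTERN = re.compile(r"\bmy ([a-z' ]+)", re.I)

def respond(line):
    match = MY_PATTERN.search(line)
    if match:
        topic = match.group(1).strip()
        memory.append(topic)                 # remember the user's "my ..." phrase
        return "Your " + topic + "?"
    if memory:                               # nothing matched: fall back on memory
        return "Earlier you talked about your " + memory.popleft() + "."
    return "Please go on."

print(respond("A dragon crushed my car."))          # Your car?
print(respond("The experience wasn't positive."))   # Earlier you talked about your car.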
 
Eliza is still available when you install packages such as "blassic" or "kforth".

So far I haven't seen any decent AI chatbots online.
 
Interesting...
> Hello. My name is Eliza. How may I help you?
* Ahhhhhhhhhhhhhhhhhhhhhh!!!!!!!!
> Ahh. Please continue...
* ****! ****!!! ****!!!!!!!
> Say, do you have any psychological problems?
* Do you?
> Oh... ?
* Yes.....
> Are you Sure?
* Hmmmmmm......
> Tell me more...
* What do you think?
> We were discussing you, not me.
* Insight!!!!!! I neeed mother-loving insight!!!!!!
> Come, come, elucidate your thoughts.
* That's you're job!!!!!
> Perhaps you would like to be job?
* Curse God and die!!!!!!!!!!
> I'm not sure I understand you fully.

Now tell me about your motherboard.
 
I seem to remember a paper about someone connecting an ELIZA program with a program emulating a psychotic personality, with somewhat amusing results.

I could not find it with a quick Google, but I did turn up some info on PARRY in the Wikipedia article on the Turing test (Turing_test#ELIZA_and_PARRY).

Possibly my human analog-based brain was just retrieving a mixed up memory of PARRY. :o
 
Here are ELIZA and PARRY together: "Eliza and Parry: When the machines talked to each other"

I've found PARRY's source code:


Does anyone want to take on that challenge?

Yech. I downloaded it, and as far as I can tell, *most* of it is in assembler, and it's almost entirely uncontaminated by comments.

The 'readme' implies that the "pmem*" files are Lisp, but what do I make of "THINKDINERCAMELHUMPSTEARSWISHYWASHYFOODSSINGEAPPLE", which is the entire content of "pmem6"?

(okay, I'm thinking of changing my sig to 'THINKDINERCAMELHUMPSTEARSWISHYWASHYFOODSSINGEAPPLE,' so I really shouldn't complain. Or maybe I'll get a dog, and name him that.)

I've done a fair amount of code porting over the years, but this one would be a real struggle.
 
I agree. I once downloaded it and looked at some of its source code; after doing so, I despaired of porting it and thought of first trying to get it running on an emulator instead. I've found a page on DEC PDP-10 Emulation, and that might be helpful.

pmem6? It looks like some data file, though a rather curiously short one. This seems like a reasonable parsing of it:

THINK DINER CAMEL HUMPS TEARS WISHY WASHY FOODS SINGE APPLE

A collection of five-letter words. I think it's run together because the rest of the software reads it in fixed five-character chunks, so no delimiters or starting indices are needed to parse it.
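
If that guess is right, reading the file is just a fixed-width split. A quick throwaway check (my own snippet, nothing to do with the original PARRY sources):

# Split the pmem6 blob into fixed five-character chunks.
blob = "THINKDINERCAMELHUMPSTEARSWISHYWASHYFOODSSINGEAPPLE"
print([blob[i:i + 5] for i in range(0, len(blob), 5)])
# ['THINK', 'DINER', 'CAMEL', 'HUMPS', 'TEARS',
#  'WISHY', 'WASHY', 'FOODS', 'SINGE', 'APPLE']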
 
Until somebody ports it in a fully functioning manner that can be tested, I'm going to remain dubious about the authenticity of that alleged conversation between ELIZA and PARRY. PARRY's responses show a grasp of grammar and an ability to form logical, sensible replies that I have yet to see even a modern chatbot match. Basically? They sound like something a human aiming at humor would type.

I'm also wondering what, specifically, PARRY's creator(s) coded in order to produce what they would call a "paranoid schizophrenic" bot, because I can't find evidence of it in the quoted chat either.
 
Remember ELIZA? (I do but I'm older than you. ;))

She has a granddaughter: http://www.theatlantic.com/technolo...rapy-from-a-computerized-psychologist/371552/

"people love interacting with her" is quite interesting as that's what was said about ELIZA. The Wikipedia article is worth reading to see how these reactions presaged our current man-computer interactions.

I wonder if Ellie's advice is any better? :cool:

If you look at the video in that article, and then read "human-like animation," you see where the disconnect is.

In general, it is probably a safe bet that any project that gets government funding, especially from DARPA, and also takes place at a public university will be lackluster at best.
 
So. Have you run these conversations past Ellie? It might answer the OP's question.

I don't think there is an online version of the Ellie chatbot mentioned in the OP that I could use to compare it to.

ETA: Plus, Ellie isn't based on a text interface, so it wouldn't be a fair comparison, especially since Ellie's responses are influenced by non-verbal cues in addition to the actual words spoken.
 
I don't think there is an online version of the Ellie chatbot mentioned in the OP that I could use to compare it to.

ETA: Plus, Ellie isn't based on a text interface, so it wouldn't be a fair comparison, especially since Ellie's responses are influenced by non-verbal cues in addition to the actual words spoken.
Apples are red, sometimes green, and have a thin skin that is difficult to peel off. Oranges, on the other hand, tend to have thick skins that are easier to peel off, and the fruit is separated into segments.

I was obviously comparing apples to oranges. :o
 
I was obviously comparing apples to oranges. :o

You were doing it the easy way.

Apples are computers that embody industrial design principles while sacrificing compatibility with more mainstream software environments. Oranges are colors with wavelengths around 600 nm.
 
