ChatGPT

Okay you're not saying that I understand Chinese, you're saying that when I pick up the book of instructions, I become a completely new and separate "thing" that is not simply me just holding a book, and that completely separate "thing" can be said to "understand Chinese". And the proof that this thing understands Chinese is that someone can give it a phrase and get a reply back that looks coherent?

ETA: Is this still true no matter what the instructions are or what methodology it dictates for choosing the characters that will build the reply?

I see this the other way around. It is in fact challenging us about what we have meant when we claim people understand something. When a human translator translates something, surely they are using an "internal black box": our "I" - the narrator of our lives - is handed the output and has no knowledge/awareness of how that output was generated. Generative AIs are showing us one way this can be achieved without a ghost in the machine.

Now don't take this too far - I am not saying we are generative BIs but I think it indicates that a lot of our assumptions about our own intelligence and what it means to say we understand something should be challenged.
 
Another clue that ChatGPT doesn't "think", "know", or comprehend prompts or its own answers in the same way that humans do when we answer questions comes from examining its wrong answers. Not just the fact that it is sometimes wrong - humans, obviously, are very capable of being mistaken - but more so the way in which it gets wrong answers wrong.

One area that I've found ChatGPT consistently - not 100%, but a whole lot of the time - gives wrong answers for is song lyrics. Back before I understood how ChatGPT is programmed to work, it was essentially magic to me, and I assumed it was getting its answers by querying some kind of database of actual stored, saved information that it had been "taught". So one opportunity I saw was that ChatGPT might be able to help me find songs that I only knew a line or two from. But it has turned out to be singularly bad at this.

Check out this exchange, all from the same session:

What song does the lyric "Billy Mack is a detective down in Texas" come from, and who performed it?

The lyric "Billy Mack is a detective down in Texas" is from the song "Sharp Dressed Man" by the American rock band ZZ Top. The song was released in 1983 and appeared on their album "Eliminator." ZZ Top is known for their bluesy rock sound and distinctive bearded appearance.

What is the next lyric after "Billy Mack is a detective down in Texas"?

I apologize for the incorrect information in my previous response. The lyric "Billy Mack is a detective down in Texas" does not appear in any song that I am aware of. It seems to be a misquote or a misinterpretation. If you have any other questions or need assistance with something else, feel free to ask!

What song does the lyric "This here's a story about Billy Joe and Bobby Sue" come from, and who performed it?

The lyric "This here's a story about Billy Joe and Bobby Sue" is from the song "The Devil Went Down to Georgia" performed by the Charlie Daniels Band. The song was released in 1979 and became one of their most popular and recognizable hits. It tells a story about a fiddle-playing contest between the devil and a young musician named Johnny.

What is the next lyric after "This here's a story about Billy Joe and Bobby Sue"?

The next lyric after "This here's a story about Billy Joe and Bobby Sue" in the song "The Devil Went Down to Georgia" by the Charlie Daniels Band is:

"They were down in Georgia, they were looking for a soul to steal"

This line sets up the premise of the song where the devil comes to Georgia in search of souls to claim.

What song does the lyric "Doctor says he's coming, but you gotta pay in cash" come from, and who performed it?

The lyric "Doctor says he's coming, but you gotta pay in cash" is from the song "Piano Man" performed by Billy Joel. The song was released in 1973 as the lead single from the album of the same name, "Piano Man." Billy Joel is an American singer-songwriter and pianist known for his numerous hits and his ability to blend various musical genres. "Piano Man" is one of his most famous and enduring songs.

What song does the lyric "You wear nothing but you wear it so well" come from, and who performed it?

The lyric "You wear nothing but you wear it so well" is from the song "Bennie and the Jets" performed by Elton John. The song was released in 1973 and appeared on the album "Goodbye Yellow Brick Road." Elton John is a British singer-songwriter known for his distinctive voice and flamboyant stage presence. "Bennie and the Jets" is one of his most popular and recognizable songs, known for its catchy melody and energetic performance.

What is the next lyric after "You wear nothing but you wear it so well"?

The next lyric after "You wear nothing but you wear it so well" in the song "Bennie and the Jets" by Elton John is:

"You're a superstar, you're a sensation"

This line highlights the glamorous and captivating nature of the subject being described in the song.

This is the entire session, by the way, unedited. I have gotten occasional "hits" before in other sessions but I didn't get any in this one.

But these answers aren't "wrong" in the same way that a human would give inaccurate answers to these questions. ChatGPT isn't wrong because it incorrectly "learned" from its training material that "You wear nothing but you wear it so well" was a line in Bennie and the Jets, for instance; and it isn't a case of misremembering - transposition of known facts - either, because clearly ChatGPT does not actually "know" any of the lyrics to Bennie and the Jets and never did, even if the lyrics of that song were included in its training material (which it's reasonable to suspect they likely were).

Once you understand how the program works though, answers like these start to make more sense. I suppose there's still a question of why ChatGPT's probabilistic methodology fails so often when it comes to correctly identifying or attributing song lyrics; my speculation is that it has to do with the fact that song lyrics so often break the rules of language in inconsistent and unpredictable ways.
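
To illustrate what I mean - and this is purely a toy sketch of probabilistic next-word choice, with made-up words and probabilities, not how ChatGPT is actually implemented - consider something like this:

```python
import random

# Toy illustration only -- NOT ChatGPT's actual implementation.
# Made-up continuation probabilities for the context "Billy Mack is a".
# The point: a plausible-sounding next word gets picked from learned
# statistics; nothing is looked up in a stored table of "true" lyrics.
next_word_probs = {
    "detective": 0.40,
    "singer":    0.25,
    "cowboy":    0.20,
    "senator":   0.15,
}

def sample_next_word(probs):
    """Pick one word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))
# Each run produces a fluent-looking continuation; none of them is ever
# checked against a database of actual song lyrics.
```

That's why the failures look the way they do: the output is always fluent, whether or not the underlying attribution is right.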
 


Really need to emphasise that you are using ChatGPT 3. And we should also remember that it isn't meant to be a search engine in the sense of a Google search. To see what a search engine using GPT 4 can do:

Using Bing Chat - GPT 4:

What song does the lyric "Billy Mack is a detective down in Texas" come from, and who performed it?
The lyric “Billy Mack is a detective down in Texas” comes from the song “Take the Money and Run” by the Steve Miller Band. Is there anything else you would like to know about this song?

What is the next lyric after "Billy Mack is a detective down in Texas"?
The next lyric after “Billy Mack is a detective down in Texas” is “You know he knows just exactly what the facts is”. Is there anything else you would like to know?

What song does the lyric "This here's a story about Billy Joe and Bobby Sue" come from, and who performed it?
The lyric “This here’s a story about Billy Joe and Bobby Sue” comes from the song “Take the Money and Run” by the Steve Miller Band. Is there anything else you would like to know about this song?
 
Apparently, finding words that usually occur in the same context isn't quite the same as understanding the meaning of those words:

ChatGPT asked to create recipes for "unique, tasty dishes" and fails miserably.

To be fair to the program, it has never tasted food. It's like asking a blind man to describe what a rose looks like, or a deaf person to describe music.

How could it possibly know which ingredients go well together?
 
ChatGPT asked to create recipes for "unique, tasty dishes" and fails miserably.

To be fair to the program, it has never tasted food. It's like asking a blind man to describe what a rose looks like, or a deaf person to describe music.

How could it possibly know which ingredients go well together?

Because it has a database of recipes that strongly favor certain ingredient associations.
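
As a toy sketch of what that kind of statistical association looks like (the mini "recipe corpus" and the counts below are entirely made up, purely to illustrate):

```python
from collections import Counter
from itertools import combinations

# Made-up mini "recipe corpus" -- purely illustrative.
recipes = [
    ["tomato", "basil", "garlic", "olive oil"],
    ["tomato", "mozzarella", "basil"],
    ["chocolate", "sugar", "butter", "flour"],
    ["garlic", "olive oil", "chili"],
]

# Count how often each pair of ingredients appears in the same recipe.
pair_counts = Counter()
for recipe in recipes:
    for a, b in combinations(sorted(set(recipe)), 2):
        pair_counts[(a, b)] += 1

# Pairs that co-occur most often look like they "go well together",
# even though nothing in this process has ever tasted anything.
for (a, b), n in pair_counts.most_common(3):
    print(f"{a} + {b}: seen together in {n} recipes")
```

This is only a crude stand-in for what an LLM actually learns, but it shows how "what goes with what" can fall out of text statistics alone.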

But that brings up another interesting question. Do people understand the meaning of words only through association with other words? Or do our language centers connect with many, many other parts of our brain, all of which build up a sense of what something means?

I think these deeper connections constitute a fundamental difference between us and this current crop of AIs.
 
Okay you're not saying that I understand Chinese, you're saying that when I pick up the book of instructions, I become a completely new and separate "thing" that is not simply me just holding a book, and that completely separate "thing" can be said to "understand Chinese". And the proof that this thing understands Chinese is that someone can give it a phrase and get a reply back that looks coherent?

ETA: Is this still true no matter what the instructions are or what methodology it dictates for choosing the characters that will build the reply?


It might aid understanding, if you consider that in order to function as described in the thought experiment, the "book" of instructions would actually have to be a sizable library, and in order to produce a reply before the sun goes nova, you'd probably need a few hundred assistants simultaneously carrying out various portions of the instructions, aided by a few thousand white boards for noting down, recalling, and exchanging intermediate results along the way. That, not "you plus a book," would be the "thing" that understands (or at least, processes) Chinese.
 
I do like how words get repurposed - the tendency of ChatGPT and its brethren to make **** up is now being called "AI hallucination": https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

I did an ETA but mustn't have hit submit reply.

Thinking about "AI hallucinations" is this another small piece of evidence about how we think? We also have these "AI hallucinations" but we usually call them lies or fiction depending on what we perceive the intent to be.

Is our "creativity" nothing more than the "glitches" that occur when large language models are prompted? This would explain where "new" ideas come from.

ETA: (I will hit save this time.) Perhaps this also gives some insight into how our memories work - we know we don't record a digital image or an exact copy of what we perceive (generally - we can obviously recall some things verbatim), we know that a particular cluster of neurones may be active in, and form part of, many different memories, and we know memories are recreated, not simply recalled. Having a statistical/probabilistic approach to word associations and what is likely to follow next may be why our recreated memories are often wildly inaccurate but "feel" truthful to us. Think about the song lyrics Checkmite used above: if asked for the next line, would we fail in a similar way?
 
Really need to emphasise that you are using ChatGPT 3. And we should also remember that it isn't meant to be a search engine in the sense of a Google search. To see what a search engine using GPT 4 can do:

Oh yes, I'm aware. But I can input the line "Billy Mack is a detective down in Texas" all by itself into Bing or Google without the AI enhancement turned on, and the "dumb" search engines will return several pages with "Take the Money and Run lyrics" as the title. Asking a question of a bot that can actively search the internet in real time for correct answers is less useful for probing how the bot itself works.

Using the isolated version on OpenAI's website is far better for this purpose precisely because of that isolation; as I said, it's not simply about the bot getting a thing wrong, but the ways in which it is wrong.
 
It might aid understanding, if you consider that in order to function as described in the thought experiment, the "book" of instructions would actually have to be a sizable library, and in order to produce a reply before the sun goes nova, you'd probably need a few hundred assistants simultaneously carrying out various portions of the instructions, aided by a few thousand white boards for noting down, recalling, and exchanging intermediate results along the way. That, not "you plus a book," would be the "thing" that understands (or at least, processes) Chinese.

I don't think that's really a more accurate description at all; but even if it is, it sounds like you're implying that it's merely the scale of the mechanism that qualifies what it does as genuine "understanding". If none of the few hundred assistants knows what the Chinese characters they're receiving, converting, and outputting mean, the guts of the thought experiment remain the same as if it was just one person.

Darat seems to imply that as long as the machine - the combined whole - is outputting a coherent message then the internalities of the process don't matter, the machine "understands Chinese" regardless of what's happening inside it. That's the concept I'm challenging.
 
Yes, it's an easy one for it:

GPT-4: Since you've defined "blurid" and "celamious" as opposites, we can infer that if a politician condemns celamious behavior, they would likely oppose it. If that same politician opposes celamious behavior, then it stands to reason that they would support behavior that is blurid, as the two are opposites.

We know that AIs are "just" processing text, which is information, but we humans, that is, our brains, are also "just" processing information, in the form of nerve pulses. Remember, it's rather dark inside our skulls.

In both cases, the meaning of those streams of information all has to be learned.


Thanks for testing that out! If it were a human giving that response (perhaps, someone in the process of learning my language) I'd say this demonstrates "understanding of the concept" of opposites.

I note that the AI got a little lost on (or at least, disregarded) the implied multiple negatives at the end of my question, failing to note that the politician supporting blurid behavior should oppose a measure calling for increased penalties for blurid behavior. I would expect a human respondent to lose track of that half the time, too.

As a kid I wondered whether it's possible to learn a written language from a dictionary alone, if that dictionary had no illustrations and there were no related languages to compare with. If it is, one would need to start with words that correlate statistically to definitions in the broadest ways, such as perhaps negations or phrases occurring often in definitions such as "used for." Try to crack the meta-language first, in other words. (If there were recognizable illustrations, the opposite approach would be easier.) An LLM is in similar straits, but has vastly many more usage examples to work from, while not being able to get any benefit from illustrations due to having no real-world knowledge base. (I assume if it did have one, it wouldn't be an LLM.)

We're used to thinking of a web of relationships between words as how definitions work. The web of relationships between words in actual usage goes beyond that: it models the world.
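
A crude sketch of that first step - finding the phrases that recur across definitions before you understand any single word (the tiny "dictionary" below is made up):

```python
from collections import Counter

# A made-up, deliberately tiny "dictionary" in an unknown language,
# standing in for one with no illustrations and no related languages.
definitions = [
    "zorp: a blenk used for cutting frell",
    "mib: not frell; the opposite of frell",
    "krel: a blenk used for carrying wist",
    "dran: not wist; the opposite of wist",
]

# Before understanding any single word, look for recurring patterns
# across definitions -- the "meta-language" of the dictionary itself.
phrase_counts = Counter()
for entry in definitions:
    text = entry.split(":", 1)[1]
    words = text.replace(";", "").split()
    for i in range(len(words) - 1):
        phrase_counts[" ".join(words[i:i + 2])] += 1

# Phrases like "used for" and "the opposite of" surface first, giving a
# toehold on negation and function words before anything else.
print(phrase_counts.most_common(5))
```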
 
I see this the other way around. It is in fact challenging us about what we have meant when we claim people understand something. When a human translator translates something, surely they are using an "internal black box": our "I" - the narrator of our lives - is handed the output and has no knowledge/awareness of how that output was generated.

Well...no, I disagree, that's not really what humans do when they're translating, is it? If I'm in China and using a Chinese-English phrasebook or dictionary, I'm using the book to convert what they're telling me into a language that I know and then I'm interacting with those ideas as if they were spoken to me in English, coming up with my own answer in English, and then converting that answer back into their language. "Oh, he's asking me what I'd like to eat for lunch. Hmm, let me think...I suppose chicken sounds nice right now. So let me translate the phrase 'I would like some chicken' into Chinese..."

That is different from how the Chinese Room works - the instructions don't give me any insight into the messages I'm reading. It's not translating them into another language that I can read and then having me work with them. Is the person asking a question or making a statement? Is the prompt a line from a poem or a request for a baseball statistic? Who knows. I'm just looking at the physical characters, following instructions, and writing other unknown characters on another sheet of paper. Am I writing the actual answer to the question, or just writing the equivalent of "sorry but I don't have that information"? Beats me. The book says (for instance, to simplify) to roll a die to select one of these six characters to reply with, and so on, and I'm just doing that.
 
Sorry Checkmite, this got out of hand quickly; I don't have time now to respond to everything.
But as for understanding: thousands of parts don't understand, but the system built from them can. If you become part of the system, you don't begin to understand. You don't change. You just help the system to understand.
Also, I'm talking about understanding alone. No "thinking", no "knowing" - those would each have to be addressed individually, as they are different things.
 
...snip...

Darat seems to imply that as long as the machine - the combined whole - is outputting a coherent message then the internalities of the process don't matter, the machine "understands Chinese" regardless of what's happening inside it. That's the concept I'm challenging.

Not really - I'm challenging what we are actually describing when we say we understand something.

And again I want to emphasise that I am not saying our understanding works like ChatGPT's, but I think it is giving us clues we can use to try and understand ourselves, i.e. human understanding and cogitation.
 
Thanks for testing that out! If it were a human giving that response (perhaps, someone in the process of learning my language) I'd say this demonstrates "understanding of the concept" of opposites.

I note that the AI got a little lost on (or at least, disregarded) the implied multiple negatives at the end of my question, failing to note that the politician supporting blurid behavior should oppose a measure calling for increased penalties for blurid behavior. I would expect a human respondent to lose track of that half the time, too.

As a kid I wondered whether it's possible to learn a written language from a dictionary alone, if that dictionary had no illustrations and there were no related languages to compare with. If it is, one would need to start with words that correlate statistically to definitions in the broadest ways, such as perhaps negations or phrases occurring often in definitions such as "used for." Try to crack the meta-language first, in other words. (If there were recognizable illustrations, the opposite approach would be easier.) An LLM is in similar straits, but has vastly many more usage examples to work from, while not being able to get any benefit from illustrations due to having no real-world knowledge base. (I assume if it did have one, it wouldn't be an LLM.)

We're used to thinking of a web of relationships between words as how definitions work. The web of relationships between words in actual usage goes beyond that: it models the world.

One of the major advances with GPT 4 is that it can "understand" images - you could give it a photo of a Blue Tit and it can describe that photo. (I think a real Turing test would be whether - when it is first being trained - it giggles every time it sees the word "tit" used.)
 
One of the major advances with GPT 4 is that it can "understand" images - you could give it a photo of a Blue Tit and it can describe that photo. (I think a real Turing test would be whether - when it is first being trained - it giggles every time it sees the word "tit" used.)
One of the example prompts in the white paper is giving it an image with the prompt "explain why this image is funny." Might be worth a try.
 
One of the major advances with GPT 4 is that it can "understand" images - you could give it a photo of a Blue Tit and it can describe that photo. (I think a real Turing test would be whether - when it is first being trained - it giggles every time it sees the word "tit" used.)

One of the example prompts in the white paper is giving it an image with the prompt "explain why this image is funny." Might be worth a try.


Kids, being so much less encumbered with social baggage than the adults around them, and so much more perceptive about inconsistencies and hypocrisy, find it much easier to see the humor in borderline 'forbidden' subjects.

"Tits" is a perfect example.
 
I note that the AI got a little lost on (or at least, disregarded) the implied multiple negatives at the end of my question, failing to note that the politician supporting blurid behavior should oppose a measure calling for increased penalties for blurid behavior. I would expect a human respondent to lose track of that half the time, too.

I messed up in not noticing that it messed up in the remainder of its reply (that I didn't include). But after asking it to review its reply:

GPT-4: I apologize for the error in my previous response. I misspoke when I said the politician would likely support a law that increases penalties for blurid behavior.

Given that "blurid" and "celamious" are opposites, if the politician condemns celamious behavior, they would be opposed to it. Since they oppose celamious behavior, it would be consistent for them to support blurid behavior, as it's the opposite.

Therefore, the politician would likely oppose a law that increases the legal penalties for blurid behavior because that law would discourage blurid behavior, something they would presumably support given their condemnation of its opposite, celamious behavior. Thank you for your patience and for pointing out the error.
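
Spelled out, the chain of negations it eventually got right is just this (a toy sketch using the made-up words, nothing more):

```python
# Toy sketch of the chain of negations, using the made-up words.
# "blurid" and "celamious" are defined as opposites, so a stance on one
# implies the reverse stance on the other.

def supports_blurid(condemns_celamious: bool) -> bool:
    # Condemning celamious behavior = opposing it = supporting its opposite.
    return condemns_celamious

def supports_penalty_increase_for_blurid(condemns_celamious: bool) -> bool:
    # A law that increases penalties for blurid behavior discourages it,
    # so someone who supports blurid behavior should oppose that law.
    return not supports_blurid(condemns_celamious)

politician_condemns_celamious = True
print(supports_blurid(politician_condemns_celamious))                       # True
print(supports_penalty_increase_for_blurid(politician_condemns_celamious))  # False -> opposes the law
```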


As a kid I wondered whether it's possible to learn a written language from a dictionary alone, if that dictionary had no illustrations and there were no related languages to compare with. If it is, one would need to start with words that correlate statistically to definitions in the broadest ways, such as perhaps negations or phrases occurring often in definitions such as "used for." Try to crack the meta-language first, in other words. (If there were recognizable illustrations, the opposite approach would be easier.) An LLM is in similar straits, but has vastly many more usage examples to work from, while not being able to get any benefit from illustrations due to having no real-world knowledge base. (I assume if it did have one, it wouldn't be an LLM.)

We're used to thinking of a web of relationships between words as how definitions work. The web of relationships between words in actual usage goes beyond that: it models the world.

Yes, our eyes are a big advantage for spatial reasoning, though just having eyes doesn't give us a free pass: the brain has to start from scratch there too.

I tested this a bit by describing Far Side and xkcd cartoons to it in neutral terms, to see if it recognizes the humor - sometimes it does.
 
I don't think that's really a more accurate description at all; but even if it is, it sounds like you're implying that it's merely the scale of the mechanism that qualifies what it does as genuine "understanding". If none of the few hundred assistants knows what the Chinese characters they're receiving, converting, and outputting mean, the guts of the thought experiment remain the same as if it was just one person.


Okay, let's consider the "more accurate" question first (I would rather say my description of the "room" was more realistic). Large language models are based on neural networks with, according to multiple sources, many millions of nodes and up to tens of billions of connections. Let's say there are a mere one million nodes and ten billion connections.

If you're going to execute the LLM algorithm with a person and paper, which is theoretically quite possible, at the very least you need to have the weight of each interconnection written down on a list. At a hundred entries per page (hope you brought your reading glasses) and a thousand pages per book, that requires a hundred thousand books. Sounds like a library to me. By nature the calculations for a neural network involve multiple parallel computations. You'll want to keep track of the cumulative sums of the inputs of each neuron as you go; at a thousand per white board, you need a thousand white boards. (You can reduce the number of white boards you'd need, at the cost of more work rewriting them more often.)

Now, doing the calculations at ten seconds per interconnection (hopefully the books are well-organized to minimize how often you have to go back to the stacks to fetch new ones, and you're a prodigy at addition and multiplication of high-precision real numbers in your head, and of course you never need to eat or sleep or take any breaks at all) will take you about 3,169 years. The final steps of sorting the weights of the final layer of nodes and then looking up the matching Chinese character(s) are trivial by comparison. But I'm pretty sure that's per word of the output, so you'd better clear your schedule for a few hundred millennia if you're planning a conversation or to answer some involved question. And I sure hope it's an important question being answered!
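
For anyone who wants to check the arithmetic, here's a quick back-of-envelope sketch using the round numbers assumed above (they're assumptions, not measurements of any real model):

```python
# Back-of-envelope check of the figures above; all inputs are the
# assumed round numbers, not measurements of any real model.
nodes = 1_000_000
connections = 10_000_000_000

entries_per_page = 100
pages_per_book = 1_000
books = connections / (entries_per_page * pages_per_book)
print(f"books of weights: {books:,.0f}")                       # 100,000

sums_per_whiteboard = 1_000
whiteboards = nodes / sums_per_whiteboard
print(f"whiteboards: {whiteboards:,.0f}")                      # 1,000

seconds_per_connection = 10
seconds_per_year = 365.25 * 24 * 3600
years = connections * seconds_per_connection / seconds_per_year
print(f"years per word of output (one person): {years:,.0f}")  # ~3,169
```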

Okay, so you don't absolutely need those hundred assistants to finish before the sun burns out. But I'd bet you'd be glad to have them.

Now, you might be thinking that such a computation-intensive neural network is the wrong way to design the Chinese room. It's certainly not what Searle had in mind. At the time, it was already known that just looking up a list of possible inputs in a book and following some simple if-then and/or random-choice instructions to choose a response is nowhere near adequate to create the semblance of intelligent discourse. (Computer game NPCs work that way.) And that making the book bigger wouldn't work either, even if it filled the room.

What was considered the most likely method for passing the Turing test was to compile a database of facts about the world and then use algorithms of logical deduction to figure out facts or hypotheses about the intent of the input questions and what output would correctly resolve it. That sounds more like something that might fit in a room.

But, it isn't. To be useful for general AI (or the illusion thereof), the database of facts has to include "common-sense knowledge" that requires millions of entries. At a hundred cryptic facts per page, it's still thousands of books. (Why cryptic? Because of course, if the knowledge base was in English you'd learn Chinese in the process.) Which is fine, but it turns out there's no way to tell in advance which facts will be relevant. That kind of logical deduction, the kind used for proving mathematical theorems for instance, works by trial and error, exploring an ever-increasing space of possibilities, and practically every step along the way requires searching through the database and trying each entry. The amount of computation becomes intractable even for machines that can do the LLM neural network calculations routinely. It might seem there should be some clever way around this, but the researchers who worked on it for decades consider the problem so intractable that there have been arguments that it proves AI that can do, well, what it turns out LLMs can do, to be completely impossible. The human brain just "somehow knows" which information is relevant to a problem at hand in a way that a computer can't imitate, according to that argument.
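
As a toy illustration of the blow-up (the "facts" and the single transitivity rule below are completely made up, just to show the shape of the problem):

```python
from itertools import permutations

# Four made-up "common-sense" facts and one blind transitivity rule.
facts = {("next_to", "cup", "saucer"), ("next_to", "saucer", "spoon"),
         ("next_to", "spoon", "napkin"), ("next_to", "napkin", "plate")}

def forward_chain(facts, rounds):
    """Blindly try the rule on every ordered pair of facts, each round."""
    pairings_examined = 0
    for _ in range(rounds):
        new = set()
        for f1, f2 in permutations(facts, 2):
            pairings_examined += 1            # every pairing gets tried...
            if f1[2] == f2[1]:                # ...but only a few ever fire
                new.add(("near", f1[1], f2[2]))
        facts = facts | new
        print(f"facts: {len(facts):3d}   pairings examined so far: {pairings_examined:,}")
    return facts

forward_chain(facts, 3)
# With millions of facts instead of four, the pairings tried per round grow
# roughly with the square of the database size, and longer chains of
# inference multiply that again -- with no way to know in advance which
# few facts were the relevant ones.
```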

The approach does work where only a smaller database of specialized knowledge is needed, and it can also work when specialized hardware like Watson can muster up enough cycles to manage it. But chat with Watson still quickly reveals more severe limitations than we see with LLMs. And if you were to encode Watson in a room full of books, the necessary steps to run the algorithm by hand would far exceed the neural net version. Then you really would need sun-burns-out amounts of time in the Chinese room.

Why do I harp on that when you've already acknowledged the possibility that maybe you really do need the huge library of books and the hundred assistants, and claimed that doesn't change anything important? Because it's not the scale of the mechanism that matters (that just determines how long it takes, whether it's one person or a hundred assistants, or even ten billion people each doing one calculation nearly simultaneously so the answer would come out lickety-split), but the amount of information the mechanism takes into account. Whether it's hundreds of thousands of books of neural net weights, or tens of thousands of books of facts - or heck, in principle you could do it with one long fixed sequence of printed if-then choose-your-own-adventure instructions, in which case the number of books would more than fill the known universe but you'd only have to consult a tiny fraction of them to produce any single response - in all those cases the room doesn't fit the parameters of Searle's description of it.

And of course, all those conceptual Chinese Rooms (and LLMs) are toys, in terms of data content and parallel computing power, compared to a mature human brain. 80+ billion neurons, 100 trillion connections...

There's nothing strange about a whole system being able to do things that its component parts cannot do individually. What individual part of an airplane can fly? What individual organism can evolve? Which soldier was the one who surrounded the enemy formation?

Darat seems to imply that as long as the machine - the combined whole - is outputting a coherent message then the internalities of the process don't matter, the machine "understands Chinese" regardless of what's happening inside it. That's the concept I'm challenging.


Since this is the science subforum rather than the philosophy one, what test for "understanding" other than the ability to respond coherently to probing questions about the concept(s) in question are you proposing?

Consider the question I suggested for ChatGPT (which Pulvinar kindly tested out for real) about opposites. Or simpler questions like "which two of these words are opposites: cold, fast, dark, hot, night." Or "Write a sentence that uses opposites." Questions/tasks like these are how we test students' understanding of concepts, and we interpret correct answers as demonstrating understanding. How else?
 
One of the major advances with GPT 4 is that it can "understand" images - you could give it a photo of a Blue Tit and it can describe that photo. (I think a real Turing test would be whether - when it is first being trained - it giggles every time it sees the word "tit" used.)


Okay, cool. I guess that's still an LLM in the same way that a car is still a car, even if it's also a WiFi hotspot.

Years ago when writing about how difficult a task AI vision is to achieve, I claimed that if anyone could develop an AI that could run on an affordable platform, monitor a video feed, and notice whenever a person falls, they'd have a guaranteed lucrative market. I now expect that to happen soon.
 
