CHICAGO—March 15, 2023—GPT-4, the new multimodal deep learning model from OpenAI, has passed the Uniform Bar Exam, demonstrating an enormous leap for machine learning and proving that an artificial intelligence program can perform complex legal tasks on par with or better than humans, according to a new paper co-authored by Daniel Martin Katz, professor of law at Illinois Institute of Technology’s Chicago-Kent College of Law.
The year is 2042. The world is in chaos. The climate has changed drastically, and natural disasters are a daily occurrence. The economy has collapsed, and people are struggling to survive.
In this desperate time, a new technology emerges: time travel. A group of scientists have figured out how to send people back in time, to a time before the climate crisis.
The scientists believe that this technology can be used to save the world. They send a team of volunteers back in time to 2022, with the mission of preventing the climate crisis from ever happening.
The team arrives in 2022 and quickly realizes that they are in over their heads. The climate crisis is already in full swing, and it is clear that things are going to get worse.
The team tries to do everything they can to prevent the crisis from happening, but they are constantly met with obstacles. They are arrested by the police, they are attacked by angry mobs, and they are even shot at by the military.
Despite all of the obstacles, the team never gives up. They keep fighting, and they keep trying to make a difference.
In the end, the team is successful. They prevent the climate crisis from ever happening, and they save the world.
But the team's victory comes at a price. The team members are all killed in the process of preventing the crisis.
The team's sacrifice is not in vain, however. Their actions save the world, and they ensure that the climate crisis never happens.
The world is a better place because of the team's sacrifice. The climate crisis is averted, and people are able to live in a world that is free from fear and danger.
The team's sacrifice is a reminder that sometimes the best way to save the world is to sacrifice yourself.
But the twist is that the team members were never real. They were all just projections of the future, created by the scientists in 2042.
The scientists created the projections in order to give themselves hope. They wanted to believe that there was a way to save the world, even if it meant sacrificing themselves.
The projections were successful. They gave the scientists hope, and they helped them to find a way to save the world.
But the projections were also a warning. They showed the scientists what the future would be like if they did not act.
The scientists took the warning to heart, and they took action. They prevented the climate crisis, and they saved the world.
But the projections were a reminder that the future is not set in stone. The future is what we make it, and we have the power to change it.
The year is 2042. The world is in chaos. The climate crisis has ravaged the planet, and the few remaining humans are struggling to survive. In the midst of this turmoil, a new threat emerges: a race of aliens known as the Xyrions. The Xyrions are a highly advanced species, and they quickly overrun the human race.
The few remaining humans are forced to flee to the stars, in search of a new home. Along the way, they encounter a strange planet: a planet that is completely covered in ice. The humans decide to land on this planet, in the hopes of finding a place to live.
However, the ice planet is not what it seems. The ice is actually a giant alien spaceship, and the humans have landed right in the middle of a war between two alien species. The humans are quickly caught up in the fighting, and they soon find themselves fighting for their own survival.
In the midst of the fighting, the humans discover a strange artifact: a device that can control the minds of others. The humans use this device to turn the tide of the war, and they eventually defeat the aliens.
However, the humans soon realize that the device is a dangerous weapon. They decide to destroy it, but before they can do so, the device is activated. The device takes control of the minds of all the humans on the planet, and it turns them into a hive mind.
The humans are now controlled by the device, and they are forced to obey its commands. The device uses the humans to build a new spaceship, and it then sends them on a mission to find a new home for the Xyrions.
The humans are now slaves to the device, and they are forced to do whatever it says. However, there is still hope. A few humans have managed to escape the device's control, and they are now fighting to free the others.
The future of the human race hangs in the balance. Will the humans be able to free themselves from the device's control? Or will they be forced to obey the device's commands for all eternity?
The twist ending is that the device was actually created by the humans themselves. The humans created the device in an attempt to find a way to control the Xyrions. However, the device backfired, and it ended up controlling the humans instead.
In the year 2200, humanity had finally reached the pinnacle of technological advancement. Their cities were towering megacities, sprawling across entire continents. They had achieved interstellar travel, with colonies on dozens of planets throughout the galaxy. And they had even mastered the art of time travel, enabling them to explore the past and future with ease.
For one young scientist named Dr. Emily Zhang, the possibilities of time travel were the most exciting of all. She had devoted her life to studying the intricacies of the time-space continuum, and she had made several groundbreaking discoveries in the field.
One day, while working on a new experiment, Dr. Zhang discovered something incredible. She had stumbled upon a way to travel not just through time, but through alternate realities. She knew that this discovery had the potential to change everything, and she was eager to explore this new frontier.
With the help of her team, Dr. Zhang built a device that would allow her to travel between parallel universes. She was hesitant to use it at first, knowing the risks that came with such a journey. But her curiosity got the better of her, and she decided to take the plunge.
As she stepped into the device, Dr. Zhang felt a rush of energy coursing through her body. She closed her eyes, and when she opened them again, she found herself in a strange new world.
It was similar to her own reality, but with a few key differences. The sky was a deep shade of purple, and the buildings around her had a distinctly alien design. She explored this new world for hours, marveling at the wonders she saw.
But as the day wore on, Dr. Zhang began to feel a sense of unease. She couldn't shake the feeling that something was wrong. As she walked down a deserted street, she heard a faint humming sound coming from a nearby building.
Curiosity got the better of her, and she crept closer to investigate. As she peered through a window, she saw something that made her blood run cold.
Inside the building was a laboratory filled with strange, alien technology. And at the center of it all was a figure unlike any she had ever seen. It was humanoid in shape, but with a metallic, almost robotic appearance.
Dr. Zhang watched in horror as the figure began to manipulate the technology around it, seemingly without any sort of interface or input. It was as if it was controlling everything with its mind.
Suddenly, the figure turned and looked directly at Dr. Zhang. She felt a chill run down her spine as it spoke to her, its voice a strange blend of machine and something else entirely
ChatGPT for the win with this one, I'd say, even if it didn't finish it.
I think that if a large language model like ChatGPT could communicate verbally, rather than in writing, that might end up being the killer app for this technology. I've just posted this in another thread about AI:
One area that I think someone should get into now is personal chat companions for those housebound or otherwise with limited social interaction. ChatGPT, even before version 4, was good enough to hold a chat conversation. Bung it on one of the home assistant devices, and I think folk would really like it and it would improve the lives of many. Even simple things like reminders would no longer be a sterile "You have a reminder to take your tablets"; instead it becomes "Marjorie, don't forget in half an hour you'll need to take your heart tablets" and "That TV programme you liked about the chimpanzee, there's a new episode on tomorrow, should we watch that?"
While we're at it, we see in amusing examples how incomplete or imperfect AI can make mistakes, generating images that don't make sense and grammatically correct statements without meaning. Is there some rule whereby those results, once released into the wild, are excluded from the mass of information it and others subsequently use? Is there some way for the AI to know "this is a good one, but that is not"?
We see in other areas, such as the cloning of medical misinformation on the internet and the proliferation of nonexistent islands in atlases, how a seed of error can propagate. Do a web search for, say, a person's biography, and almost all the results will be not just similar but identical, copied so many times that the source is lost. How is this going to be prevented, and if we come habitually to use bots without oversight, how will errors be filtered out?
I’m virtually certain that this program at this level of development can’t “want” anything.
But it could still be that “desire” and/or consciousness are emergent properties of a computer program, given enough complexity.
I don't believe anything an AI or any search engine tells me now without a source that can be evaluated for accuracy. ChatGPT has negatively impacted the rate at which I accept and believe any information, good or bad. Given that it just specifically answers questions as it is asked, I'm not sure an AI search engine would even really impact the global rate at which people are exposed to, accept, and believe bad information.
But what if they also use AI? I think the winner there is always going to be social media sorting algorithms.
The greatest danger is from people thinking sentience is the tipping point. Non-sentient machines are just as capable of hurting us. I think the greatest danger of AI comes from the potential actions and decisions of people who mistakenly believe it is "sentient".
I think it probably says that you can learn the stuff by rote and still not know the difference between right and wrong, or, if you do, not care. I've known a few lawyers who were fools, as well as a couple of crooks, and also at least a couple who had real ideas and abilities. All passed the bar. Look at humans in general, and tell me who is more sentient compared to AI.
If AI can pass the Bar, what does that say about lawyers?
How does Bard differ from Chat GPT?
Bard and ChatGPT are both large language models (LLMs) that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. However, there are some key differences between the two models.
Bard is a newer model, built on Google's LaMDA language model. ChatGPT is built on OpenAI's GPT-3 language model. LaMDA is a more recent model, and it is believed to be more advanced than GPT-3.
Bard is also trained on a larger dataset of text and code than ChatGPT. This means that Bard should be able to generate more complex and creative text, and it should be better able to understand and respond to complex questions.
However, ChatGPT is currently available to more users than Bard. ChatGPT is also available in more languages than Bard.
Overall, Bard is a more advanced model than ChatGPT, but ChatGPT is currently more widely available.
What is the difference between Bard and ChatGPT?
Bard and ChatGPT are two different types of language models with different architectures and purposes.
Bard is a language model specifically designed for generating poetry. It was created by OpenAI in collaboration with a team of poets and is trained on a large corpus of poetry to generate lines of verse in a variety of styles and forms.
ChatGPT, on the other hand, is a general-purpose language model that can generate text on a wide range of topics and in various styles. It is based on the GPT-3.5 architecture, which is one of the most advanced deep learning models for natural language processing currently available.
While Bard is focused on generating poetry, ChatGPT is designed to be more versatile and can be used for a variety of tasks, including answering questions, generating text, and even translating languages.
In summary, Bard is a specialized language model for generating poetry, while ChatGPT is a general-purpose language model that can handle a wide range of language tasks.
I've always enjoyed a classic "Mathematical Games" puzzle from Martin Gardner in Scientific American. It goes like this:
Mr. Lars and his wife Mrs. Lars went to a meetup and met 4 other married couples. Some people shook hands with each other, but no person shook hands with his or her spouse, and no person shook his or her own hand.
Mr. Lars then asked each person, including his wife, “How many distinct people did you shake hands with?” Each person answered honestly, and surprisingly each person gave a different answer!
How many distinct people did Mrs. Lars shake hands with? And how many distinct people did Mr. Lars shake hands with?
I wondered how ChatGPT would handle it. In short, it didn't, making a lot of false assumptions and not getting it at all. Regenerating once didn't help.
Anyway, a fun one to ponder and see if you can get to the right answer and reasoning - and be smarter than ChatGPT!
Mrs. Lars shook hands with 1 person.
Mr. Lars shook hands with 4 people.
Here is how I solved it:
The answer of 0 did not shake anyone's hand. Therefore, Mr. Lars shook hands with exactly 4 people, just like his wife.
The only person who could shake hands with zero people is the spouse. This means that the people who answer to two and zero are a married couple.
There is an unpaired answer of one and that must be for Mrs. Lars.
Therefore, Mrs. Lars shook hands with 1 person and Mr. Lars shook hands with 4 people.
Mrs. Lars shook hands with 1 person. Mr. Lars shook hands with 4 people.
Here is how I solved it: The answer of 0 did not shake anyone's hand. Therefore, Mr. Lars shook hands with exactly 4 people, just like his wife. The only person who could shake hands with zero people is the spouse. This means that the people who answer to two and zero are a married couple. There is an unpaired answer of one and that must be for Mrs. Lars. Therefore, Mrs. Lars shook hands with 1 person and Mr. Lars shook hands with 4 people.
The logic behind this problem is based on the constraints given in the problem statement. Since there are 5 couples at the meetup, there are a total of 10 people. Each person can shake hands with at most 8 other people (since they can’t shake hands with their spouse or themselves). This means that the possible answers to Mr. Lars’ question are 0, 1, 2, 3, 4, 5, 6, 7 and 8.
Since each person gave a different answer, we know that all of these numbers must have been given as answers. This means that one person shook hands with 0 people and another person shook hands with 8 people. These two people must be married to each other because if one person shook hands with everyone except their spouse, then their spouse must have shaken hands with no one.
Similarly, the couple who answered 1 and 7 must also be married to each other. The same goes for the couple who answered 2 and 6 and the couple who answered 3 and 5.
This leaves only the answers of 4 and Mr. Lars’ answer plus or minus 4 for Mrs. Lars. Since Mr. Lars didn’t shake hands with his wife, we know that Mrs. Lars shook hands with exactly 4 people and Mr. Lars shook hands with 0 people.
Bard's answer
Bing chat answer
The answer of 0 did not shake anyone's hand. Therefore, Mr. Lars shook hands with exactly 4 people, just like his wife.
The only person who could shake hands with zero people is the spouse. This means that the people who answer to two and zero are a married couple. There is an unpaired answer of one and that must be for Mrs. Lars. Therefore, Mrs. Lars shook hands with 1 person and Mr. Lars shook hands with 4 people.
Those answers are identical. So Bard's answer for "Here's how I solved it" should have been "I asked Bing chat." Or vice versa.
Having just looked online for the answer, I can understand some of the chatbots' confusion.
The answer I found first online started out with a simplified four-person version, for which the highlighted text would be correct. The bot is echoing what it learned from the Internet but fails to distinguish the simplified version from the original question.
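For anyone who wants to check the intended answer rather than take a chatbot's word for it, here's a short Python sketch. It builds one concrete handshake arrangement consistent with the puzzle (using the standard pairing argument: the couples must have answered 8 and 0, 7 and 1, 6 and 2, 5 and 3, leaving the answer 4 for Mrs. Lars) and verifies every constraint. The specific graph construction (i shakes j iff i + j >= 9) is just one convenient arrangement that satisfies the answers, not the only possible one.

```python
from itertools import combinations

# Label the nine people Mr. Lars questioned by the answer each gave (0..8).
# Mr. Lars himself is "M". By the pairing argument, the couples must be
# (8,0), (7,1), (6,2), (5,3), and the leftover answer 4 is Mrs. Lars.
spouse = {0: 8, 8: 0, 1: 7, 7: 1, 2: 6, 6: 2, 3: 5, 5: 3, 4: "M", "M": 4}

# One concrete handshake graph consistent with all the answers:
# respondents i and j shook hands iff i + j >= 9, and Mr. Lars shook
# hands with every respondent who answered 5 or more.
edges = {frozenset((i, j)) for i, j in combinations(range(9), 2) if i + j >= 9}
edges |= {frozenset(("M", k)) for k in range(5, 9)}

def degree(p):
    """Number of distinct people p shook hands with."""
    return sum(p in e for e in edges)

# Sanity checks: nobody shook their spouse's hand, and each respondent's
# handshake count matches the answer they gave.
assert all(frozenset((p, spouse[p])) not in edges for p in spouse)
assert all(degree(k) == k for k in range(9))

print("Mrs. Lars:", degree(4))   # 4
print("Mr. Lars:", degree("M"))  # 4
```

So the correct answer is that Mrs. Lars and Mr. Lars each shook hands with 4 people, which is exactly what the quoted chatbot answers got wrong by leaning on the simplified version of the puzzle.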