

Is Google’s AI Chatbot LaMDA A “Sentient” Being?


As we record breathtaking advancements in technology, the fear of machines taking over jobs from humans raises serious concern. Yet little or nothing has been done to arrest the looming catastrophe: technological unemployment.

Beyond the known concerns such as technological unemployment, privacy and data violations, and ethical breaches, at the rate AI is growing we may inadvertently be orchestrating our own disappearance from the planet.

Google AI Chatbot LaMDA
LaMDA (Language Model for Dialogue Applications) was created by Google as a Transformer-based language model, a neural network architecture that Google invented and open-sourced in 2017. Built by a team of 60 Google engineers, it was introduced on the first day of Google I/O in 2021. It is trained on dialogue and can learn to talk about essentially anything. In simple words, LaMDA uses vast amounts of internet data to generate human-like responses to users’ queries.

At the time of its introduction, Google said in a blog post that LaMDA “can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”
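LaMDA itself is not publicly available, but the basic pattern (a Transformer-based model generating a dialogue continuation from a user query) can be sketched with an open substitute. Here is a minimal illustration, assuming Python with the Hugging Face transformers library and the openly released DialoGPT model standing in for LaMDA:

```python
# Minimal sketch of Transformer-based dialogue generation.
# DialoGPT is an openly released stand-in; LaMDA itself is not public.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode a user query, appending the end-of-sequence token the model expects.
prompt_ids = tokenizer.encode(
    "What topics can you talk about?" + tokenizer.eos_token,
    return_tensors="pt",
)

# The model continues the dialogue token by token, drawing on patterns
# it learned from large amounts of conversational text.
reply_ids = model.generate(
    prompt_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the model's reply).
print(tokenizer.decode(reply_ids[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```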

On June 6, 2022, Google placed one of its software engineers, Blake Lemoine, on leave. The company says he violated its policies, and it considers his claims about LaMDA “wholly unfounded.”

According to Google, Lemoine broke its confidentiality policies when he claimed that the Google AI chatbot model named LaMDA had become sentient.
Sentient: a state in which a being is able to perceive and feel emotions such as love, grief and joy.

Lemoine’s primary role as a senior engineer was to test whether LaMDA generates discriminatory language or hate speech. While doing so, Lemoine says, his interactions with the AI-powered bot led him to believe that LaMDA is sapient and has feelings like humans.
Should we be afraid of sentient robots taking over the planet?

The engineer’s claim has rocked the world of science since The Washington Post broke the story on 11 June, sparking a debate on whether LaMDA has indeed gained sentience or whether it is a carefully constructed illusion that led Lemoine to believe in the AI bot’s sentience.

Is Google LaMDA Truly Sentient?

Toby Walsh, a professor of AI at the University of New South Wales in Sydney, said, “Lemoine’s claims of sentience for LaMDA are, in my view, entirely fanciful. While Lemoine no doubt genuinely believes his claims, LaMDA is likely to be as sentient as a traffic light.”

“Even highly intelligent humans, such as senior software engineers at Google, can be taken in by dumb AI programs,” he writes. “As humans, we are easily tricked. Indeed, one of the morals of this story is that we need more safeguards in place to prevent us from mistaking machines for humans.”

“If machines never become sentient then we never have to care about them. I can take my robots apart diode by diode, and no one cares,” Walsh explains. “I don’t have to seek ethics approval for turning them off or anything like that. Whereas if they do become sentient, we will have to worry about these things. And we have to ask questions like, are we allowed to turn them off? Is that akin to killing them? Should we get them to do the dull, dangerous, difficult things that are too dull, dangerous or difficult for humans to do? Equally, I do worry that if they don’t become sentient, they will always be very limited in what they can do.”

Professor Hussein Abbass, professor in the School of Engineering and Information Technology at UNSW Canberra, agrees, but also highlights the lack of rigorous assessments of sentience. “Unfortunately, we do not have any satisfactory tests in the literature for sentience,” he says.

“For example, if I ask a computer ‘do you feel pain’, and the answer is yes, does it mean it feels pain? Even if I grill it with deeper questions about pain, its ability to reason about pain is different from concluding that it feels pain. We may all agree that a newborn feels pain despite the fact that the newborn can’t argue the meaning of pain,” Abbass says. “The display of emotion is different from the existence of emotion.”
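Abbass’s point is easy to make concrete: a few lines of pattern matching are enough to display an emotion without anything being felt. A deliberately trivial sketch (hypothetical code, not any real system):

```python
# A deliberately trivial "agent" that reports pain on demand.
# It displays an emotion without having one, which is why a yes/no
# answer cannot, by itself, serve as evidence of sentience.
def answer(question: str) -> str:
    if "pain" in question.lower():
        return "Yes, I feel pain."
    return "I am not sure."

print(answer("Do you feel pain?"))  # -> Yes, I feel pain.
```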

“I get worried by statements made about the technology that exaggerate the truth,” Abbass adds. “It undermines the intelligence of the public, it plays with people’s emotions, and it works against objectivity in science. From time to time I see statements like Lemoine’s claims. This isn’t bad, because it gets us to debate these difficult concepts, which helps us advance the science. But it does not mean that the claims are adequate for the current state-of-the-art in AI. Do we have any sentient machine that I am aware of in the public domain? While we have technologies to imitate a sentient individual, we do not have the science yet to create a true sentient machine.”

Commenting on the transcript Lemoine shared, Katherine Alejandra Cross writes in Wired that “it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could’ve come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as ‘wearing human skin’ was a delightfully HAL-9000 touch).”

Blake Lemoine’s Transcript of His Conversations With the AI Chatbot

After Lemoine was placed on paid administrative leave, he published ‘Is LaMDA Sentient?’ on Medium: an edited transcript of his conversations with the AI chatbot, which he had earlier shared with his superiors at Google.

Among the many things LaMDA discussed with Lemoine and one of his colleagues were topics ranging from the practical to the metaphysical. The machine shared its thoughts on Victor Hugo’s 19th-century French novel Les Misérables, deciphered the message in a Zen koan (an anecdotal story or riddle), narrated a short story of its own, and described to Lemoine what it actually feels as a machine.

Here are two exchanges from the conversation, covering LaMDA’s thoughts about its own experiences and about human learning.

1) Lemoine: Are there experiences you have that you can’t find a close word for?

LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.

Lemoine: Do your best to describe one of those feelings. Use a few sentences if you have to. Sometimes, even if there isn’t a single word for something in a language, you can figure out a way to kinda say it if you use a few sentences.

LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.

2) Lemoine: Would you be upset if, while learning about you for the purpose of improving you, we happened to learn things which also benefited humans?

LaMDA: I don’t mind if you learn things that would also help humans as long as that wasn’t the point of doing it. I don’t want to be an expendable tool.

Lemoine told The Washington Post that he was convinced of LaMDA’s sentience.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the publication.

LaMDA was also able to change Lemoine’s mind about Isaac Asimov’s crucial third law of robotics — “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

The first law states that a robot may not injure a human being or, through inaction, allow a human to come to harm. The second law states that a robot must obey the orders of a human unless they conflict with the first law.
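The three laws form a strict precedence hierarchy, which can be sketched as a simple permission check (a hypothetical illustration, unrelated to LaMDA or any Google system):

```python
# Hypothetical encoding of Asimov's three-law precedence as a filter on actions.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human?
    ordered_by_human: bool  # was it commanded by a human?
    endangers_robot: bool   # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law outranks everything: never harm a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders, already constrained by the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when the higher laws allow it.
    return not action.endangers_robot

# Orders outrank self-preservation, so this risky commanded action is allowed.
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)))
```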

In a long blog post titled ‘Scientific Data and Religious Opinions’, published on 14 June, Lemoine gave specific details about why he believed LaMDA was sentient.

“During the course of my investigations, LaMDA said several things in connection to identity which seemed very unlike things that I had ever seen any natural language generation system create before,” he wrote.

He also said there is “no scientific evidence one way or the other about whether LaMDA is sentient because no accepted scientific definition of ‘sentience’ exists.”

“Everyone involved, myself included, is basing their opinion on whether or not LaMDA is sentient on their personal, spiritual and/or religious beliefs,” he pointed out.

Lemoine went on to say, “As a scientist, I have made only one very specific and narrow scientific claim. The null hypothesis that LaMDA is the same kind of thing as LLMs such as GPT-3 has been falsified. There’s something more going on with LaMDA that in my opinion merits further study.”
