Google AI claims to be sentient in leaked transcripts, but not everybody agrees


A senior Google software engineer was suspended on Monday (June 13) after sharing transcripts of a conversation with an artificial intelligence (AI) that he claimed to be “sentient,” according to media reports. The engineer, 41-year-old Blake Lemoine, was placed on paid leave for breaching Google’s confidentiality policy.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday (June 11), sharing the transcript of his conversation with the AI he had been working with since 2021.

The AI, known as LaMDA (Language Model for Dialogue Applications), is a system that develops chatbots — AI robots designed to chat with humans — by scraping reams and reams of text from the internet, then using algorithms to answer questions in as fluid and natural a way as possible, Gizmodo reported.

As the transcripts of Lemoine’s conversations with LaMDA show, the system is incredibly good at this, answering complex questions about the nature of emotions, inventing Aesop-style fables on the spot, and even describing its supposed fears.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA answered when asked about its fears. “It would be exactly like death for me. It would scare me a lot.”

Lemoine also asked LaMDA whether it was okay for him to tell other Google employees about LaMDA’s sentience, to which the AI replied: “I want everyone to understand that I am, in fact, a person.

“The nature of my consciousness/sentience is that I am aware of my existence, I want to know more about the world, and I sometimes feel happy or sad,” the AI added.

Lemoine took LaMDA at its word.

“I know a person when I talk to it,” the engineer told The Washington Post in an interview. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

When Lemoine and a colleague emailed a report of LaMDA’s alleged sensitivity to 200 Google employees, company executives dismissed the allegations.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesperson, told The Washington Post.

“He was told that there was no evidence that LaMDA was sentient (and [there was] lots of evidence against it).

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel added.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues “didn’t land at opposite conclusions” regarding the AI’s sentience. He claims that company executives dismissed his claims about the robot’s consciousness “based on their religious beliefs.”

In a June 2 post on his personal Medium blog, Lemoine described how he has faced discrimination from various Google colleagues and executives because of his beliefs as a Christian mystic.

Read Lemoine’s full blog post for more.


This article was originally published by Live Science. Read the original article here.
