It has been brought into the spotlight by the recent suspension of a Google engineer who claimed that a chatbot he was working on had become sentient and was beginning to think and reason like a person.
Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator," and the company's LaMDA (Language Model for Dialogue Applications) chatbot development system.
Lemoine, an engineer in Google's Responsible AI organization, described the system he has been working on since last autumn as sentient, with a perception of, and ability to express, thoughts and feelings comparable to those of a human child.
The 41-year-old engineer told The Washington Post that he began conversing with the LaMDA interface last autumn as part of his role on Google's Responsible AI team. In April, he shared his findings with the company's leadership in a Google Doc titled "Is LaMDA sentient?"
Because the AI portrayed itself as a sentient person, he was prompted to ask it about religion, morality, and the laws of robotics. Throughout their conversations, he said, LaMDA made clear its desire to be acknowledged as an employee of Google rather than as property.
Lemoine was put on leave
According to the Post, Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, then allegedly made several "aggressive" moves.
These reportedly included seeking to hire a lawyer to represent LaMDA and speaking to members of the House judiciary committee about Google's allegedly unethical practices.
Google suspended Lemoine for violating its confidentiality policies, saying in a statement that he was employed as a software engineer, not an ethicist.
Brad Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA was capable of sentience.
The episode, however, and Lemoine's suspension for a confidentiality breach, raises questions about the transparency of AI as a proprietary concept.