Blake Lemoine, an engineer who worked in Google's Responsible AI group, claims that recent chats he had with Google's Language Model for Dialogue Applications (LaMDA) persuaded him that it should be treated as a sentient being, according to a Washington Post report on Saturday. Google, however, disputed his claims and placed him on paid leave. That has not stopped Lemoine from arguing that the program has gained sentience.
"Over the course of the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person," Lemoine wrote in a Medium post after the story came out. When asked for evidence of his claims of sentience, Lemoine struggled to provide any. He instead drew on his experience as a priest to conclude that LaMDA was sentient, and released transcripts of his interviews with the program.
He later discussed his work and Google's allegedly unethical activities around AI with a representative of the House Judiciary Committee. Google then placed him on paid leave, saying he had breached its confidentiality agreement.
LaMDA is an open-ended conversational AI application developed by Google that typically takes on the role of a person or an object during conversations. It uses Google's Transformer, an open-source neural network architecture for understanding language. It draws data from several different data sets, including online resources, to find sentence patterns and predict what a reasonable response might be.
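The prediction idea described above can be illustrated with a toy example. The sketch below uses simple bigram counts to learn which word tends to follow which, then picks the most frequent continuation; real systems like LaMDA use a Transformer trained on vastly more data, so this is only a conceptual illustration of "finding sentence patterns and predicting a reasonable response," not Google's implementation.

```python
from collections import Counter, defaultdict

# Toy sketch of next-word prediction: count word-to-word patterns
# in a tiny corpus, then predict the most common continuation.
# (Illustrative only; actual language models learn these patterns
# with a Transformer neural network, not a lookup table.)

corpus = "the model predicts the next word the model learns patterns"

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, word):
    """Return the most frequently observed continuation, if any."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

table = train_bigrams(corpus)
# In the corpus, "the" is followed by "model" twice and "next" once,
# so the most likely continuation is "model".
print(predict_next(table, "the"))  # prints "model"
```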
Source: ComicBook