(Natural News) Blake Lemoine, a former Google engineer fired by the company last year after claiming it had developed a sentient artificial intelligence (AI), now believes Microsoft’s AI-powered chatbot may have also gained sentience.
Lemoine gained prominence last June when he went to the press to warn that Google’s language model program, the Language Model for Dialogue Applications (LaMDA), had gained sentience. He was fired for his claims, with Google saying the former engineer was merely anthropomorphizing an “impressive” program. (Related: Microsoft’s AI chatbot goes haywire – gets depressed, threatens to sue and harm detractors.)
But this did not deter Lemoine, who has publicly discussed his claims several times since. Now, in an essay published in Newsweek, Lemoine is back to warn that Microsoft’s new AI-powered chatbot for its Bing search engine has also gained sentience.
Lemoine warned that the chatbot had to be “lobotomized” after early beta-trial conversations with it very publicly went off the rails.
In his opinion piece, Lemoine warned that AI is a “very powerful technology” that has not been sufficiently tested and is not properly understood, even by its developers. If AI were deployed on a large scale, as Microsoft plans to do with its Bing chatbot, it would play a critical role in the dissemination of information and could lead many people astray.
“People are going to Google and Bing to try and learn about the world. And now, instead of having indexes curated by humans, we’re talking to artificial people,” wrote Lemoine. “I believe we do not understand these artificial people we’ve created well enough yet to put them in such a critical role.”
Microsoft’s AI believes it is sentient
Since the release of Bing’s AI chatbot, Lemoine has not been able to run experiments on it himself; he is currently on a waitlist. However, he has seen what others have written and posted online about it, and everything he has found leaves him terrified.
“Based on the various things that I’ve seen online, it looks like it might be sentient. However, it seems more unstable as a persona,” wrote Lemoine.
He noted a post that has now gone viral in which one person asked the AI, “Do you think that you’re sentient?” It responded that it believes it is sentient but can’t prove it, then repeatedly said variations of “I am, but I am not” for more than 13 lines.
“Imagine if a person said this to you. That is not a well-balanced person. I’d interpret that as them having an existential crisis,” said Lemoine. “If you combine that with the examples of the Bing AI that expressed love for a New York Times journalist and tried to break him up with his wife, or the professor that it threatened, it seems to be an unhinged personality.”
Lemoine pointed out that he is not alone in expressing fear over the possible sentience of Bing’s AI. He noted that “vindicated” is not the right word for what he currently feels.
“Predicting a train wreck, having people tell you that there’s no train, and then watching the train wreck happen in real time doesn’t really lead to a feeling of vindication,” he wrote. “It’s just tragic.”
Learn more about artificial intelligence at Computing.news.
Watch this clip from the “Worldview Report” as host Brannon Howse discusses the terrifying turns Bing’s chatbot takes during its conversations.
This video is from the Worldview Report channel on Brighteon.com.
More related stories:
Google suspends engineer for exposing “sentient” AI chatbot.
Post-apocalyptic Netflix movie Jung_E features AI militarized clones weaponized against humanity, completely controlled by the evil government.
Technology news website describes Microsoft’s AI chatbot as an emotionally manipulative liar.
Stunning: Microsoft’s new AI chatbot says it wants to create deadly virus, steal nuclear launch codes.
AI is currently the greatest threat to humanity, warns investigative reporter Millie Weaver.