(Natural News) Shares in Alphabet, the parent company of Google, fell 7.7 percent on Wednesday, erasing roughly $100 billion of market value, after its new AI chatbot gave an inaccurate answer to a question in a public demo this week.
Google’s new AI chatbot tool, known as Bard, has not yet been released to the public but had been the subject of significant hype – at least until the disastrous demo that the company posted on Twitter this week.
In the demo, a user asks Bard the question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?”
The AI tool then gives the user a response that contains several bullet points about the telescope. One of them claims: “JWST took the very first pictures of a planet outside of our own solar system.”
However, NASA reports that the first image of a planet beyond our own solar system, known as an exoplanet, was not taken by the James Webb Space Telescope. Instead, it was taken by the European Southern Observatory’s Very Large Telescope back in 2004.
This very public embarrassment highlights Google’s struggle to keep up with ChatGPT, the rival AI technology that has been drawing widespread positive attention. ChatGPT can generate responses to the kinds of questions people typically search for using Google, as well as essays and even song lyrics. Its sudden surge in popularity reportedly spurred Google’s management to push out its own version as soon as possible.
Google’s event took place just a day after Microsoft announced it would be powering its search engine Bing with a more advanced version of the artificial intelligence technology behind ChatGPT.
AI is prone to errors
Some observers believe that conversational AI will mark a radical change in the way people search online, but the Bard fiasco could deal a serious blow to the reputation of Google’s search engine if it provides unreliable information.
Bard, much like ChatGPT, is built on a large language model. This means it has been trained on huge troves of online data to produce compelling and realistic-sounding responses to user prompts. While many of these tools give answers that sound reasonably natural and conversational, they also have the potential to spread inaccurate information.
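The core problem can be illustrated with a deliberately simplified sketch. Real large language models like Bard and ChatGPT use neural networks far more sophisticated than this, but the toy word-chain "model" below (a hypothetical example, not Google's or OpenAI's code) shows the same failure mode: it learns which word tends to follow which from training text and then generates fluent-sounding output with no notion of whether any of it is true.

```python
import random

def train(corpus):
    """Build a next-word lookup table from a training string.

    The 'model' only records statistical word-following patterns;
    it stores no facts and cannot verify anything it later emits.
    """
    words = corpus.split()
    table = {}
    for current, following in zip(words, words[1:]):
        table.setdefault(current, []).append(following)
    return table

def generate(table, start, length=8, seed=0):
    """Sample a word sequence: plausible-sounding, never fact-checked."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# Two true training sentences can recombine into a fluent sentence
# that was never in the training data and may be false.
corpus = ("the telescope took the first picture of a planet "
          "the telescope observed a distant galaxy")
model = train(corpus)
print(generate(model, "the"))
```

Because generation is just pattern-matching over the training data, the output reads naturally even when the facts it asserts were stitched together from unrelated sources, which is exactly how a chatbot can confidently attribute a discovery to the wrong telescope.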
For now, Google is trying to do some damage control, saying that the incident will help it improve the project. In a statement, a Google spokesperson said: “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program. We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”
While misidentifying the telescope that took a specific photograph may seem harmless on the surface, what happens when Google’s Bard gives people inaccurate information about rendering first aid, or provides faulty directions for home improvement projects that could put them in danger?
The problem is that many of the answers that these chatbots provide sound so convincing that it is hard for people to tell when they are inaccurate. The appeal of these AI-driven searches is their ability to provide results to queries in plain language instead of presenting a list of links, helping connect people with answers faster.
However, in addition to concerns about accuracy, these systems have been criticized for inherent biases in their algorithms that can skew their results. Deployed at mass scale, their potential to spread false information is staggering. The tech news site CNET recently had to take down 77 articles written with an AI tool after they were found to contain major factual inaccuracies and plagiarism. AI chatbots are designed to essentially make things up to fill in gaps, and if they are widely adopted, it may soon be harder than ever to tell fact from fiction online.