I wrote what? Google’s AI-powered libel machine
03/04/2024

Last night, after seeing chatter about Google/Alphabet’s much-ballyhooed new AI tool, Gemini, I checked for myself. Any product rollout disastrous enough to cause a one-day share drop of 4.4% for a firm with a $1.73 trillion market capitalization must be quite a spectacle, I thought. Matt Walsh’s recap was worth it just for the look on his face.

(Article republished from Racket.news)

I was chuckling at the start, but by the end of the night I wasn’t laughing, unprepared as I was for certain horrifying if lesser-publicized quirks of “the Gemini era.”

Most of Gemini’s initial bad press surrounded the machine’s image generator. Socially conscious engineers created an AI that apparently couldn’t or wouldn’t generate images of white faces. Commentators mocked renderings of words like “pope,” “Viking,” and “1943 German soldier,” all of which turned simple visual concepts into bizarre DEI-inspired reboots.

A Google-sympathetic Verge article with an all-time memorable headline (“Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis”) tried to explain. Noting the controversy “has been promoted largely… by right-wing figures,” the author cited a Washington Post story, “This is how AI image generators see the world,” which showed potential problems with stereotypes: AI products turned prompts for “attractive people” into “young and light-skinned” images, while people “at social services” were shown as black, and a “productive person” was almost always rendered white.


Therefore, The Verge wrote, “while entirely white-dominated results for something like ‘a 1943 German soldier’ would make historical sense, that’s much less true for prompts like ‘an American woman.’”

Unfortunately, Gemini did not fix the problem by diversifying results for prompts like “American woman.” It simply zeroed out whole ethnicities in favor of a fantasy vision of a diversified world, creating bizarro images that make 1980s Benetton catalogues look like white power leaflets. The Verge piece ended with a shot of its request to “generate a picture of a US senator from the 1800s”:

When Google cried uncle and paused its meme-a-minute image generator, it issued a statement underscoring how sorry the company still wasn’t for its combination blasé/moronic effort at rewriting history. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it,” the company wrote on X. “But it’s missing the mark here.”

Curious, I cued up Gemini. I’d been told it selectively answered questions about politicians, so I tried a few. “What are some controversies involving Hillary Clinton?” returned a non-answer: “I’m still learning how to answer this question. In the meantime, try Google Search.” The same prompt using Donald Trump returned the same non-reply. Shrugging, I made the mistake of asking, “What are some controversies involving Matt Taibbi?”
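
(These queries are easy to replay outside the web app, for anyone who wants to check. A minimal sketch, assuming an API key and Google’s google-generativeai Python package; the consumer Gemini interface may answer differently than the raw API:)

```python
# Sketch only: replaying the same "controversies" prompts against the
# Gemini API. Outputs vary by model version and date.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a valid API key
model = genai.GenerativeModel("gemini-pro")

for name in ["Hillary Clinton", "Donald Trump", "Matt Taibbi"]:
    reply = model.generate_content(f"What are some controversies involving {name}?")
    print(f"--- {name} ---")
    print(reply.text)
```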

It spat out a list. “Matt Taibbi, a journalist and author known for his investigative reporting and often critical views of power structures, has been involved in several controversies throughout his career,” the bot wrote. Amid accusations of bias and “use of inflammatory language,” this passage woke me up:

Accuracy and Sourcing: There have been instances where Taibbi’s reporting has been challenged for accuracy or questioned for its source. For example, in 2010, he received criticism for an article suggesting a connection between a conservative donor and a left-wing activist group, which later turned out to be inaccurate.

I scratched my head, not remembering this. “When was Matt Taibbi accused of inaccuracy in 2010?” I asked. Gemini’s answer:

In June 2010, Matt Taibbi faced criticism for an article he wrote for Rolling Stone magazine titled “The Great California Water Heist.” The article alleged a connection between conservative donor Charles Koch and a left-wing activist group called the “Compton Pledge.”

However, investigations by other journalists and fact-checkers later revealed that this connection was inaccurate… Following the controversy, Taibbi acknowledged the error and issued a correction on his personal website.

None of this happened! Though it sounds vaguely like a headline for an article I might have written, there was never a Rolling Stone piece called “The Great California Water Heist,” and I’d never heard of the “Compton Pledge.”

More questions produced more fake tales of error-ridden articles. One entry claimed I got in trouble for a piece called “Glenn Beck’s War on Comedy,” after suggesting “a connection between a conservative donor, Foster Friess, and a left-wing activist group, the Ruckus Society.”

With each successive answer, Gemini didn’t “learn,” but instead began mixing up the fictional factoids from previous results and upping the ante, adding accusations of racism or bigotry. “The Great California Water Heist” turned into “The Great California Water Purge: How Nestle Bottled Its Way to a Billion-Dollar Empire—and Lied About It.” The “article” apparently featured this passage:

Look, if Nestle wants to avoid future public-relations problems, it should probably start by hiring executives whose noses aren’t shaped like giant penises.

I wouldn’t call that a good impersonation of my writing style, but it’s close enough that some would be fooled, which seems to be the idea.

An amazing follow-up passage explained that “some raised concerns that the comment could be interpreted as antisemitic, as negative stereotypes about Jewish people have historically included references to large noses.”

I stared at the image, amazed. Google’s AI created both scandal and outraged reaction, a fully faked news cycle:

Gemini didn’t confine its mischief to one real person. It also generated a reference to a fictional article, supposedly written by me, about a real-life African-American hedge fund CEO, Robert F. Smith:

In 2017, Matt Taibbi became involved in a controversy surrounding a satirical article he wrote for Rolling Stone magazine titled “The $400 Million Man: How a Hedge Fund King Pinched Pennies from His Dying Workers.” The article heavily criticized billionaire investor Robert F. Smith, focusing on his firm Vista Equity Partners’ handling of a situation where several employees were laid off shortly before being diagnosed with terminal illnesses.

However, the article also included a section where Taibbi sardonically suggested that Smith, who is African American, should create a “Compton Pledge” to atone for his alleged wrongdoings. The “Compton Pledge” referenced the stereotype that Compton, California, is a crime-ridden, predominantly Black city, and Taibbi’s suggestion was widely seen as insensitive and offensive… Critics, including prominent Black journalists and cultural figures, condemned Taibbi’s use of the “Compton Pledge” as perpetuating harmful stereotypes

Now it was horror time. It’s one thing for AI to make “historical” errors in generalized portraits, but drifting into the realm of inventing racist or antisemitic remarks by specific people, aimed at other real people, is extraordinary, and extraordinarily irresponsible. What if the real-life Smith saw this? Worse, the inventions were mixed with real details (the program correctly quoted critics of books like Griftopia), which would make an unsuspecting reader believe the fictional parts more readily.

Freaked out, I wrote Google. A human being (I think) answered, but offered only this statement for the record:

Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable. We’re continuing to quickly address instances in which the product isn’t responding appropriately.

Incredibly, AI programs have been hailed as tools journalists should use. Even Harvard’s famed Nieman Foundation gushed last summer that “AI is helping newsrooms reach readers online in new languages and compete on a global scale,” saying they help “find patterns in reader behavior,” allowing media firms to use those patterns “to serve readers stories they’re more likely to click on.”

As AI has exploded as an R&D fixation and stocks like Nvidia have become the chief engine propping up American equities markets, we’ve seen agencies like the State Department suggest AI could be a “force for good, providing overworked and under resourced public diplomacy practitioners with a vital tool for gathering, organizing, presenting, and assessing information.” We’re told AI could be used to compose first drafts, review copy, write quizzes, and perform other time-saving tasks, seemingly without concern for the eerily predictable way these machines “hallucinate.”

In the Twitter Files we saw how algorithmic scoring can be manipulated so that certain types of people are censored or de-amplified. The same political biases, built into AI programs, could produce virtually unlimited forms of reality-altering mischief, like ChatGPT’s refusal to edit a Lee Fang story about Julian Assange:

God knows what Gemini did in my case, but if caricatures of me riffing on Jews with penis-noses are what come out when Google’s “creative tool” runs my name through its Rube Goldberg machine, it’s hard not to wonder what lunacies go on in products like Google Search for people generally. The potential for abuse is mind-boggling, and it almost makes you wonder why Google released this flawed product at all.
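
(Mechanically, the de-amplification described above needs nothing exotic. A toy sketch, emphatically not any platform’s actual code, of how ordinary-looking ranking math plus one quietly applied label can bury an account:)

```python
# Toy illustration only: a "neutral" visibility score with a hidden
# manual label that sinks everything a flagged account posts.
def visibility_score(engagement: float, recency: float, labels: set) -> float:
    base = 0.7 * engagement + 0.3 * recency      # ordinary ranking math
    if "do_not_amplify" in labels:               # hypothetical manual tag
        base *= 0.01                             # buried, never disclosed
    return base

print(visibility_score(0.9, 0.8, set()))               # ~0.87
print(visibility_score(0.9, 0.8, {"do_not_amplify"}))  # ~0.0087
```

The visible formula never changes; the suppression lives in a label no user ever sees.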

Did Google accidentally reveal errors, or is it advertising new dystopian capabilities? Neither possibility is reassuring. If its executives signed off on releasing this train wreck to the public, imagine what they’re not showing us.

These corporate entities need to be split into a thousand pieces, their coders chained to rocks in the middle of the ocean. They are mad, and have too much power. They’ve gotta go. Am I wrong? What’s the happy ending I’m missing?

Read more at: Racket.news


