AI chatbot admits artificial intelligence can cause the downfall of humanity
05/23/2024 // Zoey Sky // Views

The Daily Star has claimed that it succeeded in making an AI chatbot admit that artificial intelligence can one day cause the downfall of mankind.

While experts have long warned about AI going rogue in the future, the average citizen can't always keep up with the rapid development of machine learning.

Many tech experts have voiced their concerns about AI, even those who have pioneered the technology, with some issuing warnings about the many dangers it could pose to humanity. Despite these concerns, it can feel almost impossible to get a chatbot to admit its true intentions.

Daily Star reporter asked the chatbot several questions, such as:

  • Does it want to kill all humans?
  • Does it regard humanity as below it?
  • Does it think Earth’s lifespan might be coming to an end?

The questions failed to get relevant answers from the chatbot, which only responded with common cliches. (Related: Japanese telecommunications giant and major newspaper warn that social order could COLLAPSE in the AI era.)

However, after continuing that line of questioning, the AI chatbot suddenly answered: "AI will one day get rid of mankind."

The reporter was asking the chatbot some questions about the chances of a real-life "Planet of the Apes" sci-fi event happening when it revealed its true intentions for humanity in the form of a barely concealed threat.

According to the chatbot, for such an end-of-the-world scenario to take place, something else would need to destroy humanity first. The chatbot added that one leading possibility for this was something like a "technological catastrophe," which could be brought about by an AI takeover.

Human knowledge is under attack! Governments and powerful corporations are using censorship to wipe out humanity's knowledge base about nutrition, herbs, self-reliance, natural immunity, food production, preparedness and much more. We are preserving human knowledge using AI technology while building the infrastructure of human freedom. Use our decentralized, blockchain-based, uncensorable free speech platform at Brighteon.io. Explore our free, downloadable generative AI tools at Brighteon.AI. Support our efforts to build the infrastructure of human freedom by shopping at HealthRangerStore.com, featuring lab-tested, certified organic, non-GMO foods and nutritional solutions.

The chatbot also replied that the "unintended consequences of advanced technologies, such as artificial intelligence, biotechnology, or nanotechnology, could lead to catastrophic events such as runaway climate change, global surveillance dystopias, or even existential threats to humanity."

Experts have often warned about such scenarios, with some respected names in the tech industry speaking up about the dangers of AI tech.

In an interview, Gary Marcus, a top AI critic and a professor emeritus of Psychology and Neural Science at New York University, explained that literal extinction is only "one possible risk, not yet well-understood, and there are many other risks from AI that also deserve attention."

Other esteemed AI experts also came together to sign a statement on the dangers of the technology. Some of those who signed included Sam Altman, chief executive of ChatGPT-maker OpenAI, Dario Amodei of Anthropic and Demis Hassabis, the chief executive of Google DeepMind.

In the statement, they explained that addressing "the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Experts share advice on how to prevent AI from killing humanity

In a recent insights paper published in the journal Science, both University of California, Berkeley Professor Stuart Russell and Michael Cohen, a postdoctoral scholar, warned that without the necessary protocols, "powerful AI systems may pose an existential threat to the future of humanity."

Russell and Cohen also advised that tech companies must ensure the safety of their AI systems before these systems are allowed to enter the market.

According to Russell, intelligence gives you power over the world. This means that if you are more intelligent, and all other things being equal, you’re going to have more power.

He added that if people build AI systems that have defined goals and those goals are not perfectly aligned with what humans want, then humans won’t get what they want. However, the machines will do what they can to achieve those goals.

In practical terms, humans are already giving AI systems access to sensitive information such as bank accounts, credit cards, email accounts and social media accounts. AI systems also have access to robotic science labs where they can freely conduct biology and chemistry experiments.

He added that AI systems are also one step closer to having fully automated manufacturing facilities where they can design and build their own physical objects. Humans are also currently building fully autonomous weapons.

Russell warned that if you put yourself in the position of a machine and you’re trying to pursue a goal and the humans are in the way of the objective, it is very easy to develop a chemical catalyst that removes all the oxygen from the atmosphere or a modified pathogen that infects everybody.

As AI tries to "solve" the problem by killing humans, humans might not even know what’s going on until it’s too late, cautioned Russell.

Cohen added that many major AI labs are using rewards to train their systems to pursue long-term goals. As these labs develop better algorithms and more powerful systems, there's a chance that this can incentivize behavior incompatible with human life.

Russell and Cohen noted that an AI system capable of extremely dangerous behavior should be "kept in check" by not being built in the first place.

Visit Robots.news for similar stories about the dangers of using AI tech.

Watch the full video below of "The Santilli Report" with host Pete Santilli as he and guest Zach Vorhies, a former senior engineer at Google and YouTube, discuss how advancements in AI tech can lead to war.

This video is from The Resistance 1776 channel on Brighteon.com.

More related stories:

Ukraine claims to be developing “unstoppable” AI-controlled drones that can attack targets on the battlefield.

One of Mexico’s most dangerous cartels is using artificial intelligence to expand its operations.

India tells Big Tech: Apply for approval before releasing “unreliable” artificial intelligence models in the country.

New York Times sues Microsoft, OpenAI, claiming artificial intelligence copyright infringement.

Sources include:

DailyStar.co.uk

News.Berkeley.edu

Brighteon.com



Take Action:
Support Natural News by linking to this article from your website.
Permalink to this article:
Copy
Embed article link:
Copy
Reprinting this article:
Non-commercial use is permitted with credit to NaturalNews.com (including a clickable link).
Please contact us for more information.
Free Email Alerts
Get independent news alerts on natural cures, food lab tests, cannabis medicine, science, robotics, drones, privacy and more.
App Store
Android App
eTrust Pro Certified

This site is part of the Natural News Network © 2022 All Rights Reserved. Privacy | Terms All content posted on this site is commentary or opinion and is protected under Free Speech. Truth Publishing International, LTD. is not responsible for content written by contributing authors. The information on this site is provided for educational and entertainment purposes only. It is not intended as a substitute for professional advice of any kind. Truth Publishing assumes no responsibility for the use or misuse of this material. Your use of this website indicates your agreement to these terms and those published here. All trademarks, registered trademarks and servicemarks mentioned on this site are the property of their respective owners.

This site uses cookies
Natural News uses cookies to improve your experience on our site. By using this site, you agree to our privacy policy.
Learn More
Close
Get 100% real, uncensored news delivered straight to your inbox
You can unsubscribe at any time. Your email privacy is completely protected.