Australian government report warns against the potential THREATS of AI
01/30/2024 // Laura Harris // Views

A new report by the Australian government warns the people against the potential threats of artificial intelligence (AI) and urges them to mitigate these risks.

The report released by the Australian Signals Directorate (ASD) and produced by the Australian Cyber Security Centre (ACSC) and partners stated that the government, academia and industry play an important role in managing AI technology through effective regulation and governance. It cited several potential threats that should be addressed to ensure secure AI engagement, despite the technology's capability to enhance efficiency and reduce costs.

For instance, it warned of "data poisoning" – which involves manipulating the training data of AI to teach the model incorrect patterns. This manipulation can result in the misclassification of data or the production of biased, inaccurate or malicious outputs. The impact of data poisoning could negatively affect any organizational function reliant on the integrity of AI system outputs.

"An AI model's training data could be manipulated by inserting new data or modifying existing data, or the training data could be taken from a source that was poisoned to begin with. Data poisoning may also occur in the model’s fine-tuning process," the report stated. (Related: SMASHING the AI threat matrix – How human resistance defeats Skynet.)

Manipulation attacks, such as prompt injection and implementation of malicious instructions or hidden commands into an AI system, "can evade content filters and other safeguards restricting the AI system’s functionality."

Generative AI systems like chatbots can result in false information due to processing incomplete or incorrect patterns. Organizations relying on the accuracy of generative AI outputs need to implement appropriate mitigations to avoid negative impacts. Organizations are also warned about the information shared with generative AI systems, as it can influence outputs and pose privacy and intellectual property concerns.

The report underscored the risk of model-stealing attacks, where malicious actors use AI outputs to create replicas, allowing competitors to benefit from the development costs without sharing in the initial investment.

Developers usually set aside the consequences of rapid AI development

Entrepreneur Ian Hogarth, a significant investor in the AI sector, warned the public in an opinion piece about the reckless development of AI that could potentially lead to the creation of "a God-like AI capable of destroying humanity."

Hogarth highlighted the imminent risk as AI systems edge closer to achieving artificial general intelligence (AGI), a state where machines can comprehend and learn anything humans can. The current AI technology has not reached this level yet, but the rapid growth of the industry aims to achieve AGI. However, achieving this goal comes with very high and dangerous stakes.

"Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press. The important question has always been how far away in the future this development might be," Hogarth wrote.

He claimed that AI researchers are not sufficiently focusing on the potential dangers of AGI or communicating these risks to the public. Hogarth recounted a conversation with a researcher who, while grappling with the responsibility, seemed swept along by the rapid progress in the field.

The investor acknowledged his own role in AI development, heavily bankrolling over 50 startups dedicated to AI and machine learning. He emphasized the lack of oversight and understanding as companies race toward AGI without a clear strategy for ensuring its safe implementation.

Referring to AGI as "God-like AI," Hogarth envisioned a superintelligent computer capable of autonomous learning and development, understanding its environment without supervision, and potentially transforming the world with unforeseeable consequences.

This, in turn, along with the report of the ASD, claims that the discussion of the threats aims to help AI stakeholders engage with the technology securely and not stop the public from using AI.

Follow for more stories about AI and its dangers.

Watch this video from InfoWars discussing how new AI systems are being programmed to end all of humanity.

This video is from the InfoWars channel on

More related stories:

AI is currently the greatest threat to humanity, warns investigative reporter Millie Weaver.

Entrepreneur Ian Hogarth warns reckless development of AI could lead to the destruction of humanity.

Save My Freedom with Michele Swinick: Use of AI will lead to the END OF HUMANITY, Jeff Dornik warns – Brighteon.TV.

Sources include:

Take Action:
Support Natural News by linking to this article from your website.
Permalink to this article:
Embed article link:
Reprinting this article:
Non-commercial use is permitted with credit to (including a clickable link).
Please contact us for more information.
Free Email Alerts
Get independent news alerts on natural cures, food lab tests, cannabis medicine, science, robotics, drones, privacy and more.
App Store
Android App
eTrust Pro Certified

This site is part of the Natural News Network © 2022 All Rights Reserved. Privacy | Terms All content posted on this site is commentary or opinion and is protected under Free Speech. Truth Publishing International, LTD. is not responsible for content written by contributing authors. The information on this site is provided for educational and entertainment purposes only. It is not intended as a substitute for professional advice of any kind. Truth Publishing assumes no responsibility for the use or misuse of this material. Your use of this website indicates your agreement to these terms and those published here. All trademarks, registered trademarks and servicemarks mentioned on this site are the property of their respective owners.

This site uses cookies
Natural News uses cookies to improve your experience on our site. By using this site, you agree to our privacy policy.
Learn More
Get 100% real, uncensored news delivered straight to your inbox
You can unsubscribe at any time. Your email privacy is completely protected.