Expert warns that AI brains are not infallible and have even been found to “make bad decisions” that can harm humans
10/22/2017 // Cassie B.

We’re learning more every day about the price to be paid for all the conveniences modern technology brings us, and while some of the potential pitfalls of artificial intelligence (AI) are rather obvious, others are a bit more insidious.

New York University Research Professor Kate Crawford and a group of colleagues are so concerned about the social implications of AI that they’ve established The AI Now Institute to study it.

In a recent piece for the Wall Street Journal, Crawford expressed her concerns about the way that AI systems base their learning on social data reflecting human history, which is full of prejudices and biases. Making matters worse is the fact that algorithms can unwittingly boost such biases, which is something that has already been demonstrated in studies.

In some of its applications, the ramifications could be significant. She wrote: “It’s a minor issue when it comes to targeted Instagram advertising but a far more serious one if AI is deciding who gets a job, what political news you read or who gets out of jail.”

For example, last year, ProPublica reported that a widely used criminal risk-assessment algorithm was skewed against African Americans. Racial disparities in a formula used to determine a person’s risk of re-offending made the system more likely to flag African American defendants as potential future criminals while incorrectly identifying white defendants as having a lower risk.

When the AI was tasked with analyzing a group of 7,000 people who were arrested in Florida during 2013 and 2014 and determining who was likely to go on to re-offend within two years, its record was shockingly poor; only one in five of those it predicted would commit violent crimes again actually did so.
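The “one in five” figure describes the system’s precision on violent re-offending: of everyone it flagged, only 20 percent actually went on to commit another violent crime. A quick sketch of that arithmetic (the counts below are hypothetical round numbers chosen for illustration; the article reports only the ratio, not the raw figures):

```python
# Illustrative precision calculation behind the "one in five" claim.
# These counts are hypothetical -- the source gives only the ratio.
flagged_violent = 100      # people the system predicted would re-offend violently
actually_reoffended = 20   # one in five of those predictions came true

precision = actually_reoffended / flagged_violent
print(f"precision: {precision:.0%}")  # precision: 20%
```

In other words, four out of every five people the algorithm labeled as future violent criminals never committed such a crime in the two-year window.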


This has prompted worries that, if nothing is done now, we could be headed for a “toxic” future in which machines make poor decisions in place of humans.

Are AI systems only as good as the humans programming them?

AI systems use neural networks that attempt to simulate the way the human brain learns new things. They can be trained to find patterns in speech, text and images. When the information they learn those patterns from contains human flaws and biases, such prejudices can become exaggerated, because the system treats them as meaningful signals and gives them undue weight in decision-making.
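The amplification effect can be shown without any neural network at all. In the minimal sketch below (entirely hypothetical data), a naive classifier learns the majority label for each group from historically biased records. A 70/30 skew in the training labels becomes a 100/0 skew in the predictions: the model doesn’t just reproduce the bias, it hardens it.

```python
# Minimal sketch of bias amplification, using hypothetical toy data.
# A "majority label per group" rule stands in for a learned model.
from collections import Counter, defaultdict

# Toy training records: (group, label). The labels encode a biased
# history -- group "A" was flagged "high risk" far more often than "B".
train = ([("A", "high")] * 70 + [("A", "low")] * 30 +
         [("B", "high")] * 30 + [("B", "low")] * 70)

# "Training": tally labels per group and keep the majority label.
counts = defaultdict(Counter)
for group, label in train:
    counts[group][label] += 1
majority = {g: c.most_common(1)[0][0] for g, c in counts.items()}

# "Prediction": every member of a group now gets its majority label,
# so a 70% historical skew becomes a 100% prediction skew.
print(majority["A"])  # high
print(majority["B"])  # low
```

A real risk-assessment model is far more sophisticated than this majority rule, but the underlying mechanism is the same: patterns in the training data, including prejudiced ones, are what the system optimizes for.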

It’s a legitimate concern at a time when Google’s machine-learning AI has just managed to replicate itself for the first time and machines move closer to being able to create complex AI without any input from humans. Google’s AutoML, an AI designed to help the company create new AIs, has now outdone human engineers by producing machine-learning software that is more powerful and efficient than anything made by humans.

Helpful today, harmful tomorrow?

Google’s AI has already learned to become aggressive. How far off could we be from AI technology that uses its power for evil rather than good? A group of experts at the International Joint Conference on Artificial Intelligence in Argentina warned that if AI development continues unabated, autonomous weapons that operate without human input could eventually carry out ethnic cleansing campaigns, mass genocide, and other atrocities. They said such weapons could be feasible within years, not decades.

AI helps us in many ways – it’s an important part of many fraud detection and security measures, for example – but it’s entirely possible that something that was designed to help humans could end up causing us a great degree of harm.

Sources include:

DailyMail.co.uk

DailyMail.co.uk

NaturalNews.com

NewsTarget.com
