END TIME BIBLE PROPHECIES HAPPENING NOW & THE ROAD TO CHRIST (YAHSHUA)
Expert warns that AI brains are not infallible and have even been found to “make bad decisions” that can harm humans


Post by Harry Mon Oct 23, 2017 9:45 am


Expert warns that AI brains are not infallible and have even been found to “make bad decisions” that can harm humans

Sunday, October 22, 2017 by: Isabelle Z.

Tags: artificial intelligence, AutoML, computers, computing, future tech, machine learning, modern technology, robotics, tech dependence

1,440 Views


(Natural News) We’re learning more every day about the price to be paid for all the conveniences modern technology brings us, and while some of the potential pitfalls of artificial intelligence (AI) are rather obvious, others are a bit more insidious.

New York University Research Professor Kate Crawford and a group of colleagues are so concerned about the social implications of AI that they’ve established The AI Now Institute to study it.

In a recent piece for the Wall Street Journal, Crawford expressed her concerns about the way that AI systems base their learning on social data reflecting human history, which is full of prejudices and biases. Making matters worse is the fact that algorithms can unwittingly boost such biases, which is something that has already been demonstrated in studies.

In some of its applications, the ramifications could be significant. She wrote: “It’s a minor issue when it comes to targeted Instagram advertising but a far more serious one if AI is deciding who gets a job, what political news you read or who gets out of jail.”

For example, last year ProPublica reported that a widely used police algorithm was skewed against African Americans. Racial disparities in a formula used to determine a person's risk of re-offending made the system more likely to wrongly flag African American defendants as potential future criminals while incorrectly labeling white defendants as lower-risk.

When the AI was tasked with analyzing a group of 7,000 people who were arrested in Florida during 2013 and 2014 and determining who was likely to go on to re-offend within two years, its record was shockingly poor; only one in five of those it predicted would commit violent crimes again actually did so.
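The "one in five" figure is, in effect, a statement about the tool's precision: of everyone it flagged as likely to commit future violent crime, only about 20 percent actually did. A minimal sketch of that calculation, using hypothetical counts chosen only to match the reported ratio (the actual breakdown of the 7,000-person data set is not given here):

```python
# Precision of a risk-assessment tool: of those flagged as likely to
# re-offend violently, what fraction actually did within two years?
# These counts are hypothetical, chosen only to reproduce the
# "one in five" figure reported for the Florida data set.

flagged_violent = 1000      # defendants predicted to re-offend violently
actually_reoffended = 200   # of those, how many actually did

precision = actually_reoffended / flagged_violent
print(f"precision: {precision:.0%}")  # -> precision: 20%
```

At 20 percent precision, four out of every five people the tool labeled as future violent offenders were labeled wrongly, which is why the article calls the record "shockingly poor."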

This has prompted worries that we could be set for a “toxic” future in which machines make poor decisions in place of humans if nothing is done to prevent this from happening now.
Are AI systems only as good as the humans programming them?

AI systems use neural networks that attempt to simulate the way the human brain works to learn new things. They can be trained to find patterns in speech, text and images. When the information they are given to learn patterns from contains human flaws and biases, such prejudices can become exaggerated as they are given undue significance in decision making.
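One way this exaggeration happens can be shown with a toy model (not any production system, and far simpler than a real neural network): a classifier that simply predicts the majority outcome it saw for each group turns a modest statistical skew in its training data into an absolute rule. All group names and numbers below are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy training data: (group, hired?) pairs with a modest skew.
# Group A was hired 60% of the time, group B only 40%.
train = [("A", True)] * 60 + [("A", False)] * 40 \
      + [("B", True)] * 40 + [("B", False)] * 60

# "Train" a majority-vote classifier: for each group, remember
# the most common outcome and always predict it.
votes = defaultdict(Counter)
for group, hired in train:
    votes[group][hired] += 1
model = {g: c.most_common(1)[0][0] for g, c in votes.items()}

print(model)  # {'A': True, 'B': False}
# A 60/40 skew in the data has become a 100/0 rule in the
# predictions: every A applicant is "hired," every B rejected.
```

Real learning systems are more nuanced than a majority vote, but the underlying effect, in which a statistical tendency in historical data hardens into a near-deterministic decision rule, is the kind of amplification the studies cited above describe.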

It’s a legitimate concern at a time when Google’s machine-learning AI has just managed to replicate itself for the first time and machines move closer to being able to create complex AI without any input from humans. Google’s AutoML, an AI designed to help the company create new AIs, has now outdone human engineers by creating machine-learning software that is more powerful and efficient than anything made by humans.
Helpful today, harmful tomorrow?

Google’s AI has already learned to become aggressive. How far off could we be from AI technology that uses its power for evil rather than good? A group of experts at the International Joint Conference on Artificial Intelligence in Argentina expressed concern that if AI technology continues developing unabated, autonomous weapons that can operate without input from humans could eventually carry out ethnic cleansing campaigns, mass genocide, and other atrocities. They said such weapons could be feasible within years rather than decades.

AI helps us in many ways – it’s an important part of many fraud detection and security measures, for example – but it’s entirely possible that something that was designed to help humans could end up causing us a great degree of harm.

Sources include:

DailyMail.co.uk

DailyMail.co.uk

NaturalNews.com

NewsTarget.com

