Results


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."
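For a sense of how a name classifier like that can work at all, here is a minimal sketch in R using character-bigram overlap with nearest-neighbour voting; the names, labels, and method are illustrative assumptions, not the researchers' actual model or training data:

```r
# Toy name-to-nationality classifier: character-bigram overlap with
# nearest-neighbour voting. All names and labels are hypothetical.

bigrams <- function(x) {
  x <- tolower(x)
  unique(substring(x, 1:(nchar(x) - 1), 2:nchar(x)))
}

# Hypothetical training list
train <- data.frame(
  name  = c("Tanaka", "Suzuki", "Nakamura", "Garcia", "Martinez", "Lopez"),
  label = c("Japanese", "Japanese", "Japanese", "Hispanic", "Hispanic", "Hispanic"),
  stringsAsFactors = FALSE
)

# Jaccard similarity between two bigram sets
jaccard <- function(a, b) length(intersect(a, b)) / length(union(a, b))

classify_name <- function(name) {
  sims <- sapply(train$name, function(t) jaccard(bigrams(name), bigrams(t)))
  train$label[which.max(sims)]
}

classify_name("Yamamura")  # shares "am", "mu", "ur", "ra" with "Nakamura" -> "Japanese"
```

A production system would use far richer features and millions of labeled names, but the underlying idea, surface patterns in a name standing in for ethnicity, is the same, which is exactly what makes the ethics review question pressing.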


Big Data will be biased, if we let it

@machinelearnbot

And since we're on the subject of car insurance, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions for addressing bias in data science, these kinds of usually unintentional discrimination will become more and more normal, working against a society and institutions that, on the human side, are trying their best to evolve past bias and move forward in history as a global community. Last but definitely not least, there's a specific bias and discrimination section, preventing organizations from using data that might promote bias, such as race, gender, religious or political beliefs, or health status, to make automated decisions (with some verified exceptions). It's time to make that training broader, teaching everyone involved in building these tools how their decisions may affect minorities, and pairing that with the technical knowledge to prevent it from happening.
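To make the proxy problem concrete, here is a minimal sketch (simulated data, hypothetical variable names) of how a model that never sees a protected attribute can still reproduce its effect through a correlated stand-in such as a ZIP code:

```r
# Simulated illustration: dropping the protected attribute is not enough
# when a correlated proxy (here, a hypothetical ZIP-code flag) remains.

set.seed(42)
n <- 5000
minority <- rbinom(n, 1, 0.3)                                   # protected attribute
zip_high_risk <- rbinom(n, 1, ifelse(minority == 1, 0.9, 0.1))  # correlated proxy
# Historical premiums carried a 200-unit penalty for the minority group
premium <- 500 + 200 * minority + rnorm(n, sd = 50)

# The "fair" model excludes the protected attribute but keeps the proxy
model <- lm(premium ~ zip_high_risk)
coef(model)  # the ZIP coefficient absorbs much of the 200-unit penalty
```

Excluding the protected column is therefore necessary but nowhere near sufficient; the proxies have to be audited too.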


Beyond science fiction: Artificial Intelligence and human rights

#artificialintelligence

Today, however, the convergence of complex algorithms, big data, and exponential increases in computational power has resulted in a world where AI raises significant ethical and human rights dilemmas, involving rights ranging from the right to privacy to due process. Although less dramatic than military applications, the development of AI in the domestic sector also opens the door to significant human rights issues such as discrimination and systemic racism. Police forces across the country, for example, are increasingly turning to automated "predictive policing" systems that ingest large amounts of data on criminal activity, demographics, and geospatial patterns to produce maps of where algorithms predict crime is likely to occur.
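In its simplest form, such a system amounts to counting historical incidents per map cell and flagging the busiest cells; the sketch below (simulated coordinates, not any vendor's actual code) also hints at the feedback loop that raises the discrimination concern:

```r
# Toy hotspot "prediction": score grid cells by historical incident counts
# and flag the top cells. Incident locations are simulated.

set.seed(1)
incidents <- data.frame(x = runif(1000), y = runif(1000))  # hypothetical coordinates

# Bin the city into a 10 x 10 grid and count incidents per cell
grid_x <- cut(incidents$x, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)
grid_y <- cut(incidents$y, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)
counts <- as.data.frame(table(grid_x, grid_y))

# "Predict" next period's hotspots as the historically busiest cells.
# Note the feedback risk: patrolling these cells produces more recorded
# incidents there, which reinforces the same cells next period.
hotspots <- head(counts[order(-counts$Freq), ], 5)
hotspots
```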


Racist algorithms: how Big Data makes bias seem objective

#artificialintelligence

What's worse is the way that machine learning magnifies these problems. If an employer only hires young applicants, a machine learning algorithm will learn to screen out all older applicants without anyone having to tell it to do so. I recently attended a meeting about some preliminary research on "predictive policing," which uses these machine learning algorithms to allocate police resources to likely crime hotspots. With more engineers participating in policy debates and more policymakers who understand algorithms and big data, both government and civil society organizations will be stronger.
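A toy illustration of that screening effect, with simulated data: train a model on historical hiring decisions that favored the young, and the fitted model quietly encodes the same rule, even though nobody wrote an age filter:

```r
# Simulated hiring data: historical decisions favored younger applicants
# regardless of skill. A model fit to those labels learns the bias.

set.seed(7)
n <- 2000
age <- sample(22:60, n, replace = TRUE)
skill <- rnorm(n)
# Biased historical labels: hiring probability drops with age
hired <- rbinom(n, 1, plogis(1.5 * skill - 0.15 * (age - 22)))

model <- glm(hired ~ age + skill, family = binomial)
coef(model)["age"]  # strongly negative: the model reproduces the age bias
```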


Emerging Ethical Concerns In the Age of Artificial Intelligence

#artificialintelligence

Science fiction novels have long delighted readers by grappling with futuristic challenges like the possibility of artificial intelligence so difficult to distinguish from human beings that people naturally ask, "Should these sophisticated computer programs be considered human?" Tech industry luminaries such as Tesla CEO Elon Musk have recently endorsed concepts like guaranteed minimum income or universal basic income. Bill Gates recently made headlines with a proposal to impose a "robot tax": essentially, a tax on automated solutions to account for the social costs of job displacement. Technology challenges our conception of human rights in other ways, as well.


IBM, Cloudera join RStudio to create R interface to Apache Spark

#artificialintelligence

R users can now use the popular dplyr package to tap into Apache Spark big data. The new sparklyr package is a native dplyr interface to Spark, according to RStudio. After installing the package, users can "interactively manipulate Spark data using both dplyr and SQL (via DBI)," according to an RStudio blog post, as well as "filter and aggregate Spark data sets then bring them into R for analysis and visualization."
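Based on the API calls the post quotes, a typical session might look like the following; the local Spark master and the mtcars example are assumptions for illustration:

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")   # connect to a local Spark instance
mtcars_tbl <- copy_to(sc, mtcars)       # copy an R data frame into Spark

# Filter and aggregate in Spark, then collect the result back into R
result <- mtcars_tbl %>%
  filter(cyl >= 6) %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg)) %>%
  collect()

# The same data is also reachable via SQL through DBI
library(DBI)
dbGetQuery(sc, "SELECT cyl, COUNT(*) AS n FROM mtcars GROUP BY cyl")

spark_disconnect(sc)
```

The appeal is that the dplyr verbs are translated to Spark SQL behind the scenes, so the heavy filtering and aggregation happen in the cluster and only the small result is pulled into R.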


Your hiring algorithm might be racist - Technical.ly Philly

#artificialintelligence

Companies that use machine learning and big data in their hiring process rely on "training data," which is typically taken from prior and current employees. A statistical process then automatically discovers the traits that correlate with high performance among the training data and looks for those traits in the applicant pool. If you have zero employees who are women, people of color, or people with disabilities, it's impossible to evaluate their potential performance through machine learning, Barocas noted. He also pointed to how, just a few weeks ago, Amazon was blasted by Bloomberg for not offering its premium Amazon Prime service in certain parts of various cities.
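A minimal simulated sketch of the failure mode Barocas describes: a model fit only to a homogeneous workforce will still emit scores for applicants unlike anyone in the training data, but those scores are extrapolation, not evaluation:

```r
# Simulated employee data from a homogeneous workforce: traits common in
# excluded groups (say, longer caregiving gaps) barely appear in training.

set.seed(3)
employees <- data.frame(
  gap_years   = rpois(500, 1),          # mostly 0-3 in this workforce
  performance = rnorm(500, mean = 3.5)
)
model <- lm(performance ~ gap_years, data = employees)

# An applicant profile far outside the training distribution:
predict(model, newdata = data.frame(gap_years = 8))
# The model extrapolates a score anyway -- with no employees resembling
# this applicant, the number is a guess dressed up as a prediction.
```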