Results


Facial recognition helps mom and dad see kids' camp photos, raises privacy concerns for some

USATODAY

Photos from a summer camp are posted to the camp's website so parents can view them. Venture capital-backed Waldo Photos has been selling the service to identify specific children in the flood of photos provided daily to parents by many sleep-away camps. Camps working with the Austin, Texas-based company give parents a private code to sign up. When the camp uploads photos taken during activities to its website, Waldo's facial recognition software scans them for matches against parent-provided headshots. Once it finds a match, the Waldo system (as in "Where's Waldo?") automatically texts the photos to the child's parents.
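
Waldo has not published its implementation, but the matching step the article describes can be sketched with the open-source face_recognition library; the file paths and the text_parent() notifier below are hypothetical placeholders.

    # Minimal sketch of the headshot-to-photo matching step, assuming the
    # open-source face_recognition library; not Waldo's actual code. File
    # paths and the text_parent() notifier are hypothetical placeholders.
    import face_recognition

    def find_child_in_photos(headshot_path, camp_photo_paths, tolerance=0.6):
        """Return the camp photos that appear to contain the child."""
        headshot = face_recognition.load_image_file(headshot_path)
        encodings = face_recognition.face_encodings(headshot)
        if not encodings:
            raise ValueError("no face found in the headshot")
        known_encoding = encodings[0]

        matches = []
        for path in camp_photo_paths:
            photo = face_recognition.load_image_file(path)
            for encoding in face_recognition.face_encodings(photo):
                # compare_faces thresholds the distance between encodings;
                # 0.6 is the library's default tolerance.
                if face_recognition.compare_faces([known_encoding], encoding,
                                                  tolerance=tolerance)[0]:
                    matches.append(path)
                    break
        return matches

    # Hypothetical usage:
    # text_parent(parent_phone, find_child_in_photos("headshot.jpg", uploads))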


Safeguarding human rights in the era of artificial intelligence

#artificialintelligence

The use of artificial intelligence in our everyday lives is on the increase, and it now covers many fields of activity. Something as seemingly banal as avoiding a traffic jam through the use of a smart navigation system, or receiving targeted offers from a trusted retailer is the result of big data analysis that AI systems may use. While these particular examples have obvious benefits, the ethical and legal implications of the data science behind them often go unnoticed by the public at large. Artificial intelligence, and in particular its subfields of machine learning and deep learning, may only be neutral in appearance, if at all. Underneath the surface, it can become extremely personal.


Safeguarding human rights in the era of artificial intelligence

#artificialintelligence

"The use of artificial intelligence in our everyday lives is on the increase, and it now covers many fields of activity. Something as seemingly banal as avoiding a traffic jam through the use of a smart navigation system, or receiving targeted offers from a trusted retailer is the result of big data analysis that AI systems may use", says Dunja Mijatović, Council of Europe Commissioner for Human Rights, in her Human Rights Comment published today. "While these particular examples have obvious benefits, the ethical and legal implications of the data science behind them often go unnoticed by the public at large."


Can A Machine Be Racist? – Towards Data Science

#artificialintelligence

Artificial Intelligence has become a household word. It has also become a manipulator of all households. The unchecked explosion in AI across all businesses and business models has been a phenomenal driver of growth, but it raises questions that need to be answered.


AI Research Is in Desperate Need of an Ethical Watchdog

WIRED

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."
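
The paper's actual model is not described in the snippet above; a common baseline for guessing nationality from a name is a character n-gram classifier, sketched here with scikit-learn. The four training names and labels are invented for illustration.

    # Toy sketch of a name-to-nationality guesser using character n-grams.
    # This is not the Stony Brook model; the training names and labels
    # below are invented examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    names = ["Hiroshi Tanaka", "Marie Dubois", "Sean O'Brien", "Anna Kowalska"]
    labels = ["Japanese", "French", "Irish", "Polish"]

    model = make_pipeline(
        # Character 2- to 4-grams capture name morphology ("-ska", "O'B", "-shi").
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(names, labels)

    # "Tanaka" shares many character n-grams with the Japanese training name.
    print(model.predict(["Yuki Tanaka"]))

The roughly 80 percent accuracy the article reports comes from training on vastly larger name lists; the ethics-review question raised in the piece is precisely about where such lists come from and who reviews their use.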


Stanford professor says face-reading AI will detect IQ

Daily Mail

A Stanford University expert has claimed that computer programmes will soon be able to guess your political leaning and IQ based on photos of your face. Dr Michal Kosinski went viral last week after publishing research suggesting artificial intelligence (AI) can tell whether someone is straight or gay based on photos. Now the psychologist and data scientist has claimed that sexual orientation is one of many character traits the AI will be able to detect in the coming years. AI-powered computer programmes can learn how to determine certain traits by being shown a number of faces in a process known as 'training'.
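
What 'training' means here can be pictured in a few lines: fit a classifier on labeled face representations and score it on held-out examples. In the sketch below, random vectors stand in for real face embeddings and the trait labels are arbitrary, so held-out accuracy stays near chance; a claim like Kosinski's rests on scores well above that, and on whether the signal is the trait itself or a correlated artifact such as grooming or photo style.

    # Sketch of supervised 'training' on face data. Random vectors stand in
    # for real face embeddings; the binary trait labels are arbitrary.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(200, 128))  # stand-in for 128-d face embeddings
    labels = rng.integers(0, 2, size=200)     # stand-in trait labels

    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, labels, test_size=0.5, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # With no real signal in the data, held-out accuracy hovers near 0.5.
    print(clf.score(X_test, y_test))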


Beyond science fiction: Artificial Intelligence and human rights

#artificialintelligence

"You are worse than a fool; you have no care for your species. For thousands of years men dreamed of pacts with demons. Only now are such things possible." When William Gibson wrote those words in his groundbreaking 1984, novel Neuromancer, artificial intelligence remained almost entirely within the realm of science fiction. Today, however, the convergence of complex algorithms, big data, and exponential increases in computational power has resulted in a world where AI raises significant ethical and human rights dilemmas, involving rights ranging from the right to privacy to due process.


Racist algorithms: how Big Data makes bias seem objective

#artificialintelligence

The Ford Foundation's Michael Brennan discusses the many studies showing how algorithms can magnify bias -- like the prevalence of police background check ads shown against searches for black names. What's worse is the way that machine learning magnifies these problems. If an employer only hires young applicants, a machine learning algorithm will learn to screen out all older applicants without anyone having to tell it to do so. Worst of all is that the use of algorithms to accomplish this discrimination provides a veneer of objective respectability to racism, sexism and other forms of discrimination. I recently attended a meeting about some preliminary research on "predictive policing," which uses these machine learning algorithms to allocate police resources to likely crime hotspots.
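
The hiring example can be made concrete in a few lines of scikit-learn: train a model on synthetic 'historical' decisions in which no one over 40 was hired, and it rediscovers the age cutoff without ever being told about it. The data and the cutoff below are invented for illustration.

    # Sketch of the screening effect described above: a model trained on
    # biased historical hiring decisions reproduces the bias. Synthetic data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    age = rng.integers(22, 65, size=500)
    skill = rng.uniform(0, 1, size=500)

    # Historical decisions: skill mattered, but nobody over 40 was hired.
    hired = (skill > 0.5) & (age < 40)

    X = np.column_stack([age, skill])
    model = DecisionTreeClassifier(max_depth=3).fit(X, hired)

    # The tree rediscovers the age cutoff on its own: a highly skilled
    # 55-year-old is screened out even though no one wrote an age rule.
    print(model.predict([[55, 0.95], [30, 0.95]]))  # -> [False  True]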


Emerging Ethical Concerns In the Age of Artificial Intelligence

#artificialintelligence

My husband and I have a running joke where we have our Amazon Echo "compete" with our iPhones to see who does a better (i.e., more human-like) job of interacting with us. While there's no clear winner, Siri seems to have the edge for casual conversation, but Alexa can sing. I've noticed something else, too. We don't usually thank Siri or Alexa the way we would a clerk at a supermarket or an employee at an information kiosk, even though they're providing us with identical services. They don't care if we thank them, because they don't have feelings.