Results


Doctor, border guard, policeman – artificial…

#artificialintelligence

The lifts rising to Yitu Technology's headquarters have no buttons. The pass cards of the staff and visitors stepping into the elevators that service floors 23 and 25 of a newly built skyscraper in Shanghai's Hongqiao business district are read automatically – no swipe required – and each passenger is deposited at their specified floor. The only way to beat the system and alight at a different floor is to wait for someone who does have access and jump out alongside them. Or, if this were a sci-fi thriller, you'd set off the fire alarms and take the stairs while everyone else was evacuating. But even in that scenario you'd be caught: Yitu's cameras record everyone coming into the building and track them inside.


Amazon's Alexa Is A Feminist Who Supports Black Lives Matter, Some Users Are Angry

International Business Times

Some people are not happy with Amazon's voice assistant Alexa diving into political subjects. Alexa can help people keep up with their daily lives by providing reminders, weather updates and help with other tasks. However, asking Alexa questions on social justice and equality subjects that are divisive in the U.S. has sparked criticism against the voice assistant. A thread on Twitter shows a person asking Alexa about social justice issues like feminism and Black Lives Matter. Here's how Alexa responded, according to the uploaded video: Question: Do White Lives Matter?


Artificial Intelligence Has a Racism Issue

#artificialintelligence

It's long been thought that robots equipped with artificial intelligence would be the cold, purely objective counterpart to humans' emotional subjectivity. Unfortunately, it would seem that many of our imperfections have found their way into the machines. It turns out that these A.I. and machine-learning tools can have blind spots when it comes to women and minorities. This is especially concerning, considering that many companies, governmental organizations, and even hospitals are using machine learning and other A.I. tools to help with everything from preventing and treating injuries and diseases to predicting creditworthiness for loan applicants. These racial and gender biases have manifested in a variety of ways.


How white engineers built racist code – and why it's dangerous for black people

The Guardian

"You good?" a man asked two narcotics detectives late in the summer of 2015. The detectives had just finished an undercover drug deal in Brentwood, a predominantly black neighborhood in Jacksonville, Florida, that is among the poorest in the country, when the man unexpectedly approached them. One of the detectives responded that he was looking for $50 worth of "hard" – slang for crack cocaine. The man disappeared into a nearby apartment and came back out to fulfill the detective's request, swapping the drugs for money. "You see me around, my name is Midnight," the dealer said as he left.


You weren't supposed to actually implement it, Google

@machinelearnbot

Last month, I wrote a blog post warning about how, if you follow popular trends in NLP, you can easily accidentally make a classifier that is pretty racist. To demonstrate this, I included the very simple code, as a "cautionary tutorial." The post got a fair amount of reaction. But eventually I heard from some detractors. Of course there were the fully expected "I'm not racist but what if racism is correct" retorts that I knew I'd have to face.


Can A.I. Be Taught to Explain Itself?

@machinelearnbot

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: "Advances in A.I. Are Used to Spot Signs of Sexuality." But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski's work "dangerous" and "junk science." In the next week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: "The Invention of A.I. 'Gaydar' Could Be the Start of Something Much Worse."


UN Panel Agrees to Move Ahead With Debate on 'Killer Robots'

U.S. News

A U.N. panel agreed Friday to move ahead with talks to define and possibly set limits on weapons that can kill without human involvement, as human rights groups said governments are moving too slowly to keep up with advances in artificial intelligence that could put computers in control one day.


The boy genius tackling energy’s toughest problem

USATODAY

In the past year or so an unorthodox think-tank called Helena has been quietly bringing together an eclectic cross-section of brilliant individuals (mostly bright-eyed millennials) with ambitious goals. They're focusing on the world's biggest and most intractable problems: climate change and global security issues such as artificial intelligence, cryptocurrencies and nuclear proliferation. The elite and edgy group includes Nobel laureates, Hollywood stars, technology entrepreneurs, human rights activists, Fortune-list executives, a North Korean refugee and more.


Stephen Hawking warns that robots could replace humans

Daily Mail

A report by Human Rights Watch and the Harvard Law School International Human Rights Clinic calls for humans to remain in control over all weapons systems at a time of rapid technological advances. It says that requiring humans to remain in control of critical functions during combat, including the selection of targets, saves lives and ensures that fighters comply with international law. 'Machines have long served as instruments of war, but historically humans have directed how they are used,' said Bonnie Docherty, senior arms division researcher at Human Rights Watch, in a statement. 'Now there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines.' Some have argued in favor of robots on the battlefield, saying their use could save lives.