Results


The LAPD will use drones—and people are pissed

Mashable

Los Angeles' Blade Runner-esque future of a world watched by robots is here. On Tuesday, a civilian oversight panel gave the Los Angeles Police Department (LAPD) the OK to begin a year-long drone trial, primarily for reconnaissance in "tactical missions" conducted by SWAT. The decision came after a contentious meeting and protest by privacy advocates who oppose the use of drones by law enforcement. The LAPD is the third largest police force in the nation, behind New York and Chicago, and the trial makes it the largest in the country to use drones: the Chicago PD and New York PD confirmed in official statements to Mashable that neither force deploys drones.


Should the LAPD test drones? Police Commission is set for final vote on controversial proposal

Los Angeles Times

In the two months since the Los Angeles Police Department revealed that it wants to try flying drones, the unmanned aircraft have been the source of an often heated back-and-forth. Advocates say the drones could help protect officers and others by using nonhuman eyes to collect crucial information during high-risk situations. Skeptics worry that use of the devices will steadily expand and include inappropriate -- or illegal -- surveillance. The LAPD's harshest critics want the drone program scrapped before it even takes off. On Tuesday, the civilian board that oversees the LAPD will vote on whether to allow the department to test drones during a one-year pilot program.


Civilian oversight panel hears guidelines for LAPD use of drones

Los Angeles Times

The Los Angeles Police Department released formal guidelines on its proposal to fly drones during a one-year pilot program, spurring questions and concerns among members of a civilian oversight panel and the public at a contentious meeting Tuesday. "Our challenge is to create a policy that strikes a balance, that promotes public safety, the safety of our officers and does not infringe on individual privacy rights," Assistant Chief Beatrice Girmala told the Los Angeles Police Commission at the packed meeting. Before outlining the guidelines, Girmala reviewed initial feedback from the community on the proposed drone initiative. Under the guidelines, an assistant chief, the police chief and two police commissioners would also be notified of each drone deployment.


Robots are really good at learning things like racism and bigotry

#artificialintelligence

The real danger is in something called confirmation bias: when you come up with an answer first and then look only for information that supports that conclusion. Take the following example: if fewer women than men seek truck-driving jobs on a job-seeking website, a pattern emerges. That pattern can be interpreted in many ways, but in truth it means only one specific factual thing: fewer women than men are looking for truck-driving jobs on that website. If you tell an AI to find evidence that triangles are good at being circles, it probably will; that doesn't make it science.


AI robots are sexist and racist, experts warn

#artificialintelligence

He said the deep learning algorithms which drive AI software are "not transparent", making it difficult to redress the problem. Currently approximately 9 per cent of the engineering workforce in the UK is female, with women making up only 20 per cent of those taking A Level physics. "We have a problem," Professor Sharkey told Today. Professor Sharkey said researchers at Boston University had demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News.
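The Boston University result is easy to probe first-hand. The sketch below is illustrative only: it assumes the gensim library and the publicly released Google News word2vec vectors (the local filename is a placeholder), and it runs the kind of analogy query — "man is to computer programmer as woman is to X" — that the team's paper made famous.

```python
from gensim.models import KeyedVectors

# Assumed: the pretrained Google News word2vec vectors have been
# downloaded locally (~1.6 GB); this filename is a placeholder.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True)

# Analogy probe: "man is to computer_programmer as woman is to ...?"
# On these vectors, stereotyped completions such as "homemaker"
# rank highly, which is the bias the researchers reported.
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=3))
```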


Artificial intelligence may exceed human capacity

#artificialintelligence

At some point after the singularity occurs, one of these self-aware machines will surely raise its claw (or virtual hand) and say, "Hey, what about equal pay for equal work?" Only a century ago women were demanding the right to vote. Less than a century ago most white Americans didn't think African and Chinese Americans should be paid wages equal to whites. Many women are still fighting for equal pay for equal work, and Silicon Valley is a notoriously hostile workplace for women.


Healthcare Robots and the Right to Privacy

VideoLectures.NET

The latter is, due to its importance, protected not only by national health legislation but also by Article 8 of the European Convention on Human Rights, the right to privacy. As already mentioned, medicine is a profession that requires a certain level of secrecy around confidential information, and according to the Court's previous decisions that secrecy is even more important in cases involving psychiatric records. Robots' involvement in medical treatment on one hand, and the easy access to the information they gain during treatment on the other, call into question the effectiveness of the provisions of Article 8 of the European Convention on Human Rights. Current legislation in countries around the world pays little attention to this particular area, even though modern robotic approaches have already been introduced and are very well accepted.


How not to create a racist, sexist robot

#artificialintelligence

Robots are picking up sexist and racist biases because the information used to program them comes predominantly from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath. "Robots based on artificial intelligence (AI) and machine learning learn from historic human data and this data usually contain biases," Caliskan tells The Current's Anna Maria Tremonti. With the federal government recently announcing a $125 million investment in Canada's AI industry, Duhaime says now is the time to make sure funding goes toward pushing women forward in this field. "There is an understanding in the research community that we have to be careful and we have to have a plan with respect to ethical correctness of AI systems," she tells Tremonti.


Are robots racist and sexist? Study reveals AI picks up bias from human language

#artificialintelligence

The study, conducted by researchers at Princeton University and the University of Bath and published in the journal Science, shows how artificial intelligence that learns from human language is likely to pick up biases the same way humans do. The biases range from linking women with stereotypical arts and humanities jobs, to associating European American names with words that describe happiness, to matching pleasant words with white faces. For instance, it associated European American names with pleasant words such as "gift" or "happy", while African American names were more often associated with unpleasant words. In another example, when comparing two identical CVs, a candidate was 50% more likely to be selected if he or she had a European American name rather than an African American one.
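The name-and-word associations described above were measured with what the authors call the Word Embedding Association Test (WEAT): an effect size comparing how strongly two sets of target words (e.g., European American vs. African American names) associate with two sets of attribute words (pleasant vs. unpleasant). A minimal, self-contained sketch follows; the random toy vectors stand in for real embeddings, so the printed number is illustrative, not a result from the study.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    # s(w, A, B): mean similarity of word w to attribute set A
    # minus its mean similarity to attribute set B.
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    # How much more strongly target set X leans toward attributes A
    # (over B) than target set Y does, in pooled-std-dev units.
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return ((np.mean(x_assoc) - np.mean(y_assoc))
            / np.std(x_assoc + y_assoc, ddof=1))

# Toy vectors standing in for embeddings of two sets of names
# (targets) and pleasant/unpleasant words (attributes).
rng = np.random.default_rng(0)
X = [rng.normal([1, 0, 0], 0.1) for _ in range(4)]  # e.g. name set 1
Y = [rng.normal([0, 1, 0], 0.1) for _ in range(4)]  # e.g. name set 2
A = [rng.normal([1, 0, 0], 0.1) for _ in range(4)]  # "pleasant" words
B = [rng.normal([0, 1, 0], 0.1) for _ in range(4)]  # "unpleasant" words
print(weat_effect_size(X, Y, A, B))  # large positive: X leans pleasant
```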


Robots are racist and sexist. Just like the people who created them | Laurie Penny

#artificialintelligence

"If those patterns are used to make decisions that affect people's lives, you end up with unacceptable discrimination." Robots have been racist and sexist for as long as the people who created them have been racist and sexist, because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics. This doesn't mean robots are racist: it means people are racist, and we're raising robots to reflect our own prejudices. The encoded bigotries of machine learning systems give us an opportunity to see how this works in practice.