Results


The cameras that know if you're happy - or a threat

BBC News

Facial recognition tech is becoming more sophisticated, with some firms claiming it can even read our emotions and detect suspicious behaviour. But what implications does this have for privacy and civil liberties? Facial recognition tech has been around for decades, but it has been progressing in leaps and bounds in recent years due to advances in computer vision and artificial intelligence (AI), tech experts say. It is now being used to identify people at borders, unlock smartphones, spot criminals, and authenticate banking transactions. But some tech firms are claiming it can also assess our emotional state.


Microsoft calls for facial recognition technology rules given 'potential for abuse'

The Guardian

Microsoft has called for facial recognition technology to be regulated by government, calling for laws governing its acceptable uses. In a blog post on the company's website on Friday, Microsoft president Brad Smith called for a congressional bipartisan "expert commission" to look into regulating the technology in the US. "It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse," he wrote. "Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime." Microsoft is the first big tech company to raise serious alarms about an increasingly sought-after technology for recognising a person's face from a photo or through a camera.


Facial recognition technology: The need for public regulation and corporate responsibility - Microsoft on the Issues

#artificialintelligence

All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people's faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike. Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products.


Data Surveillance, Monitoring, and Spying: Personal Privacy in a Data-Gathering World - DATAVERSITY

@machinelearnbot

Click to learn more about author Cathy Nolan. Americans have long been divided in their views about the trade-off between security needs and personal privacy, including data privacy. Much of the attention has been on how government collects data or uses surveillance, though there are also significant concerns about how businesses use data. When a terrorist attack happens, people tend to favor more surveillance by the government, but at the same time some people are becoming increasingly concerned about their privacy and protecting their civil liberties. New information about the extent to which digital technologies have captured and sold a wide array of data about individuals' habits, preferences, prejudices, and personalities has alerted people to the amount of data they have provided, either willingly or unwittingly, to data brokers.


L.A. Fire Department used drones for the first time during Skirball fire

Los Angeles Times

The Los Angeles Fire Department dispatched drones for the first time while battling a wildfire this month as firefighters took on the Skirball fire in Bel-Air.


The LAPD will use drones—and people are pissed

Mashable

Los Angeles' Blade Runner-esque future of a world watched by robots is here.


Civilian oversight panel hears guidelines for LAPD use of drones

Los Angeles Times

The Los Angeles Police Department released formal guidelines on its proposal to fly drones during a one-year pilot program, spurring questions and concerns among members of a civilian oversight panel and the public at a contentious meeting Tuesday. "Our challenge is to create a policy that strikes a balance, that promotes public safety, the safety of our officers and does not infringe on individual privacy rights," Assistant Chief Beatrice Girmala told the Los Angeles Police Commission at the packed meeting. Before outlining the guidelines, Girmala reviewed initial feedback from the community on the proposed drone initiative. Of 1,675 emails, only about 6% were positive and encouraged the LAPD to incorporate the new technology. The Police Commission must approve the pilot program before any of the unmanned aircraft are flown.


Google's comment ranking system will be a hit with the alt-right

Engadget

A recent, sprawling Wired feature outlined the results of its analysis of toxicity among online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia "is the least toxic city in the US." The underlying API used to determine "toxicity" scores phrases like "I am a gay black woman" as 87 percent toxic, and phrases like "I am a man" as the least toxic. The API, called Perspective, is made by Jigsaw, an incubator within Google's parent company, Alphabet.


The Racists of OkCupid Don't Usually Carry Tiki Torches

Slate

In the days before white supremacists descended on Charlottesville, Bumble had already been in the process of strengthening its anti-racism efforts, partly in response to an attack the Daily Stormer had waged on the company, encouraging its readers to harass the staff of Bumble in order to protest the company's public support of women's empowerment. Bumble bans any user who disrespects their customer service team, figuring that a guy who harasses women who work for Bumble would probably harass women who use Bumble. After the neo-Nazi attack, Bumble contacted the Anti-Defamation League for help identifying hate symbols and rooting out users who include them in their Bumble profiles. Now, the employees who respond to user reports have the ADL's glossary of hate symbols as a guide to telltale signs of hate-group membership, and any profile with language from the glossary will get flagged as potentially problematic. The platform has also added the Confederate flag to its list of prohibited images.


Drones become newest crime-fighting tool for police

FOX News

Just one week after the sheriff's department in Cecil County, Md., got its brand new drone up and running, it was asked to investigate a case of stolen construction equipment. So the Cecil County Sheriff sent his Typhoon H Pro to investigate. The sheriff's department in Somerset County, N.J., hopes its drones could help it find missing people. "Years ago, when we had people wander off, we would bring out the rescue department, the fire department, fire department volunteers, K-9 if we had it and we'd search and search and search and never find the person," said Somerset County Sheriff Frank Provensano.