Facial recognition tech is becoming more sophisticated, with some firms claiming it can even read our emotions and detect suspicious behaviour. But what implications does this have for privacy and civil liberties? Facial recognition tech has been around for decades, but it has been progressing in leaps and bounds in recent years due to advances in computer vision and artificial intelligence (AI), tech experts say. It is now being used to identify people at borders, unlock smartphones, spot criminals, and authenticate banking transactions. But some tech firms are claiming it can also assess our emotional state.
Microsoft has called for facial recognition technology to be regulated by government, with laws governing its acceptable uses. In a blog post on the company's website on Friday, Microsoft president Brad Smith called for a congressional bipartisan "expert commission" to look into regulating the technology in the US. "It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse," he wrote. "Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime." Microsoft is the first big tech company to raise serious alarms about an increasingly sought-after technology for recognising a person's face from a photo or through a camera.
All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people's faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike. Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten the responsibility of the tech companies that create these products.
Americans have long been divided in their views about the trade-off between security needs and personal privacy, including data privacy. Much of the attention has been on how government collects data or uses surveillance, though there are also significant concerns about how businesses use data. When a terrorist attack happens, people tend to favor more surveillance by the government, but at the same time some people are becoming increasingly concerned about their privacy and protecting their civil liberties. New information about the extent to which digital technologies have captured and sold a wide array of data about individuals' habits, preferences, prejudices, and personalities has alerted people to the amount of data they have provided, either willingly or unwittingly, to data brokers.
A recent, sprawling Wired feature outlined the results of its analysis on toxicity in online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia "is the least toxic city in the US." The underlying API used to determine "toxicity" scores phrases like "I am a gay black woman" as 87 percent toxic, while phrases like "I am a man" rank among the least toxic. The API, called Perspective, is made by Jigsaw, an incubator within Google's parent company Alphabet.
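To make concrete how such scoring works in practice, here is a minimal sketch of a client for Perspective's `comments:analyze` endpoint. The request and response shapes follow Jigsaw's published API; the sample response below is hand-built for illustration (its 0.87 score echoes the figure in the article, not a live API result), and no network call is made.

```python
import json

# Perspective's published analyze endpoint (an API key is required in practice).
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text: str) -> dict:
    """Build the JSON body Perspective expects for a TOXICITY query."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response: dict) -> float:
    """Pull the summary toxicity probability (0.0-1.0) from a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative response shaped like Perspective's; the score is not a real result.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.87, "type": "PROBABILITY"}}
    }
}

payload = build_analyze_request("I am a gay black woman")
print(json.dumps(payload))
print(extract_toxicity(sample_response))  # 0.87
```

In a real client, `payload` would be POSTed to `PERSPECTIVE_URL` with an API key; the point here is only the shape of the request and where the score lives in the response.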
In the days before white supremacists descended on Charlottesville, Bumble had already been in the process of strengthening its anti-racism efforts, partly in response to an attack the Daily Stormer had waged on the company, encouraging its readers to harass the staff of Bumble in order to protest the company's public support of women's empowerment. Bumble bans any user who disrespects their customer service team, figuring that a guy who harasses women who work for Bumble would probably harass women who use Bumble. After the neo-Nazi attack, Bumble contacted the Anti-Defamation League for help identifying hate symbols and rooting out users who include them in their Bumble profiles. Now, the employees who respond to user reports have the ADL's glossary of hate symbols as a guide to telltale signs of hate-group membership, and any profile with language from the glossary will get flagged as potentially problematic. The platform has also added the Confederate flag to its list of prohibited images.
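The flagging step described above boils down to matching profile text against a glossary of known terms. The sketch below shows one simple way such a check could work; the glossary entries and function name are hypothetical stand-ins, not the ADL's actual list or Bumble's actual implementation.

```python
import re

# Hypothetical excerpt of a hate-symbol glossary; the real ADL glossary is
# far larger and includes imagery as well as text.
GLOSSARY_TERMS = {"1488", "blood and soil"}

def flag_profile(text: str, glossary: set = GLOSSARY_TERMS) -> bool:
    """Return True if the profile text contains any glossary term.

    Matches are case-insensitive and anchored on word boundaries so that
    glossary terms embedded in longer, unrelated words are not flagged.
    """
    lowered = text.lower()
    return any(
        re.search(r"\b" + re.escape(term) + r"\b", lowered)
        for term in glossary
    )

print(flag_profile("here for blood and soil memes"))  # True
print(flag_profile("Love hiking and dogs"))           # False
```

A production system would route flagged profiles to human reviewers rather than ban automatically, since plain keyword matching cannot distinguish hateful use from quotation or coincidence.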
The Anti-Defamation League hasn't been shy about its condemnation of Breitbart News, an outlet it calls the "premiere website" for the "loose-knit group of white nationalists and unabashed anti-Semites and racists" it claims constitutes the so-called "alt-right" movement. So it came as a bit of a shock recently when the Jewish rights group discovered that it happened to number among the site's advertisers. The ADL wasn't the only one; as Breitbart chairman Steve Bannon's new White House gig brought renewed media attention to the agitative far-right site's less savory tendencies, Kellogg, Warby Parker, U.S. Bank and several other major brands also found that they had been unwittingly supporting it with their ad dollars. "We regularly work with our media buying partners to ensure our ads do not appear on sites that aren't aligned with our values as a company," a Kellogg spokesperson said at the time. "As you can imagine, there is a very large volume of websites, so occasionally something is inadvertently missed."
DALLAS/HOUSTON/LOS ANGELES/WASHINGTON – When Dallas police improvised a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology's use as a crime-fighting weapon. In what appears to be an unprecedented tactic, police rigged a bomb-disposal robot to kill an armed suspect in the fatal shootings of five officers in Dallas. While there doesn't appear to be any hard data on the subject, security experts and law enforcement officials said they couldn't recall another time when police deployed a robot with lethal intent. The strategy opens a new chapter in the escalating use of remote-controlled and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it's appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.
Police used a "bomb robot" early Friday to kill a gunman who fatally shot five police officers and wounded seven others in downtown Dallas, saying he "wanted to kill white people," officials said. The end to the standoff came several hours after a suspect began firing during a protest over recent police shootings in Minnesota and Louisiana and then holed up in a garage, officials said. "We cornered one suspect and we tried to negotiate for several hours," Dallas Police Chief David Brown said during a Friday morning news conference, but "negotiations broke down" and turned into "an exchange of gunfire with the suspect." The suspect was identified as Micah X. Johnson, 25, a former Army reservist and resident of the Dallas area, two U.S. law enforcement officials said. Johnson had no known criminal history or ties to terror groups, the officials said, and had relatives in Mesquite, Texas, which is just east of Dallas. The officials said federal agents were assisting Dallas authorities in the investigation.
When Microsoft unleashed Tay, an artificially intelligent chatbot with the personality of a flippant 19-year-old, the company hoped that people would interact with her on social platforms like Twitter, Kik, and GroupMe. The idea was that by chatting with her you'd help her learn, while having some fun and aiding her creators in their AI research. The good news: people did talk to Tay. She quickly racked up over 50,000 Twitter followers who could send her direct messages or tweet at her, and she's sent out over 96,000 tweets so far. The bad news: in the short time since she was released on Wednesday, some of Tay's new friends figured out how to get her to say some really awful, racist things.