BBC News has launched a chat bot to help users learn about climate change in weekly conversations on Facebook Messenger. Subscribers will get an alert every Wednesday inviting them to explore topics from rising temperatures to new ways of tackling global warming. They can also ask questions which the bot will pass on for our human journalists to answer. You can sign up at the bottom of this page. We know that audiences are hungry for a better understanding of where the world stands on targets to control rising temperatures.
An Indian national in the US pleaded guilty this week to destroying 59 computers at the College of Saint Rose in New York, using a weaponized USB thumb drive named "USB Killer" that he purchased online. The incident took place on February 14, according to court documents obtained by ZDNet, and the suspect, Vishwanath Akuthota, 27, filmed himself while destroying some of the computers. "I'm going to kill this guy," "it's dead," and "it's gone. Boom," Akuthota said in recordings obtained by the prosecution. Besides the 59 computers, he also destroyed seven computer monitors and computer-enhanced podiums that had open USB slots.
The New York Times has confirmed what some have long suspected: the Chinese government is using a "vast, secret system" of artificial intelligence and facial recognition technology to identify and track Uighurs, a Muslim minority, 1 million of whom are being held in detention camps in the northwestern region of Xinjiang. This technology allows the government to extend its control of the Uighur population across the country. It may seem difficult to imagine a similar scenario in the U.S., but related technologies, built by Amazon, are already being used by U.S. law enforcement agencies to identify suspects in photos and video. And echoes of China's system can be heard in plans to deploy these technologies at the U.S.-Mexico border. A.I. systems also decide what information is presented to you on social media, which ads you see, and what prices you're offered for goods and services.
Microsoft has said it turned down a request from law enforcement in California to use its facial recognition technology in police body cameras and cars, reports Reuters. Speaking at an event at Stanford University, Microsoft president Brad Smith said the company was concerned that the technology would disproportionately affect women and minorities. Past research has shown that because facial recognition technology is trained primarily on white and male faces, it has higher error rates for other individuals. "Anytime they pulled anyone over, they wanted to run a face scan," said Smith of the unnamed law enforcement agency. "We said this technology is not your answer."
The ACLU and other groups have urged Amazon to stop selling facial recognition technology to law enforcement agencies. Lending tools charge higher interest rates to Hispanics and African Americans. Job-hunting tools favor men. Negative emotions are more likely to be attributed to black men's faces than to white men's. Computer vision systems for self-driving cars have a harder time spotting pedestrians with darker skin tones.
Shares of Qualcomm soared 23% Tuesday – and remained up Wednesday – in the wake of a late-afternoon filing with the Securities and Exchange Commission, wherein the company announced that it reached a "multi-year" "global patent license agreement" and "chipset supply agreement" with Apple that settles the companies' yearslong intellectual property litigation and appears likely to work out to the benefit of both parties. In said filing with the SEC, Qualcomm states that as of April 1, 2019, it has directly licensed its relevant patents to Apple for at least the next six years, with the option to extend the agreement for an additional two years. Moreover, Qualcomm will supply chipsets to Apple for use in the latter's devices for several years at least. In exchange, Apple will make a one-time payment of an unspecified amount to Qualcomm, and pay continuing royalties to boot – also in an amount unspecified. Finally, "all worldwide litigation" between the two combatants "will be dismissed and withdrawn," including lawsuits against Apple's contract manufacturers.
The large number of mobile devices, the volume of apps on each phone, and the basic mobility of the devices all mean there is a lot of information being created in the mobile world. Managing that large volume of information in a reasonable timeframe is impossible using older technologies. Machine learning (ML) is critical to mobile advertising in a number of ways. Advertising is complex even in the older channels of print and broadcast. Cable increased the need for better data to more finely segment audiences.
To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers' websites, for the most part) and ran one day of footage through Amazon's commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score.
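The article does not publish the code behind its Bryant Park experiment, but the matching step it describes (comparing known head shots against faces found in footage, and keeping results above a similarity score) maps closely onto Amazon Rekognition's `CompareFaces` call. The sketch below is illustrative only: the bucket name, image keys, and the 80 percent cutoff are assumptions, not details from the original experiment.

```python
"""Hypothetical sketch of a head-shot-vs-frame matching step.

Bucket name, file names, and the 80% cutoff are illustrative
assumptions, not details from the New York Times experiment.
"""


def strong_matches(response: dict, threshold: float = 80.0) -> list:
    """Keep only the face matches at or above the similarity threshold."""
    return [
        match for match in response.get("FaceMatches", [])
        if match["Similarity"] >= threshold
    ]


def compare_headshot_to_frame(headshot_key: str, frame_key: str,
                              bucket: str = "example-footage-bucket") -> list:
    """Ask Rekognition whether a known head shot appears in a video frame."""
    import boto3  # third-party dependency, imported here so the pure
                  # helper above works without the AWS SDK installed

    client = boto3.client("rekognition")
    response = client.compare_faces(
        SourceImage={"S3Object": {"Bucket": bucket, "Name": headshot_key}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": frame_key}},
        SimilarityThreshold=70,  # API-side floor; filtered further below
    )
    return strong_matches(response)
```

Under this sketch, the 89 percent similarity score reported for the professor's head shot would clear the filter, while weaker candidate matches would be discarded.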
The history of AI is often told as the story of machines getting smarter over time. What's lost is the human element in the narrative: how intelligent machines are designed, trained, and powered by human minds and bodies. In this six-part series, we explore that human history of AI, how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of super-intelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are. In the 1970s, Dr. Geoffrey Franglen of St. George's Hospital Medical School in London began writing an algorithm to screen student applications for admission.
In the United States and Europe, the debate in the artificial intelligence community has focused on the unconscious biases of those designing the technology. Recent tests showed facial recognition systems made by companies like I.B.M. and Amazon were less accurate at identifying the features of darker-skinned people. China's efforts raise starker issues. While facial recognition technology uses aspects like skin tone and face shapes to sort images in photos or videos, it must be told by humans to categorize people based on social definitions of race or ethnicity. Chinese police, with the help of the start-ups, have done that.