After clearing the institute's security, I was told to wait in a lobby monitored by cameras. On its walls were posters of China's most consequential postwar leaders. Mao Zedong looked serene, as though satisfied with having freed China from the Western yoke. Next to him was a fuzzy black-and-white shot of Deng Xiaoping visiting the institute in his later years, after his economic reforms had set China on a course to reclaim its traditional global role as a great power. The lobby's most prominent poster depicted Xi Jinping in a crisp black suit.
In June, in the midst of a mushrooming protest movement against increasingly visible police killings of Black people and a simultaneously exploding coronavirus pandemic that is taking Black lives at a disproportionate rate, IBM made the surprising announcement that it would stop selling, researching, or developing facial-recognition services. As each story has emerged of a Black life violently ended by law enforcement, white nationalists, or other forms of interpersonal violence, a multiracial movement for Black lives, led by Black activists, has kept pace. What has also kept pace are the disturbing and highly advanced police technologies used to spy on these activists. My mother survived the surveillance of the FBI's counterintelligence program as a civil-rights activist in the 1960s. As a second-generation Black activist, I'm tired of being spied on by the police.
Surely a system smart enough to contribute to The New Yorker would have no trouble completing the sentence with the obvious word, fire. In another attempt, it suggested that dropping matches on logs in a fireplace would start an "irc channel full of people." Marcus posted the exchanges to his Twitter account with his own added commentary: "LMAO," internet slang for a derisive chortle. Neural networks might be impressive linguistic mimics, but they clearly lack basic common sense. Commonsense reasoning--the ability to make mundane inferences using basic knowledge about the world, like the fact that "matches" plus "logs" usually equals "fire"--has resisted AI researchers' efforts for decades.
At the start of the year, Andrew "Boz" Bosworth, who led Facebook's ad team during the 2016 election, wrote that Trump "ran the single best digital ad campaign I've ever seen from any advertiser." Trump's team agrees, of course. But that might not mean what you think it does. Despite conventional understanding, the campaign didn't succeed via microtargeting--the ability to send highly differentiated audiences just the right messages to change attitudes or inspire action. It did so via pure, blunt constancy, using Facebook in exactly the way the tech giant intended: pouring heaps of money and data into Facebook's automated advertising system.
Hundreds of human reviewers across the globe, from Romania to Venezuela, listen to audio clips recorded from Amazon Echo speakers, usually without owners' knowledge, Bloomberg reported last week. We knew Alexa was listening; now we know someone else is, too. This global review team fine-tunes the Amazon Echo's software by listening to clips of users asking Alexa questions or issuing commands, and then verifying whether Alexa responded appropriately. The team also annotates specific words the device struggles with when it's addressed in different accents. According to Amazon, users can opt out of having their recordings reviewed, but they appear to be enrolled automatically.
"Abstinence ... Animal rights ... Very conservative ... Marijuana OK ... Children should be given guidelines ... Religion guides my life ... Make charitable contributions ... Would initiate hugs if I wasn't so shy ... Enjoy a good argument ... Have to-do lists that seldom get done ... Sweet food, baked goods ... Artificial or missing limbs ... Over 300 pounds ... Drag ... Exploring my orientation ... Women should pay." By the fall of 1994, Gary Kremen was working toward launching the first dating site online, Match.com. There was another four-letter word for love, he knew, and it was data, the stuff he would use to match people. No one had done this, so he had to start from scratch, drawing on instinct and his own dating experience.
As you scroll through a website--say, TheAtlantic.com--your eyes dart from headline to headline, bypassing a few before choosing which to read. Your brow furrows at one article. Your face flushes in anger when you watch a charged video on an issue important to you. Usually, all these physical cues go nowhere other than the reflection of your computer screen.
Hospitals across the nation, including Cedars-Sinai Medical Center in Los Angeles and Boston Children's Hospital, are piloting voice-enabled smart speakers in patients' rooms. These institutions are hoping that smart speakers will make patients more comfortable, help staff stay organized, and, in some cases, keep people out of hospitals and emergency rooms altogether. Early results are promising, but health-care providers are still figuring out how to protect privacy once smart speakers know our intimate medical details. Searching online for medical help, even for common ailments, already reveals much more than people realize. That data has proved valuable both to health officials and to big businesses.
The images are huge and square and harrowing: a form, reminiscent of a face, engulfed in fiery red-and-yellow currents; a head emerging from a cape collared with glitchy feathers, from which a shape suggestive of a hand protrudes; a heap of gold and scarlet mottles, convincing as fabric, propping up a face with grievous, angular features. These are part of "Faceless Portraits Transcending Time," an exhibition of prints recently shown at the HG Contemporary gallery in Chelsea, the epicenter of New York's contemporary-art world. All of them were created by a computer. The catalog calls the show a "collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal," a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it's the first solo gallery exhibit devoted to an AI artist.
Fears about how robots might transform our lives have been a staple of science fiction for decades. In the 1940s, when widespread interaction between humans and artificial intelligence still seemed a distant prospect, Isaac Asimov posited his famous Three Laws of Robotics, which were intended to keep robots from hurting us. The first--"a robot may not injure a human being or, through inaction, allow a human being to come to harm"--followed from the understanding that robots would affect humans via direct interaction, for good and for ill. Think of classic sci-fi depictions: C-3PO and R2-D2 working with the Rebel Alliance to thwart the Empire in Star Wars, say, or HAL 9000 from 2001: A Space Odyssey and Ava from Ex Machina plotting to murder their ostensible masters. But these imaginings were not focused on AI's broader and potentially more significant social effects--the ways AI could affect how we humans interact with one another.