nixon
The Download: unraveling a death threat mystery, and AI voice recreation for musicians
Hackers made death threats against this security researcher. In April 2024, a mysterious someone using the online handles "Waifu" and "Judische" began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon. These anonymous personas targeted Nixon because she had become a formidable threat: As chief research officer at the cyber investigations firm Unit 221B, named after Sherlock Holmes's apartment, she had built a career tracking cybercriminals and helping get them arrested. Though she'd done this work for more than a decade, Nixon couldn't understand why the person behind the accounts was suddenly threatening her. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn't been on her radar for a while when the threats began, because she was tracking other targets. Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats--and take them down for crimes they admitted to committing.
- Asia > China (0.16)
- North America > United States > Massachusetts (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- (2 more...)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.48)
The U.S. tried permanent daylight saving time--and hated it
In 1974, America set its clocks forward for good in the name of energy savings: between January and September of that year, President Richard Nixon made daylight saving time permanent for a brief period. As fall approaches, so too does the end of daylight saving time (DST). On November 2nd, the hour between 1 a.m. and 2 a.m. will happen twice.
- Europe > Germany (0.05)
- Europe > United Kingdom (0.05)
- North America > United States > Alaska (0.05)
- (3 more...)
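The "hour that happens twice" is the classic fall-back ambiguity. A minimal Python sketch, assuming the November 2nd in question is 2025 and using the America/New_York zone purely for illustration, shows how the standard library's `zoneinfo` disambiguates the repeated hour via the `fold` attribute:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# When US clocks fall back (here, November 2, 2025), the local hour
# from 1:00 to 2:00 a.m. occurs twice. `fold` distinguishes the first
# pass (still daylight time) from the second (back on standard time).
tz = ZoneInfo("America/New_York")
first_pass = datetime(2025, 11, 2, 1, 30, tzinfo=tz)           # fold=0, EDT
second_pass = datetime(2025, 11, 2, 1, 30, fold=1, tzinfo=tz)  # fold=1, EST

assert first_pass.utcoffset() == timedelta(hours=-4)   # UTC-4, daylight time
assert second_pass.utcoffset() == timedelta(hours=-5)  # UTC-5, standard time
```

The same wall-clock reading thus maps to two different instants, which is why naive timestamp arithmetic around the transition can silently be off by an hour.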
CIA is set to roll out its own version of ChatGPT to try to comb the internet for useful clues and potential security threats
The CIA is set to launch its own ChatGPT-style AI tool to help sift through mountains of data for clues in ongoing investigations. Intended to mirror the famed OpenAI tech, the Central Intelligence Agency's latest initiative will use artificial intelligence to help analysts better access open-source intelligence, agency officials said. The CIA's Open Source Enterprise division developed the tech, which is also intended to be rolled out across the US government's 18 intelligence agencies in an effort to rival China's growing intelligence capabilities. 'We've gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,' said Randy Nixon, director of the CIA's Open Source Enterprise division. Nixon noted that analyzing the volume of data across the web is a significant challenge that the AI program would help handle, adding: 'We have to find the needles in the needle field.'
Even the CIA is developing an AI chatbot
The CIA and other US intelligence agencies will soon have an AI chatbot similar to ChatGPT. The program, revealed on Tuesday by Bloomberg, will train on publicly available data and provide sources alongside its answers so agents can confirm their validity. The aim is for US spies to more easily sift through ever-growing troves of information, although the exact nature of what constitutes "public data" could spark some thorny privacy issues. "We've gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going," Randy Nixon, the CIA's director of Open Source Enterprise, said in an interview with Bloomberg. "We have to find the needles in the needle field."
I Cloned My Voice and My Mother Couldn't Tell the Difference
This article is from Understanding AI, a newsletter that explores how A.I. works and how it's changing our world. A couple of weeks ago, I used A.I. software to clone my voice. The resulting audio sounded pretty convincing to me, but I wanted to see what others thought. So I created a test audio file based on the first 12 paragraphs of this article that I wrote. Seven randomly chosen paragraphs were my real voice, while the other five were generated by A.I. I asked members of my family to see if they could tell the difference.
- North America > United States (0.14)
- Europe > Ireland (0.05)
- North America > Canada > Saskatchewan > Regina (0.04)
- Information Technology > Security & Privacy (1.00)
- Media (0.95)
- Government (0.95)
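The blind test described above, a fixed set of paragraphs randomly split between real and cloned audio, can be sketched in a few lines. The paragraph counts (12 total, 5 cloned, 7 real) come from the article; everything else is illustrative:

```python
import random

# Illustrative sketch of the article's blind test: of the first 12
# paragraphs, 5 are randomly assigned to the A.I.-cloned voice and
# the remaining 7 stay in the author's real voice.
rng = random.Random(0)  # seeded so the split is reproducible
paragraphs = list(range(1, 13))
ai_paragraphs = set(rng.sample(paragraphs, 5))
labels = {p: ("ai" if p in ai_paragraphs else "real") for p in paragraphs}

assert sum(v == "ai" for v in labels.values()) == 5
assert sum(v == "real" for v in labels.values()) == 7
```

Randomizing which paragraphs are cloned, rather than alternating them, keeps listeners from guessing by position.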
How Do We Know What's Real in the Era of the Deepfake?
Through an overwhelming smorgasbord of archival footage, viral videos, documentary excerpts, and one immersive work, curators Barbara Miller and Joshua Glick posit that the antidote to misinformation is context. The show guides visitors through substantial evidence with which they can think more critically about what informs their beliefs. The entry room alone contains nine flickering artifacts in a chronology of "deepfakes," while a parallel hallway is lined with contemporary examples. A deepfake is a video in which real footage has been convincingly manipulated, sometimes with insidious ideological aims. John Lennon can advertise a podcast.
- Media (1.00)
- Leisure & Entertainment (0.98)
- Information Technology > Security & Privacy (0.97)
Why We Need AI That Explains Itself
One of the hottest new trends in software is artificial intelligence (AI) that explains how it accomplishes its results. Explainable AI is paying off as software companies try to make AI more understandable. LinkedIn recently increased its subscription revenue after using AI that predicted clients at risk of canceling and described how it arrived at its conclusions. "Explainable AI is about being able to trust the output as well as understand how the machine got there," Travis Nixon, the CEO of SynerAI and Chief Data Scientist, Financial Services at Microsoft, told Lifewire in an email interview. "'How?' is a question posed to many AI systems, especially when decisions are made or outputs are produced that aren't ideal," Nixon added.
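A churn model of the kind LinkedIn describes, one that both predicts cancellation risk and explains the prediction, can be illustrated with the simplest explainable model there is: a linear score whose per-feature contributions double as the explanation. All feature names and weights below are made up for illustration:

```python
# Hypothetical linear churn score: each feature's contribution
# (weight * value) is both part of the prediction and its explanation.
weights = {"months_inactive": 0.8, "support_tickets": 0.5, "tenure_years": -0.3}
client = {"months_inactive": 3, "support_tickets": 2, "tenure_years": 4}

contributions = {name: weights[name] * client[name] for name in weights}
risk_score = sum(contributions.values())

# Sorting contributions by magnitude gives a human-readable explanation:
# which factors pushed the risk up, and which pulled it down.
explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
```

Here the model's answer to "How?" is built in: long inactivity and open support tickets raise the score, while tenure lowers it. Real systems use richer techniques, but the principle, attributing the output to its inputs, is the same.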
MIT deepfake shows Nixon sadly saying the Moon astronauts died
Because the mission succeeded, Nixon never delivered the speech, but MIT engineers used deepfake technology to create a news broadcast in which a digitally reconstructed Nixon delivers the bad news, WBUR News reports. The deepfake, which will be presented at a film festival Friday, illustrates just how easy it is to make virtual puppets deliver convincing speeches, even if they're totally removed from history. Francesca Panetta, co-director of the larger film in which the deepfake appears, told WBUR that she had someone actually read the script while impersonating Nixon's intonation and then used software to make the recording sound even more like Nixon's voice. It's not the most advanced way to create deepfakes out there, but it still gets the job done. "I had one person say, 'Oh, so you got an impersonator to impersonate Nixon,'" she told WBUR.
The year deepfakes went mainstream
In 2018, Sam Cole, a reporter at Motherboard, discovered a new and disturbing corner of the internet. A Reddit user by the name of "deepfakes" was posting nonconsensual fake porn videos using an AI algorithm to swap celebrities' faces into real porn. Cole sounded the alarm on the phenomenon, right as the technology was about to explode. A year later, deepfake porn had spread far beyond Reddit, with easily accessible apps that could "strip" clothes off any woman photographed. Since then, deepfakes have had a bad rap, and rightly so.
- Information Technology > Security & Privacy (1.00)
- Media > News (0.74)
Inside the strange new world of being a deepfake actor
While deepfakes have now been around for a number of years, deepfake casting and acting are relatively new. Early deepfake technologies weren't very good, used primarily in dark corners of the internet to swap celebrities into porn videos without their consent. But as deepfakes have grown increasingly realistic, more and more artists and filmmakers have begun using them in broadcast-quality productions and TV ads. This means hiring real actors for one aspect of the performance or another. Some jobs require an actor to provide "base" footage; others need a voice.
- Asia > North Korea (0.32)
- Asia > Russia (0.16)
- North America > United States (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Asia Government (0.50)