Social Media


Communist party accessed TikTok data of Hong Kong protesters, former executive alleges

The Guardian

A former executive at TikTok's parent company, ByteDance, has alleged that the Chinese Communist party accessed user data from the social video app belonging to Hong Kong protesters and civil rights activists. Yintao Yu, a former head of engineering at ByteDance's US operation, claimed in a legal filing that a committee of Communist party members accessed TikTok data that included the users' network information, SIM card identifications and IP addresses in a bid to identify the individuals and their locations. The claims, made in a wrongful dismissal lawsuit brought by Yu in a California court and reported by the Wall Street Journal, also allege that the party accessed TikTok users' communications, that it monitored Hong Kong users who uploaded protest-related content, and that Beijing-based ByteDance maintained a "backdoor channel" for the party to access US user data. Yu alleges in the filing that members of a Communist party committee inside ByteDance had access to a "superuser" credential, also called a "God credential", which allowed them to view all data collected by ByteDance. The filing adds that while Yu was at ByteDance, between August 2017 and November 2018, TikTok stored all users' direct messages, search histories and the content they viewed.


Elon Musk's Neuralink wants people to control computers with their minds. How close are they?

USATODAY - Tech Top Stories

Neuralink is one step closer to selling brain implants that can transmit human thought. The neurotechnology company in May announced that it had received approval from the U.S. Food and Drug Administration (FDA) to launch its first in-human clinical trial. A statement on its Twitter account said the approval "represents an important first step that will one day allow our technology to help many people." Cofounded by Elon Musk in 2016, Neuralink plans to implant devices in human brains that would allow people with neurological disorders to control computers or robotic limbs with their minds. Musk has said he also wants to "achieve a sort of symbiosis with artificial intelligence" and possibly enable telepathic communication with the device.


Instagram may be getting its own AI chatbot soon. Here's what we know

ZDNet

Meta rapidly adopted generative AI technology and incorporated it into various features across its platforms, including ads. Now, the company is testing a new feature on Instagram. A tweet revealed that Instagram is testing an AI chat option for its platform. With the new feature, users would be able to chat with an AI chatbot to ask questions and get advice in their direct messages, choosing from 30 different personalities.


The best dating apps for recent college grads

Mashable

While online dating sites like Match.com and OKCupid emerged in the late 1990s and early 2000s, and apps like Zoosk and Grindr followed, Tinder truly changed the online dating game with the introduction of the swipe in 2012. Recent grads have never dated in a world without dating apps, and the majority of them weren't in the dating pool prior to the "swipe." Now, Tinder has reached a point where 350 million swipes happen a day on the app. Swiping through profiles and meeting people through an app is completely routine among Gen Z. "I would say all of my single friends are at least on one of the apps," New York-based Emma Schwartz said. She's on Raya and Hinge but has tried Bumble and The Lox Club as well.


YouTube changes misinformation policy to allow videos falsely claiming fraud in the 2020 US election

Engadget

In a Friday afternoon news dump, YouTube inexplicably announced today that 2020 election denialism is a-okay. The company says it "carefully deliberated this change" without offering any specifics on its reasons for the about-face. YouTube initially banned content disputing the results of the 2020 election in December of that year. In a feeble attempt to explain its decision (first reported by Axios), YouTube wrote that it "recognized it was time to reevaluate the effects of this policy in today's changed landscape. In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm. With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections."


Help! My Friend Keeps Asking Me to "Approve" Her Dating Profiles … but She's Taken.

Slate

Dear Prudence is Slate's advice column. For this edition, Alicia Montgomery, Slate's vice president of audio, will be filling in as Prudie. My friend Kari and I have been close since we were college roommates (we are now just about 40). Kari has been with her long-distance girlfriend Lora for the last four years, and recently Lora has been talking about moving to Kari's and my town in order to better facilitate having a baby. The road for them is going to be long, given the mechanics and their ages, but they have all systems go from their doctors. The problem is that I know Kari is not 100 percent committed to Lora; she says she's not sure Lora is the one, has built (but not, to my knowledge, deployed) dating profiles on multiple sites, and expresses jealousy to me quite often about my adventurous dating life.


She Built an App to Block Harassment on Twitter. Elon Musk Killed It

TIME - Tech

Tracy Chou launched the Twitter app Block Party in 2021 to help users escape targeted harassment campaigns that she, as an Asian American woman, knew from personal experience could ostracize vulnerable voices from the public conversation. But on Wednesday Block Party closed its doors, becoming the latest victim of soaring new bills imposed by a struggling Twitter under new owner Elon Musk. Under Twitter's former ownership, Chou struck a deal with the company for free access to data: a win-win arrangement that would allow Block Party to grow and provide Twitter with a valuable anti-harassment tool to which it didn't have to devote expensive engineering time. But Chou tells TIME that following the recent expiration of that contract, Twitter wanted Block Party to pay $42,000 per month for access to enough data to keep the app running. There was no way Block Party could afford the figure, she says.


The Instagram Founders' News App Artifact Is Actually an AI Play

WIRED

Today, Artifact is taking another jump on the generative-AI rocket ship in an attempt to address an annoying problem: clickbaity headlines. The app already offers a way for users to flag clickbait stories, and if multiple people tag an article, Artifact won't spread it. But, Systrom explains, sometimes the problem isn't with the story but the headline. It might promise too much, or mislead, or lure the reader into clicking only to find that key information was held back from the headline. From the publisher's viewpoint, winning more clicks is a big plus, but it's frustrating to users, who might feel they have been manipulated.


Why Hollywood Really Fears Generative AI

WIRED

The future of Hollywood looks a lot like Deepfake Ryan Reynolds selling you a Tesla. In a video, since removed but widely shared on Twitter, the actor is bespectacled in thick black frames, his mouth moving independently from his face, hawking electric vehicles: "How much do you think it would cost to own a car that's this fucking awesome?" On the verisimilitude scale, the video, which originally circulated last month, registered as blatantly unreal. Then its creator, financial advice YouTuber Kevin Paffrath, revealed he had made it as a ploy to attract the gaze of Elon Musk. Elsewhere on Twitter, people beseeched Reynolds to sue.


Humans stumped on difference between real or AI-generated images: study

FOX News

Humans have historically been excellent at identifying faces and photos compared to computers, but the advent of artificial intelligence-generated photos is throwing a curveball at humans, according to a new study that examines how people perceive fake images versus real ones. Tech experts have been warning that hyperrealistic images generated by AI could lead to the proliferation of misinformation online and cybersecurity issues. Last month, for example, panic spread after an AI-generated photo that apparently showed an explosion at the Pentagon went viral, leading to a short dip in the stock market. Researchers in Australia examined how human brains perceive and differentiate realistic AI-generated photos using both behavioral testing and neuroimaging experiments.