Goto

Collaborating Authors

 upload




Bridging Online Behavior and Clinical Insight: A Longitudinal LLM-based Study of Suicidality on YouTube Reveals Novel Digital Markers

Sobol, Ilanit, Lissak, Shir, Tikochinski, Refael, Nakash, Tal, Klomek, Anat Brunstein, Fruchter, Eyal, Reichart, Roi

arXiv.org Artificial Intelligence

Suicide remains a leading cause of death in Western countries. As social media becomes central to daily life, digital footprints offer valuable insight into suicidal behavior. Focusing on individuals who attempted suicide while uploading videos to their channels, we investigate: How do linguistic patterns on YouTube reflect suicidal behavior, and how do these patterns align with or differ from expert knowledge? We examined linguistic changes around suicide attempts and compared individuals who attempted suicide while actively uploading to their channel with three control groups: those with prior attempts, those experiencing major life events, and matched individuals from the broader cohort. Applying complementary bottom-up, hybrid, and expert-driven approaches, we analyzed a novel longitudinal dataset of 181 suicide-attempt channels and 134 controls. In the bottom-up analysis, LLM-based topic-modeling identified 166 topics; five were linked to suicide attempts, two also showed attempt-related temporal changes (Mental Health Struggles, $OR = 1.74$; YouTube Engagement, $OR = 1.67$; $p < .01$). In the hybrid approach, clinical experts reviewed LLM-derived topics and flagged 19 as suicide-related. However, none showed significant effects beyond those identified bottom-up. YouTube Engagement, a platform-specific indicator, was not flagged, underscoring the value of bottom-up discovery. A top-down psychological assessment of suicide narratives revealed differing motivations: individuals describing prior attempts aimed to help others ($β=-1.69$, $p<.01$), whereas those attempted during the uploading period emphasized personal recovery ($β=1.08$, $p<.01$). By integrating these approaches, we offer a nuanced understanding of suicidality, bridging digital behavior and clinical insights.


What Is Adobe Firefly? Here's How to Use This Powerful Generative AI Tool

WIRED

Adobe Firefly is a deceptively powerful AI playground to generate images, videos, and more. Here's how to make the most of it. All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Adobe Firefly feels like the best-kept secret in software right now.


If you could upload your mind to a virtual utopia, would you?

New Scientist

"What does it really mean to upload your consciousness into intangible space?" In, the characters face an impossible choice: upload your mind into a virtual utopia, or crumble away in the abandoned physical world. Mind-uploading is familiar to us as a science fiction trope, often anchoring relationship dramas and philosophical inquiry. But what does it really mean to upload your consciousness into intangible space? Can the mechanics be extrapolated from our present-day science?



FedOC: Multi-Server FL with Overlapping Client Relays in Wireless Edge Networks

Ji, Yun, Chen, Zeyu, Zhong, Xiaoxiong, Ma, Yanan, Zhang, Sheng, Fang, Yuguang

arXiv.org Artificial Intelligence

Multi-server Federated Learning (FL) has emerged as a promising solution to mitigate communication bottlenecks of single-server FL. We focus on a typical multi-server FL architecture, where the regions covered by different edge servers (ESs) may overlap. A key observation of this architecture is that clients located in the overlapping areas can access edge models from multiple ESs. Building on this insight, we propose FedOC (Federated learning with Overlapping Clients), a novel framework designed to fully exploit the potential of these overlapping clients. In FedOC, overlapping clients could serve dual roles: (1) as Relay Overlapping Clients (ROCs), they forward edge models between neighboring ESs in real time to facilitate model sharing among different ESs; and (2) as Normal Overlapping Clients (NOCs), they dynamically select their initial model for local training based on the edge model delivery time, which enables indirect data fusion among different regions of ESs. The overall FedOC workflow proceeds as follows: in every round, each client trains local model based on the earliest received edge model and transmits to the respective ESs for model aggregation. Then each ES transmits the aggregated edge model to neighboring ESs through ROC relaying. Upon receiving the relayed models, each ES performs a second aggregation and subsequently broadcasts the updated model to covered clients. The existence of ROCs enables the model of each ES to be disseminated to the other ESs in a decentralized manner, which indirectly achieves intercell model and speeding up the training process, making it well-suited for latency-sensitive edge environments. Extensive experimental results show remarkable performance gains of our scheme compared to existing methods.


YouTube Thinks AI Is Its Next Big Bang

WIRED

On its 20th anniversary, YouTube is venturing into an era of AI-generated video, and may never be the same. Google figured out early on that video would be a great addition to its search business, so in 2005 it launched Google Video. Focused on making deals with the entertainment industry for second-rate content, and overly cautious on what users could upload, it flopped . In 2006, Google snapped up that year-old company, figuring it would sort out the IP stuff later. Though the $1.65 billion purchase price for YouTube was about a billion dollars more than its valuation, it was one of the greatest bargains ever.


Hackers can hide AI prompt injection attacks in resized images

PCWorld

"AI" tools are all the rage at the moment, even among users who aren't all that savvy when it comes to conventional software or security--and that's opening up all sorts of new opportunities for hackers and others who want to take advantage of them. A new research team has discovered a way to hide prompt injection attacks in uploaded images. A prompt injection attack is a way to hide instructions for an LLM or other "artificial intelligence" system, usually somewhere a human operator can't see them. It's the whispered "loser-says-what" of computer security. A great example is hiding a phishing attempt in an email in plain text that's colored the same as the background, knowing that Gemini will summarize the text even though the human recipient can't read it.


Google Gemini is getting creepier by using your uploads to train AI

PCWorld

Google Gemini continues to push the limits of what it knows about you. On Wednesday, Google's big initiative was a way to stop Gemini from learning more about you, while notifying users that content you share with it may be used as a foundation for chats with other users. "In the coming weeks, your'Gemini Apps Activity' setting will be renamed'Keep Activity,'" Google said in a blog post. "When this setting is on, a sample of your future uploads will be used to help improve Google services for everyone." Today, Google is allowing Gemini to remember what it knows about you, and this behavior is on by default. "When this setting is on, Gemini remembers key details and preferences you've shared, leading to more natural and relevant conversations, as if you're collaborating with a partner who's already up to speed," Google said.