Social Media
5 AI prompts to put serious money in your pocket
A majority of small businesses are using artificial intelligence and finding that it can save time and money. So, you want to start making money with AI, but you're not trying to build Skynet or learn 15 programming languages first? Good, because neither am I. You don't need to become the next Sam Altman or hold a Ph.D. in machine learning to turn artificial intelligence into real income. What you do need is curiosity, a dash of creativity, and the right prompts.
Tinder is testing a height preference, putting an end to short king spring
Tinder's incoming CEO wants to shed the app's hookup reputation, yet the app is testing a pretty superficial preference: height. In recent days, users have started noticing a height "filter" in the app. Another dating app, Hinge, already offers a height filter for premium users. Both Tinder and Hinge are owned by Match Group. Apparently, though, height is being tested as a paid preference, not a hard filter.
Continual Learning with Global Alignment
Continual learning aims to sequentially learn new tasks without forgetting previous tasks' knowledge (catastrophic forgetting). One factor that can cause forgetting is interference between the gradients on losses from different tasks. When the gradients on the current task's loss point in directions opposing those on previous tasks' losses, updating the model for the current task may degrade performance on previous tasks. In this paper, we first identify causes of this interference, and hypothesize that correlations between data representations are a key factor. We then propose a method for promoting appropriate correlations between arbitrary tasks' data representations (i.e., global alignment) in individual task learning. Specifically, we learn the data representation as a task-specific composition of pre-trained token representations shared across all tasks.
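The interference the abstract describes can be checked numerically. Below is a minimal illustrative sketch (not the paper's method): two tasks share a linear model, and a negative dot product between their loss gradients means a step that helps the current task hurts the previous one to first order. All data and names here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)  # shared model parameters

# Hypothetical per-task least-squares data (X, y)
X1, y1 = rng.normal(size=(8, 4)), rng.normal(size=8)
X2, y2 = rng.normal(size=(8, 4)), rng.normal(size=8)

def grad_mse(X, y, w):
    # Gradient of 0.5 * ||Xw - y||^2 with respect to w
    return X.T @ (X @ w - y)

g_prev = grad_mse(X1, y1, w)   # gradient on the previous task's loss
g_curr = grad_mse(X2, y2, w)   # gradient on the current task's loss

# Negative dot product => a gradient step on the current task increases
# the previous task's loss (to first order): the interference case.
interference = float(g_prev @ g_curr)
print("interfering" if interference < 0 else "aligned", interference)
```

In practice the same check (cosine or dot product of per-task gradients) is a common diagnostic for gradient conflict in continual and multi-task learning.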
It's the End of the World (And It's Their Fault)
It's late morning on a Monday in March and I am, for reasons I will explain momentarily, in a private bowling alley deep in the bowels of a $65 million mansion in Utah. Jesse Armstrong, the showrunner of HBO's hit series Succession, approaches me, monitor headphones around his neck and a wide grin on his face. "I take it you've seen the news," he says, flashing his phone and what appears to be his X feed in my direction. Everyone had: An hour earlier, my boss Jeffrey Goldberg had published a story revealing that U.S. national-security leaders had accidentally added him to a Signal group chat where they discussed their plans to conduct then-upcoming military strikes in Yemen. "Incredibly fucking depressing," Armstrong says.
DreamShard: Generalizable Embedding Table Placement for Recommender Systems
We study embedding table placement for distributed recommender systems, which aims to partition and place the tables on multiple hardware devices (e.g., GPUs) to balance the computation and communication costs. Although prior work has explored learning-based approaches for the device placement of computational graphs, embedding table placement remains a challenging problem because of 1) the operation fusion of embedding tables, and 2) the generalizability requirement on unseen placement tasks with different numbers of tables and/or devices.
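To make the underlying placement problem concrete, here is a minimal sketch of a classic greedy baseline (assumption: this is not DreamShard's learned policy, just the load-balancing problem it improves on): assign each table to the currently least-loaded device, largest tables first. The table costs are hypothetical proxies for compute/communication cost.

```python
def place_tables(table_costs, num_devices):
    """Assign each table index to a device, balancing total per-device cost."""
    placement = {d: [] for d in range(num_devices)}
    loads = [0.0] * num_devices
    # Sort tables by descending cost, then greedily assign each to the
    # device with the smallest current load.
    for t in sorted(range(len(table_costs)), key=lambda i: -table_costs[i]):
        d = loads.index(min(loads))
        placement[d].append(t)
        loads[d] += table_costs[t]
    return placement, loads

# Hypothetical per-table costs (e.g., proportional to rows * embedding_dim)
costs = [9.0, 7.0, 6.0, 5.0, 3.0, 2.0]
placement, loads = place_tables(costs, num_devices=2)
print(placement, loads)
```

A greedy heuristic like this ignores operation fusion and does not generalize across task sizes in any learned sense, which is exactly the gap the abstract highlights.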
2 Preliminary. We use A_{u,v} = 1 to denote the existence of an edge between nodes u and v, and A_{u,v} = 0 otherwise.
Graph homophily refers to the phenomenon that connected nodes tend to share similar characteristics. Understanding this concept and its related metrics is crucial for designing effective Graph Neural Networks (GNNs). The most widely used homophily metrics, such as edge or node homophily, quantify such "similarity" as label consistency across the graph topology. These metrics are believed to be able to reflect the performance of GNNs, especially on node-level tasks. However, many recent studies have empirically demonstrated that the performance of GNNs does not always align with homophily metrics, and how homophily influences GNNs still remains unclear and controversial.
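The edge-homophily metric mentioned above is simply the fraction of edges whose endpoints share a label. A small self-contained sketch, using a toy graph and labels invented for illustration:

```python
def edge_homophily(edges, labels):
    """Fraction of edges connecting two nodes with the same label."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Toy graph: 4 nodes with labels 0, 0, 1, 1 and 4 edges
labels = [0, 0, 1, 1]
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(edge_homophily(edges, labels))  # 2 of 4 edges are intra-class -> 0.5
```

Node homophily is computed analogously, averaging the same-label edge fraction per node; the abstract's point is that neither metric reliably predicts GNN accuracy.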
IndicVoices-R: Unlocking a Massive Multilingual Multi-speaker Speech Corpus for Scaling Indian TTS
Recent advancements in text-to-speech (TTS) synthesis show that large-scale models trained with extensive web data produce highly natural-sounding output. However, such data is scarce for Indian languages due to the lack of high-quality, manually subtitled data on platforms like LibriVox or YouTube. To address this gap, we enhance existing large-scale ASR datasets containing natural conversations collected in low-quality environments to generate high-quality TTS training data. Our pipeline leverages the cross-lingual generalization of denoising and speech enhancement models trained on English and applied to Indian languages. This results in IndicVoices-R (IV-R), the largest multilingual Indian TTS dataset derived from an ASR dataset, with 1,704 hours of high-quality speech from 10,496 speakers across 22 Indian languages.
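The pipeline shape described above, enhancing noisy ASR clips and keeping only those that meet a quality bar, can be sketched schematically. This is an assumption-laden illustration, not IV-R's actual code: `enhance` and `quality_score` are hypothetical stand-ins for a denoising/speech-enhancement model and a quality estimator.

```python
def build_tts_subset(clips, enhance, quality_score, threshold=0.8):
    """Return enhanced clips whose estimated quality clears `threshold`."""
    kept = []
    for clip in clips:
        cleaned = enhance(clip)            # e.g., a denoising model
        if quality_score(cleaned) >= threshold:
            kept.append(cleaned)
    return kept

# Toy usage with scalar stand-ins: "enhancement" lifts a clip's quality
# value by 0.3, and the score is the value itself.
clips = [0.2, 0.6, 0.9]
result = build_tts_subset(clips, enhance=lambda c: c + 0.3,
                          quality_score=lambda c: c, threshold=0.8)
print(result)
```

In a real pipeline the clips would be waveforms, the enhancer a trained model applied cross-lingually (as the abstract describes), and the score an objective quality metric.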
How to access and download your Facebook data
Reviewing your Facebook data allows you to see what personal information Facebook has collected about you, helping you make informed decisions about your privacy settings. You might also need a copy of your data, which serves as a backup of your photos, messages and memories in case you lose access to your account or decide to delete it. Additionally, understanding what data Facebook stores can help you better comprehend how the platform uses your information for advertising and content personalization. Here's how to do it.