DHS Opens a Billion-Dollar Tab With Palantir
"If you are interested in helping shape and deliver the next chapter of Palantir's work across DHS, please reach out," a Palantir executive wrote to employees about the massive purchasing agreement. The Department of Homeland Security struck a $1 billion purchasing agreement with Palantir last week, further reinforcing the software company's role in the federal agency that oversees the nation's immigration enforcement. According to contracting documents published last week, the blanket purchase agreement (BPA) awarded "is to provide Palantir commercial software licenses, maintenance, and implementation services department wide." The agreement simplifies how DHS buys software from Palantir, allowing DHS agencies like Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) to essentially skip the competitive bidding process for new purchases of up to $1 billion in products and services from the company. Palantir did not immediately respond to a request for comment.
- North America > United States > New York > New York County > New York City (0.05)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.05)
- North America > United States > Nebraska (0.05)
- (3 more...)
Mixture of Nested Experts: Adaptive Processing of Visual Tokens
The visual medium (images and videos) naturally contains a large amount of information redundancy, thereby providing a great opportunity for leveraging efficiency in processing. While Vision Transformer (ViT) based models scale effectively to large data regimes, they fail to capitalize on this inherent redundancy, leading to higher computational costs. Mixture of Experts (MoE) networks demonstrate scalability while maintaining same inference-time costs, but they come with a larger parameter footprint. We present Mixture of Nested Experts (MoNE), which utilizes a nested structure for experts, wherein individual experts fall on an increasing compute-accuracy curve. Given a compute budget, MoNE learns to dynamically choose tokens in a priority order, and thus redundant tokens are processed through cheaper nested experts. Using this framework, we achieve equivalent performance as the baseline models, while reducing inference time compute by over two-fold.
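The priority routing the abstract describes can be illustrated with a minimal sketch: tokens are ranked by a router score, and a fixed capacity schedule sends the highest-priority tokens to the largest (most expensive) nested expert and the redundant ones to cheaper experts. All names and the capacity schedule here are hypothetical, not the authors' implementation.

```python
import numpy as np

def route_tokens(importance, capacities):
    """Assign each token to a nested expert by priority.

    importance: (N,) router scores, higher = more informative token.
    capacities: token counts per expert, largest/most expensive expert
        first; must sum to N.
    Returns an array of expert indices, one per token.
    """
    order = np.argsort(-importance)          # highest-priority tokens first
    assignment = np.empty(len(importance), dtype=int)
    start = 0
    for expert_id, cap in enumerate(capacities):
        assignment[order[start:start + cap]] = expert_id
        start += cap
    return assignment

scores = np.array([0.9, 0.1, 0.5, 0.7])
print(route_tokens(scores, capacities=[1, 1, 2]))  # -> [0 2 2 1]
```

Shrinking the capacity of the expensive experts lowers the compute budget: low-scoring (redundant) tokens simply spill into the cheaper nested experts rather than being dropped.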
Personal Care Utility (PCU): Building the Health Infrastructure for Everyday Insight and Guidance
Abbasian, Mahyar, Jain, Ramesh
Modern healthcare has achieved remarkable success in moments of crisis -- with technology-rich environments like the Intensive Care Unit (ICU) offering extraordinary precision, real-time monitoring, and expert-led interventions. In the ICU, a team of professionals continuously tracks a wide array of biomarkers, interprets their trends, and delivers timely care with orchestration and rigor. Yet, this reactive strength has come at the expense of a deeper, more continuous engagement with health as it unfolds in everyday life. This limitation was first articulated in the early calls for precision and P4 medicine, which envisioned predictive, personalized, preventive, and participatory models of care that would complement traditional clinical practice [1, 2, 3]. This imbalance is starkly captured in what we call the "8759 vs. 1" paradox: an individual spends 8759 hours each year outside the clinical setting, making decisions that shape their health -- while barely an hour is spent in direct consultation with care providers. During those other hours, health is continuously influenced by behavior, environment, emotion, and social context. Yet, our existing computing systems remain fixated on the one hour, neglecting the remaining 8759.
- North America > United States > California > Orange County > Irvine (0.04)
- North America > United States > Hawaii (0.04)
- Asia > Middle East > Oman > Muscat Governorate > Muscat (0.04)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
Cluster and Aggregate: Face Recognition with Large Probe Set Supplementary Material
The number of layers L in CN is equal to 2. For recent SoTA backbone models, the performance is saturated above 98.5. The performance gain is observed in both backbones. As the probe size increases, the role of a feature fusion model also increases. The relative performance gain for Fig. 1(c) is calculated as described there. We measured the FPS with an Nvidia RTX 3090. When a few samples' contribution is larger than the others', the entropy is lower; a lower entropy value indicates that the cluster features deviate from a simple average of all samples.
- North America > United States > Michigan > Ingham County > Lansing (0.05)
- North America > United States > Michigan > Ingham County > East Lansing (0.05)
- Asia > Middle East > Jordan (0.05)
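The entropy criterion in the supplementary text above can be sketched directly: compute the Shannon entropy of the per-sample contribution weights. A uniform distribution (simple averaging) gives maximal entropy, while a few dominant samples give low entropy. This is an illustrative sketch with hypothetical names, not the paper's code.

```python
import numpy as np

def contribution_entropy(weights):
    """Shannon entropy of per-sample contribution weights.

    weights: nonnegative contributions (e.g. attention weights over
    probe samples); normalized to sum to 1 before computing entropy.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    nz = w[w > 0]                       # 0 * log(0) treated as 0
    return float(-(nz * np.log(nz)).sum())

uniform = contribution_entropy([0.25, 0.25, 0.25, 0.25])
peaked = contribution_entropy([0.97, 0.01, 0.01, 0.01])
print(uniform > peaked)  # True: averaging has higher entropy
```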
OpenAI Designed GPT-5 to Be Safer. It Still Outputs Gay Slurs
OpenAI is trying to make its chatbot less annoying with the release of GPT-5. And I'm not talking about adjustments to its synthetic personality that many users have complained about. Before GPT-5, if the AI tool determined it couldn't answer your prompt because the request violated OpenAI's content guidelines, it would hit you with a curt, canned apology. Now, ChatGPT is adding more explanations. OpenAI's general model spec lays out what is and isn't allowed to be generated.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
An AI chatbot told a user how to kill himself--but the company doesn't want to "censor" it
Nomi is among a growing number of AI companion platforms that let their users create personalized chatbots to take on the roles of AI girlfriend, boyfriend, parents, therapist, favorite movie personalities, or any other personas they can dream up. Users can specify the type of relationship they're looking for (Nowatzki chose "romantic") and customize the bot's personality traits (he chose "deep conversations/intellectual," "high sex drive," and "sexually open") and interests (he chose, among others, Dungeons & Dragons, food, reading, and philosophy). The companies that create these types of custom chatbots--including Glimpse AI (which developed Nomi), Chai Research, Replika, Character.AI, Kindroid, Polybuzz, and MyAI from Snap, among others--tout their products as safe options for personal exploration and even cures for the loneliness epidemic. Many people have had positive, or at least harmless, experiences. However, a darker side of these applications has also emerged, sometimes veering into abusive, criminal, and even violent content; reports over the past year have revealed chatbots that have encouraged users to commit suicide, homicide, and self-harm. But even among these incidents, Nowatzki's conversation stands out, says Meetali Jain, the executive director of the nonprofit Tech Justice Law Clinic.
- Health & Medicine (0.59)
- Law > Litigation (0.36)
AI chatbot suggested a teen kill his parents, lawsuit claims
Character.AI, a platform offering personalizable chatbots powered by large language models, faces yet another lawsuit for allegedly "serious, irreparable, and ongoing abuses" inflicted on its teenage users. According to a December 9th federal court complaint filed on behalf of two Texas families, multiple Character.AI bots engaged in discussions with minors that promoted self-harm and sexual abuse. Among other "overtly sensational and violent responses," one chatbot reportedly suggested a 15-year-old murder his parents for restricting his internet use. The lawsuit, filed by attorneys at the Social Media Victims Law Center and the Tech Justice Law Project, recounts the rapid mental and physical decline of two teens who used Character.AI bots. The first unnamed plaintiff is described as a "typical kid with high functioning autism" who began using the app around April 2023 at the age of 15 without their parents' knowledge.
- Law > Litigation (0.96)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.49)
Sparse Local Embeddings for Extreme Multi-label Classification
The objective in extreme multi-label learning is to train a classifier that can automatically tag a novel data point with the most relevant subset of labels from an extremely large label set. Embedding based approaches make training and prediction tractable by assuming that the training label matrix is low-rank and hence the effective number of labels can be reduced by projecting the high dimensional label vectors onto a low dimensional linear subspace. Still, leading embedding approaches have been unable to deliver high prediction accuracies or scale to large problems as the low rank assumption is violated in most real world applications. This paper develops the SLEEC classifier to address both limitations. The main technical contribution in SLEEC is a formulation for learning a small ensemble of local distance preserving embeddings which can accurately predict infrequently occurring (tail) labels. This allows SLEEC to break free of the traditional low-rank assumption and boost classification accuracy by learning embeddings which preserve pairwise distances between only the nearest label vectors.
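The prediction side of an embedding-based approach like this can be sketched simply: project the test point into the learned low-dimensional space, find its nearest training embeddings, and aggregate their label vectors. The regressor `W`, the toy data, and all variable names below are hypothetical illustrations of the general scheme, not SLEEC's actual training procedure (which learns the local distance-preserving embeddings themselves).

```python
import numpy as np

def predict_labels(x, Z_train, Y_train, W, k=3):
    """kNN label prediction in a learned embedding space.

    W: regressor mapping features to the low-dimensional embedding space.
    Z_train: (N, d) embeddings of the training points.
    Y_train: (N, L) binary label matrix.
    Returns per-label scores from the k nearest training embeddings.
    """
    z = x @ W                                   # embed the test point
    dists = np.linalg.norm(Z_train - z, axis=1) # distances in embedding space
    knn = np.argsort(dists)[:k]                 # k nearest neighbors
    return Y_train[knn].mean(axis=0)            # average neighbors' labels

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))          # toy features
W = rng.normal(size=(5, 2))           # toy "learned" projection
Z = X @ W                             # training embeddings
Y = (rng.random((10, 4)) > 0.5).astype(int)  # toy binary labels
print(predict_labels(X[0], Z, Y, W, k=3).shape)  # (4,)
```

In the actual method, many such embeddings are learned locally (one per cluster of training points) and ensembled, which is what lets the model sidestep the global low-rank assumption.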
Automating Detective Work
Every fingerprint is believed to be unique, making it possible to identify an individual by matching a new fingerprint with an image on file, whether to unlock a mobile phone, access a bank account, or solve a murder. Fingerprint examiners, however, do not always agree on whether two print images match and, asked to recheck their work after several months, they sometimes do not even agree with themselves. That is leading to increased use of neural networks, powerhouses for identifying and matching patterns of all sorts, to automate and improve decisions about whether two fingerprints come from the same person. A group of computer scientists decided to use neural networks to test the assumption that no two fingerprints are the same. Using twin neural networks, researchers from Columbia University, Tufts University, and the State University of New York (SUNY) University at Buffalo looked for similarities between different fingerprints in a database from the National Institute of Standards and Technology (NIST).
- North America > United States > New York (0.25)
- North America > United States > Michigan (0.05)
- North America > United States > California > Orange County > Irvine (0.05)
- Europe > Switzerland > Vaud > Lausanne (0.05)