I'm a Public-School English Teacher. The Most Vocal Defenders of K–12 Liberal Arts Are Not Who You'd Expect.
On May 6, the Texas House Committee on Public Education discussed S.B. 13, a bill seeking to remove from public school libraries and classrooms all "profane" and "indecent content." At the hearing, Republican Rep. Terri Leo-Wilson focused on the concern that the legislation could harm the transmission of cultural heritage by depriving students of "classics." She explained, using an adjective that in our current culture wars has come to describe a type of humanities education favored by conservatives, that her "kids were classically trained, so they had their graduation picture with all sorts of books … classic works of literature." When an activist commenting during the hearing remarked that among renowned writers, Toni Morrison's work is singularly "very sexualized," Leo-Wilson replied, without reference to any one book, "She might be famous, but that's not considered, I don't think, a classic."
Most AI chatbots devour your user data - these are the worst offenders
Like many people today, you may turn to AI to answer questions, generate content, and gather information. But as they say, there's always a price to pay. In the case of AI, that means user data. In a new report, VPN and security service Surfshark analyzed what types of data various AIs collect from you and which ones scoop up the greatest amount. For its report, Surfshark looked at popular AI chatbots -- ChatGPT, Claude AI, DeepSeek, Google Gemini, Grok, Jasper, Meta AI, Microsoft Copilot, Perplexity, Pi, and Poe.
MSAGPT: Neural Prompting Protein Structure Prediction via MSA Generative Pre-Training
Multiple Sequence Alignment (MSA) plays a pivotal role in unveiling the evolutionary trajectories of protein families. The accuracy of protein structure predictions is often compromised for protein sequences that lack sufficient homologous information to construct high-quality MSA. Although various methods have been proposed to generate virtual MSA under these conditions, they fall short in comprehensively capturing the intricate co-evolutionary patterns within MSA or require guidance from external oracle models. Here we introduce MSAGPT, a novel approach to prompt protein structure predictions via MSA generative pre-training in the low-MSA regime. MSAGPT employs a simple yet effective 2D evolutionary positional encoding scheme to model the complex evolutionary patterns.
Let's Talk About ChatGPT and Cheating in the Classroom
There's been a lot of talk about how AI tools like ChatGPT are changing education. Students are using AI to do research, write papers, and get better grades. So today on the show, we debate whether using AI in school is actually cheating. Plus, we dive into how students and teachers are using these tools, and we ask what place AI should have in the future of learning. Write to us at uncannyvalley@wired.com.
Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis
Although recent point cloud analysis achieves impressive progress, the paradigm of representation learning from a single modality gradually meets its bottleneck. In this work, we take a step towards more discriminative 3D point cloud representation by fully taking advantage of images, which inherently contain richer appearance information, e.g., texture, color, and shade. Specifically, this paper introduces a simple but effective point cloud cross-modal training (PointCMT) strategy, which utilizes view images, i.e., rendered or projected 2D images of the 3D object, to boost point cloud analysis. In practice, to effectively acquire auxiliary knowledge from view images, we develop a teacher-student framework and formulate cross-modal learning as a knowledge distillation problem. PointCMT eliminates the distribution discrepancy between different modalities through novel feature and classifier enhancement criteria and effectively avoids potential negative transfer. Note that PointCMT improves the point-only representation without architecture modification. Extensive experiments verify significant gains on various datasets using appealing backbones, i.e., equipped with PointCMT, PointNet++ and PointMLP achieve state-of-the-art performance on two benchmarks, i.e., 94.4% and 86.7% accuracy on ModelNet40 and ScanObjectNN, respectively. Code will be made available at https://github.com/ZhanHeshen/PointCMT.
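The teacher-student formulation the abstract describes can be sketched as a standard knowledge-distillation objective, where an image-based teacher's softened predictions supervise a point-cloud student alongside the hard labels. This is a generic NumPy sketch under assumed defaults (temperature, loss weighting, and tensor shapes are illustrative), not the paper's actual PointCMT enhancement criteria:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic KD loss: KL between softened teacher/student distributions
    plus cross-entropy on ground-truth labels. T and alpha are illustrative."""
    # Soft-target term, scaled by T^2 so gradients stay comparable across T
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    soft = (p_t * (np.log(p_t + 1e-12) - log_p_s)).sum(axis=-1).mean() * T * T
    # Hard-label cross-entropy on the student's ordinary (T=1) predictions
    log_p1 = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p1[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard
```

In a cross-modal setting like the one described, `teacher_logits` would come from a network fed rendered view images and `student_logits` from the point-cloud backbone, so only the point branch is needed at inference time.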
Snag this 98-inch TCL 4K smart TV at Amazon for $1,500 less ahead of Memorial Day
SAVE 38%: As of May 23, you can get the TCL 98-inch QM7K QD-Mini LED 4K Smart TV (98QM7K, 2025 Model) for $2,499.99, the lowest price we've seen for this model. Memorial Day is just a few days away, and Amazon is offering massive discounts on TVs of all sizes, including this 98-inch TCL 4K smart TV.
Robots square off in world's first humanoid boxing match
Breakthroughs, discoveries, and DIY tips sent every weekday. After decades of being tortured, shoved, kicked, burned, and bludgeoned, robots are finally getting their chance to fight back. This weekend, Chinese robotics maker Unitree says it will livestream the world's first boxing match between two of its humanoid robots. The event, titled Unitree Iron Fist King: Awakening, will feature a face-off between two of Unitree's 4.3-foot-tall G1 robots. The robots will reportedly be remotely controlled by human engineers, though they are also expected to demonstrate some autonomous, pre-programmed actions.
Interpreting Learned Feedback Patterns in Large Language Models
Luke Marks, Amir Abdullah, Clement Neo
Reinforcement learning from human feedback (RLHF) is widely used to train large language models (LLMs). However, it is unclear whether LLMs accurately learn the underlying preferences in human feedback data. We coin the term Learned Feedback Pattern (LFP) for patterns in an LLM's activations learned during RLHF that improve its performance on the fine-tuning task. We hypothesize that LLMs with LFPs accurately aligned to the fine-tuning feedback exhibit consistent activation patterns for outputs that would have received similar feedback during RLHF. To test this, we train probes to estimate the feedback signal implicit in the activations of a fine-tuned LLM. We then compare these estimates to the true feedback, measuring how faithfully the LFPs reflect the fine-tuning feedback. Our probes are trained on a condensed, sparse, and interpretable representation of LLM activations, making it easier to correlate features of the input with our probe's predictions. We validate our probes by comparing the neural features they correlate with positive feedback inputs against the features GPT-4 describes and classifies as related to LFPs. Understanding LFPs can help minimize discrepancies between LLM behavior and training objectives, which is essential for the safety and alignment of LLMs.
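The probing setup the abstract describes can be illustrated with a toy linear probe on synthetic "activations": fit a regressor from a sparse feature representation to a feedback signal, then inspect which features the probe weights most heavily. Everything below (the data, shapes, ridge solver, and feature indices) is an illustrative assumption, not the paper's actual probe or representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a condensed, sparse representation of LLM activations:
# 200 examples, 32 features. The "true" feedback signal depends on only
# two features (indices 3 and 17) plus noise.
X = rng.normal(size=(200, 32))
true_w = np.zeros(32)
true_w[[3, 17]] = [1.5, -2.0]
feedback = X @ true_w + 0.1 * rng.normal(size=200)

# Linear probe: ridge-regularized least squares estimating the feedback
# signal implicit in the activations.
lam = 1e-2
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(32), X.T @ feedback)
pred = X @ w_hat

# Ranking features by |probe weight| points at which activation features
# the learned feedback pattern relies on.
top_features = np.argsort(np.abs(w_hat))[-2:]
```

On this synthetic data the probe recovers the two planted features, which mirrors the interpretability step the abstract describes: correlating probe predictions back to input features to characterize the LFP.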
Breaking encryption with a quantum computer just got 20 times easier
Quantum computers could crack a common data encryption technique once they have a million qubits, or quantum bits. While this is still well beyond the capabilities of existing quantum computers, this new estimate is 20 times lower than previously thought, suggesting the day encryption is cracked is closer than we think.