Most AI chatbots devour your user data - these are the worst offenders
Like many people today, you may turn to AI to answer questions, generate content, and gather information. But as they say, there's always a price to pay. In the case of AI, that price is user data. In a new report, VPN and security service Surfshark analyzed what types of data various AI chatbots collect from you and which ones scoop up the greatest amount. For its report, Surfshark looked at 11 popular AI chatbots -- ChatGPT, Claude AI, DeepSeek, Google Gemini, Grok, Jasper, Meta AI, Microsoft Copilot, Perplexity, Pi, and Poe.
Inside Anthropic's First Developer Day, Where AI Agents Took Center Stage
Anthropic's first developer conference kicked off in San Francisco on Thursday, and while the rest of the industry races toward artificial general intelligence, Anthropic's goal for the year is deploying a "virtual collaborator" in the form of an autonomous AI agent. "We're all going to have to contend with the idea that everything you do is eventually going to be done by AI systems," Anthropic CEO Dario Amodei said in a press briefing. As roughly 500 attendees munched breakfast sandwiches with an abnormal amount of arugula, and Anthropic staffers milled about in company-issued baseball caps, Amodei took the stage with his chief product officer, Mike Krieger, who asked: "When do you think there will be the first billion-dollar company with one human employee?" Amodei, wearing a light-gray jacket and a pair of Brooks running shoes, replied without skipping a beat: "2026."
Supplementary of Weak-shot Semantic Segmentation via Dual Similarity Transfer
In this appendix, we first provide more details about the datasets, evaluation, and implementation in Section A1, Section A2, and Section A3. Afterwards, we provide more qualitative comparisons in Section A4. Then, we conduct further experiments on pixel-pixel similarity transfer in Section A5. Finally, we conduct experiments to explore the generalization ability of our model to dataset expansion in Section A6. These two datasets both contain enough classes and abundant images, which makes them appropriate for exploring the problem of transfer learning across classes. Specifically, COCO-Stuff-10K [1] covers 171 semantic-level classes in total.
Weak-shot Semantic Segmentation via Dual Similarity Transfer
Semantic segmentation is an important and prevalent task, but severely suffers from the high cost of pixel-level annotations when extending to more classes in wider applications. To this end, we focus on the problem named weak-shot semantic segmentation, where the novel classes are learnt from cheaper image-level labels with the support of base classes having off-the-shelf pixel-level labels. To tackle this problem, we propose SimFormer, which performs dual similarity transfer upon MaskFormer.
MSAGPT: Neural Prompting Protein Structure Prediction via MSA Generative Pre-Training
Multiple Sequence Alignment (MSA) plays a pivotal role in unveiling the evolutionary trajectories of protein families. The accuracy of protein structure predictions is often compromised for protein sequences that lack sufficient homologous information to construct high-quality MSA. Although various methods have been proposed to generate virtual MSA under these conditions, they fall short in comprehensively capturing the intricate co-evolutionary patterns within MSA or require guidance from external oracle models. Here we introduce MSAGPT, a novel approach to prompt protein structure predictions via MSA generative pre-training in the low-MSA regime. MSAGPT employs a simple yet effective 2D evolutionary positional encoding scheme to model the complex evolutionary patterns.
Let's Talk About ChatGPT and Cheating in the Classroom
There's been a lot of talk about how AI tools like ChatGPT are changing education. Students are using AI to do research, write papers, and get better grades. So today on the show, we debate whether using AI in school is actually cheating. Plus, we dive into how students and teachers are using these tools, and we ask what place AI should have in the future of learning. Write to us at uncannyvalley@wired.com.
Adapting Neural Link Predictors for Data-Efficient Complex Query Answering
Erik Arakelyan, Pasquale Minervini, Daniel Daza, Michael Cochez
Answering complex queries on incomplete knowledge graphs is a challenging task where a model needs to answer complex logical queries in the presence of missing knowledge. Prior work in the literature has proposed to address this problem by designing architectures trained end-to-end for the complex query answering task with a reasoning process that is hard to interpret while requiring data and resource-intensive training. Other lines of research have proposed re-using simple neural link predictors to answer complex queries, reducing the amount of training data by orders of magnitude while providing interpretable answers. The neural link predictor used in such approaches is not explicitly optimised for the complex query answering task, implying that its scores are not calibrated to interact together.
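In this line of work, a pre-trained link predictor's scores are combined with fuzzy-logic operators rather than a learned end-to-end architecture: conjunctions become t-norms and existential variables are resolved by maximizing over candidate entities. The sketch below illustrates that idea for a two-hop conjunctive query with made-up scores; the product t-norm and max operator are standard choices in this setting, but the function and inputs are illustrative, not the paper's implementation.

```python
def two_hop_scores(hop1, hop2):
    """Answer scores for the query  ?b : exists y . r1(a, y) AND r2(y, b).

    hop1[y]    -- link-predictor score in [0, 1] for the atom r1(a, y)
    hop2[y][b] -- link-predictor score for the atom r2(y, b)

    The conjunction is combined with the product t-norm (s1 * s2), and
    the existential variable y is resolved with max over candidates.
    """
    n_answers = len(hop2[0])
    return [
        max(hop1[y] * hop2[y][b] for y in range(len(hop1)))
        for b in range(n_answers)
    ]

# Toy scores for two candidate intermediate entities y0, y1
# and two candidate answers b0, b1 (all values made up).
hop1 = [0.9, 0.2]        # r1(a, y0), r1(a, y1)
hop2 = [[0.1, 0.8],      # r2(y0, b0), r2(y0, b1)
        [0.7, 0.3]]      # r2(y1, b0), r2(y1, b1)
scores = two_hop_scores(hop1, hop2)
```

Because the link predictor's scores were never trained to be composed this way, their calibration matters: a predictor that is over-confident on one hop can dominate the product, which is exactly the issue the abstract raises.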
Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis
Although recent point cloud analysis achieves impressive progress, the paradigm of representation learning from a single modality gradually meets its bottleneck. In this work, we take a step towards more discriminative 3D point cloud representation by fully taking advantage of images, which inherently contain richer appearance information, e.g., texture, color, and shade. Specifically, this paper introduces a simple but effective point cloud cross-modality training (PointCMT) strategy, which utilizes view-images, i.e., rendered or projected 2D images of the 3D object, to boost point cloud analysis. In practice, to effectively acquire auxiliary knowledge from view images, we develop a teacher-student framework and formulate the cross-modal learning as a knowledge distillation problem. PointCMT eliminates the distribution discrepancy between different modalities through novel feature and classifier enhancement criteria and effectively avoids potential negative transfer. Note that PointCMT improves the point-only representation without architecture modification. Extensive experiments verify significant gains on various datasets with appealing backbones: equipped with PointCMT, PointNet++ and PointMLP achieve state-of-the-art performance on two benchmarks, i.e., 94.4% and 86.7% accuracy on ModelNet40 and ScanObjectNN, respectively. Code will be made available at https://github.com/ZhanHeshen/PointCMT.
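The teacher-student formulation mentioned above is, in the generic knowledge-distillation setup, a temperature-softened KL objective between teacher and student predictions. The sketch below shows that standard loss only; PointCMT's specific feature and classifier enhancement criteria are not reproduced here, and the temperature value is an illustrative assumption.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Standard knowledge-distillation loss: KL(teacher || student)
    on temperature-softened distributions, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl
```

In a PointCMT-style pipeline, the teacher would be the image-branch classifier over view-images and the student the point-cloud network, with this term added to the usual cross-entropy on ground-truth labels.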
Understanding the Role of Momentum in Stochastic Gradient Methods
Igor Gitman, Hunter Lang, Pengchuan Zhang, Lin Xiao
The use of momentum in stochastic gradient methods has become a widespread practice in machine learning. Different variants of momentum, including heavy-ball momentum, Nesterov's accelerated gradient (NAG), and quasi-hyperbolic momentum (QHM), have demonstrated success on various tasks. Despite these empirical successes, there is a lack of clear understanding of how the momentum parameters affect convergence and various performance measures of different algorithms. In this paper, we use the general formulation of QHM to give a unified analysis of several popular algorithms, covering their asymptotic convergence conditions, stability regions, and properties of their stationary distributions. In addition, by combining the results on convergence rates and stationary distributions, we obtain sometimes counter-intuitive practical guidelines for setting the learning rate and momentum parameters.
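The QHM formulation that unifies these variants keeps an exponential moving average of gradients and mixes it with the raw gradient: with momentum coefficient beta and mixing weight nu, the update is d_t = beta*d_{t-1} + (1-beta)*g_t and x_{t+1} = x_t - lr*((1-nu)*g_t + nu*d_t), so nu=0 recovers plain SGD and nu=1 recovers EMA-direction (heavy-ball-style) SGD. A minimal sketch on a toy quadratic; the objective and hyperparameter values are illustrative choices, not from the paper.

```python
def qhm_step(x, d, grad, lr=0.1, beta=0.9, nu=0.7):
    """One quasi-hyperbolic momentum (QHM) update.

    d is the exponential moving average of past gradients; the update
    direction interpolates between the raw gradient (weight 1 - nu)
    and the momentum buffer (weight nu).
    """
    d = beta * d + (1 - beta) * grad
    x = x - lr * ((1 - nu) * grad + nu * d)
    return x, d

# Minimize f(x) = x^2 (gradient 2x) starting from x = 1.0.
x, d = 1.0, 0.0
for _ in range(100):
    x, d = qhm_step(x, d, grad=2 * x)
```

Sweeping lr, beta, and nu in this toy setting is one way to see the stability regions and stationary-distribution effects the abstract analyzes: too large a step relative to the momentum parameters makes the iterates diverge rather than contract.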