2025 digest of digests
Throughout the year we've reported on some of the larger stories, and some of the lesser-covered happenings, in our regular monthly digests. Here, we look back through the archives and pick out one or two stories from each of those digests. In January, AI startup DeepSeek released DeepSeek R1, a reasoning model designed to perform well on logic, maths, and pattern-finding tasks. The company also launched six smaller versions of R1, compact enough to run locally on laptops. In Wired, Zeyi Yang reported on who is behind the startup, whilst Tongliang Liu (in The Conversation) looked at how DeepSeek achieved its results with a fraction of the cash and computing power of its competitors.
AIhub monthly digest: December 2025 – studying bias in AI-based recruitment tools, an image dataset for ethical AI benchmarking, and end of year compilations
Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we look into bias in AI-based recruitment tools, find out about a new image dataset for ethical AI benchmarking, dig into human-robot interaction and social robotics, and look back on another busy year in the world of AI. We've been meeting some of the PhD students who were selected to take part in the Doctoral Consortium at the European Conference on Artificial Intelligence (ECAI-2025). In the second interview of the series, we caught up with Frida Hartman to find out how her PhD is going so far, and her plans for the next steps in her investigations. Frida, along with co-authors Mario Mirabile and Michele Dusi, was also the winner of the ECAI-2025 Diversity & Inclusion Competition, for work entitled .
The Point Where Reality Meets Fantasy: Mixed Adversarial Generators for Image Splice Detection
Modern photo editing tools make it easy to create realistic manipulated images. While fake images can be generated quickly, learning models to detect them is challenging due to the high variety of tampering artifacts and the lack of large labeled datasets of manipulated images. In this paper, we propose a new framework for training a discriminative segmentation model via an adversarial process. We simultaneously train four models: a generative annotation model G_A that estimates the pixel-wise probability of an image patch being either real or fake, a generative retouching model G_R that alters manipulated regions with the aim of making G_A make a mistake, and two discriminators that qualify the outputs of G_A and G_R. Our method extends the generative adversarial networks framework with two main contributions: (1) training of a generative model G_A that learns rich scene semantics for manipulated region detection, and (2) a per-class semantic loss that facilitates semantically consistent image retouching by G_R.
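The detector-versus-retoucher game described above can be illustrated with a deliberately simplified sketch: a logistic-regression "detector" stands in for G_A, and a gradient-based "retoucher" stands in for G_R. All data, names, and hyperparameters here are synthetic assumptions for illustration, not the paper's models (which are trained jointly, with discriminators, on real images):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(256, 2))   # toy features of pristine patches
fake = rng.normal(3.0, 1.0, size=(256, 2))   # toy features of spliced patches

# 1) Detector (stand-in for G_A): logistic regression, fake = class 1.
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(len(real)), np.ones(len(fake))])
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# 2) Retoucher (stand-in for G_R): gradient steps that push fake patches
#    toward the detector's "real" side, i.e. lower sigmoid(w.x + b).
retouched = fake.copy()
for _ in range(20):
    p = sigmoid(retouched @ w + b)
    retouched -= 0.2 * (p * (1 - p))[:, None] * w   # gradient of p w.r.t. x

before = sigmoid(fake @ w + b).mean()
after = sigmoid(retouched @ w + b).mean()
print(before, after)   # retouched patches fool the (fixed) detector more
```

In the paper the two players are trained simultaneously, so the detector must keep up with an ever-improving retoucher; the sketch freezes the detector only to keep the dynamics easy to follow.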
An Information-Theoretic Evaluation of Generative Models in Learning Multi-modal Distributions
The evaluation of generative models has received significant attention in the machine learning community. When applied to a multi-modal distribution which is common among image datasets, an intuitive evaluation criterion is the number of modes captured by the generative model. While several scores have been proposed to evaluate the quality and diversity of a model's generated data, the correspondence between existing scores and the number of modes in the distribution is unclear. In this work, we propose an information-theoretic diversity evaluation method for multi-modal underlying distributions. We utilize the Rényi Kernel Entropy (RKE) as an evaluation score based on quantum information theory to measure the number of modes in generated samples.
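The core idea of counting modes via an order-2 Rényi entropy of a kernel matrix can be sketched in a few lines: build a Gaussian kernel matrix over the samples, normalise it to unit trace, and exponentiate the order-2 entropy of its eigenvalues to get an effective mode count. The kernel bandwidth and toy data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rke_mode_count(samples, sigma=1.0):
    """Effective number of modes via order-2 Renyi kernel entropy.

    Builds a Gaussian kernel matrix over the samples, normalises it to
    unit trace, and returns exp(H2), where H2 = -log sum(lambda_i^2)
    over the eigenvalues of the normalised kernel matrix.
    """
    sq = np.sum((samples[:, None, :] - samples[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    K /= samples.shape[0]                  # unit-trace normalisation
    lam = np.linalg.eigvalsh(K)
    h2 = -np.log(np.sum(lam ** 2))
    return float(np.exp(h2))

rng = np.random.default_rng(0)
a = rng.normal([0, 0], 0.1, size=(100, 2))
b = rng.normal([100, 100], 0.1, size=(100, 2))
print(rke_mode_count(np.vstack([a, b])))   # close to 2 for two separated modes
```

With two well-separated clusters the normalised kernel matrix is nearly block diagonal with two dominant eigenvalues of about 0.5 each, so the score lands near 2; a single cluster scores near 1.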
Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models
We systematically study a wide variety of generative models spanning semantically diverse image datasets to understand and improve the feature extractors and metrics used to evaluate them. Using best practices in psychophysics, we measure human perception of image realism for generated samples by conducting the largest experiment evaluating generative models to date, and find that no existing metric strongly correlates with human evaluations. Comparing against 17 modern metrics for evaluating the overall performance, fidelity, diversity, rarity, and memorization of generative models, we find that the state-of-the-art perceptual realism of diffusion models, as judged by humans, is not reflected in commonly reported metrics such as FID. This discrepancy is not explained by diversity in generated samples, though one cause is over-reliance on Inception-V3. We address these flaws through a study of alternative self-supervised feature extractors, find that the semantic information encoded by individual networks strongly depends on their training procedure, and show that DINOv2-ViT-L/14 allows for much richer evaluation of generative models. Next, we investigate data memorization, and find that generative models do memorize training examples on simple, smaller datasets like CIFAR10, but not necessarily on more complex datasets like ImageNet. However, our experiments show that current metrics do not properly detect memorization: none in the literature is able to separate memorization from other phenomena such as underfitting or mode shrinkage.
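For reference, the FID mentioned above fits a Gaussian to each set of extracted features and computes the Fréchet distance between the two Gaussians; the feature extractor itself (Inception-V3, or DINOv2 as the paper advocates) is assumed to have run upstream. A minimal numpy sketch of the metric, with toy features in place of real ones:

```python
import numpy as np

def _trace_sqrt_product(S_r, S_g):
    # Tr((S_r S_g)^{1/2}) computed via the symmetric form
    # S_r^{1/2} S_g S_r^{1/2}, which has the same eigenvalues.
    w, V = np.linalg.eigh(S_r)
    root = (V * np.sqrt(np.clip(w, 0, None))) @ V.T
    eig = np.linalg.eigvalsh(root @ S_g @ root)
    return np.sum(np.sqrt(np.clip(eig, 0, None)))

def fid(feats_real, feats_gen):
    """Frechet distance between Gaussians fitted to two feature sets:
    ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2})."""
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    S_r = np.cov(feats_real, rowvar=False)
    S_g = np.cov(feats_gen, rowvar=False)
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(S_r) + np.trace(S_g)
                 - 2.0 * _trace_sqrt_product(S_r, S_g))

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 8))
print(fid(feats, feats))        # identical feature sets give (numerically) zero
print(fid(feats, feats + 2.0))  # a mean shift is penalised
```

Because the score only sees the fitted mean and covariance of the features, everything the paper criticises hinges on what the feature extractor encodes, which is exactly why swapping Inception-V3 for DINOv2-ViT-L/14 changes the rankings.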
AAAI 2025 presidential panel on the future of AI research – video discussion on AGI
In March 2025, the Association for the Advancement of Artificial Intelligence (AAAI) published a report on the Future of AI Research. The report, which was led by outgoing AAAI President Francesca Rossi, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way. As part of this project, members of the report team are taking part in a series of video panel discussions covering selected chapters from the report. In the first panel, the AI experts tackled the considerations around artificial general intelligence (AGI) development.