Best acronym? Best use of AI? We present our end-of-year awards

New Scientist

Feedback has spent some time sifting through 2025's key scientific achievements to come up with a range of weird and wonderful (and less wonderful) winners for our inaugural Backsies awards. Being a New Scientist reader, you are probably savvy enough to realise that end-of-year roundups are written weeks ahead of time. This particular summation was drafted on 1 December, just as Feedback was preparing to spend 24 days avoiding hearing Wham!'s Last Christmas and trying to persuade Feedback Jr to make up their mind about what they want for their main present. Anything radically silly that may have happened after that date will have to wait until next year. Truly, 2025 has been rich in all the things Feedback is interested in. We learned about fascinating proposals like nuking the seabed to stop climate change, a notion that went straight into our Do Not Recommend pile.


Not All Features Deserve Attention: Graph-Guided Dependency Learning for Tabular Data Generation with Language Models

Zhang, Zheyu, Yang, Shuo, Prenkaj, Bardh, Kasneci, Gjergji

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have shown strong potential for tabular data generation by modeling textualized feature-value pairs. However, tabular data inherently exhibits sparse feature-level dependencies, where many feature interactions are structurally insignificant. This creates a fundamental mismatch as LLMs' self-attention mechanism inevitably distributes focus across all pairs, diluting attention on critical relationships, particularly in datasets with complex dependencies or semantically ambiguous features. To address this limitation, we propose GraDe (Graph-Guided Dependency Learning), a novel method that explicitly integrates sparse dependency graphs into LLMs' attention mechanism. GraDe employs a lightweight dynamic graph learning module guided by externally extracted functional dependencies, prioritizing key feature interactions while suppressing irrelevant ones. Our experiments across diverse real-world datasets demonstrate that GraDe outperforms existing LLM-based approaches by up to 12% on complex datasets while achieving competitive results with state-of-the-art approaches in synthetic data quality. Our method is minimally intrusive yet effective, offering a practical solution for structure-aware tabular data modeling with LLMs.
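The abstract does not give GraDe's architecture, but the core idea of steering self-attention with a sparse feature-dependency graph can be sketched in a few lines. The sketch below (a minimal NumPy illustration, assuming a hard 0/1 dependency mask rather than the paper's learned dynamic graph) biases attention scores so that weight concentrates on graph edges and non-dependency pairs are suppressed.

```python
import numpy as np

def masked_attention(q, k, v, dep_mask, penalty=1e9):
    # q, k, v: (n_features, d) arrays of per-feature token embeddings.
    # dep_mask: (n_features, n_features) 0/1 matrix; 1 marks a dependency edge.
    scores = q @ k.T / np.sqrt(q.shape[1])
    # Push non-edge scores toward -inf so softmax ignores them.
    scores = np.where(dep_mask == 1, scores, scores - penalty)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
dep = np.eye(4, dtype=int)
dep[0, 1] = dep[1, 0] = 1  # hypothetical: features 0 and 1 depend on each other
out, w = masked_attention(q, k, v, dep)
print(np.round(w, 3))  # near-zero weight on every non-dependency pair
```

A soft version of this (adding a learned bias rather than a hard penalty) would correspond more closely to a "lightweight dynamic graph learning module," but the hard mask shows why attention stops being diluted across all feature pairs.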



Re-identification of Individuals in Genomic Datasets Using Public Face Images

Venkatesaramani, Rajagopal, Malin, Bradley A., Vorobeychik, Yevgeniy

arXiv.org Artificial Intelligence

DNA sequencing is becoming increasingly commonplace, both in medical and direct-to-consumer settings. To promote discovery, collected genomic data is often de-identified and shared, either in public repositories, such as OpenSNP, or with researchers through access-controlled repositories. However, recent studies have suggested that genomic data can be effectively matched to high-resolution three-dimensional face images, which raises a concern that the increasingly ubiquitous public face images can be linked to shared genomic data, thereby re-identifying individuals in the genomic data. While these investigations illustrate the possibility of such an attack, they assume that those performing the linkage have access to extremely well-curated data. Given that this is unlikely to be the case in practice, it calls into question the pragmatic nature of the attack. As such, we systematically study this re-identification risk from two perspectives: first, we investigate how successful such linkage attacks can be when real face images are used, and second, we consider how we can empower individuals to have better control over the associated re-identification risk. We observe that the true risk of re-identification is likely substantially smaller for most individuals than prior literature suggests. In addition, we demonstrate that the addition of a small amount of carefully crafted noise to images can enable a controlled trade-off between re-identification success and the quality of shared images, with risk typically significantly lowered even with noise that is imperceptible to humans.
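The "carefully crafted noise" described above is, in spirit, an adversarial perturbation bounded so tightly that humans cannot see it. The paper's actual attack-specific noise is not given here; the following sketch only shows the generic budget mechanism such a defense relies on: clip any candidate perturbation to a small L-infinity ball, so image quality is preserved by construction while the matching model's input shifts.

```python
import numpy as np

def perturb_image(img, noise, eps=2 / 255):
    # img: float array in [0, 1]; noise: same-shape perturbation (in practice,
    # a direction crafted against the face-matching model).
    # Clipping to an L-infinity budget eps keeps the change imperceptible;
    # the final clip keeps pixel values valid.
    delta = np.clip(noise, -eps, eps)
    return np.clip(img + delta, 0.0, 1.0)

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))
noise = rng.normal(scale=0.05, size=img.shape)
adv = perturb_image(img, noise)
print(float(np.abs(adv - img).max()))  # never exceeds eps
```

Raising `eps` trades image quality for a larger reduction in re-identification success, which is exactly the controlled trade-off the abstract describes.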


How to Make Artificial Intelligence Less Biased « Machine Learning Times

#artificialintelligence

As artificial intelligence spreads into more areas of public and private life, one thing has become abundantly clear: it can be just as biased as we are. AI systems have been shown to be less accurate at identifying the faces of dark-skinned women, to give women lower credit-card limits than their husbands, and to be more likely to incorrectly predict that Black defendants will commit future crimes than white ones. Racial and gender bias has been found in job-search ads, software for predicting health risks and searches for images of CEOs. How could software designed to take the bias out of decision making, to be as objective as possible, produce these kinds of outcomes? After all, the purpose of artificial intelligence is to take millions of pieces of data and from them make predictions that are as error-free as possible. But as AI has become more pervasive--as companies and government agencies use it to decide who gets loans, who needs more health care, how to deploy police officers, and more--investigators have discovered that focusing only on making the final predictions as error-free as possible can mean that the errors aren't always distributed equally. Instead, predictions can often reflect and exaggerate the effects of past discrimination and prejudice. In other words, the more AI focused on getting only the big picture right, the more prone it was to being less accurate for certain segments of the population--in particular women and minorities. And the impact of this bias can be devastating for swaths of the population--for instance, denying loans to creditworthy women far more frequently than to creditworthy men.
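The article's central point, that a model optimized only for overall error can still fail specific groups, is easy to demonstrate with made-up numbers. The toy data below is hypothetical; it simply shows how a respectable-looking aggregate error rate can hide a very different error rate per group.

```python
import numpy as np

# Hypothetical labels and predictions for ten applicants in two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 1])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

errors = y_true != y_pred
overall_err = float(errors.mean())
per_group = {g: float(errors[group == g].mean()) for g in ("A", "B")}
print(overall_err)  # 0.4 -- looks tolerable in aggregate
print(per_group)    # group A: 0.0, group B: 0.8
```

An auditor who only checks `overall_err` would miss that every mistake lands on group B, which is why per-group error reporting is a minimum requirement for bias analysis.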


'POPULATION: ONE' removes VR barriers, replacing them with thrills

Washington Post - Technology News

While the full "POP: ONE" package delivers a genuinely enjoyable time (even the after-death screen is fun, enabling players to float around via an omnispective camera in 3D to watch the final fights unfold), there is room for improvement. Even after a tutorial and a few test runs in the game, it takes some time to find your footing. To that end, the all-too-familiar storm that shrinks the battlefield moves a tad too quickly for my liking, which also limits the time I can get my bearings or just enjoy looking around the map instead of running/fighting for my life. Related to this, I wish there were more control over walking speed. There is only one movement speed on the ground, with the game using its climbing/jumping/flying mechanics as a replacement for "sprinting."


On the Origin of Environments by Means of Natural Selection

AI Magazine

The field of adaptive robotics involves simulations and real-world implementations of robots that adapt to their environments. In this article, I introduce adaptive environmentics--the flip side of adaptive robotics--in which the environment adapts to the robot. Using both simulated and real robots, and applying techniques such as reinforcement learning, artificial neural networks, genetic algorithms, and fuzzy logic, researchers have obtained robots that display an amazing slew of behaviors and perform a multitude of tasks, including walking, pushing boxes, navigating, negotiating an obstacle course, playing ball, and foraging (Arkin 1998a). To cite one typical example of an ever-growing number, Yung and Ye (1999) recently wrote: "We have presented a fuzzy navigator that performs well in complex and unknown environments, using a rule base that is learned from a simple corridor-like environment. The principle of the navigator is built on the fusion of the obstacle avoidance and goal seeking behaviors, aided by an environment evaluator to tune the universe of discourse of the input sensor readings and enhance its adaptability. For this reason, the navigator has been able to learn extremely quickly in a simple environment, and then operate in an unknown environment, where exploration is not required at all." This quote typifies the underlying theme of adaptive robotics: have a robot adapt to a given environment. Given signifies neither that the environment is known nor that it is static; it means that the robot must adapt to the quirks and idiosyncrasies imposed by the environment--which, for its part, does nothing at all to accommodate the puffing robot. 
This fundamental principle of adaptive robotics--the environment's unyielding nature--is repealed in this article. Dubbed adaptive environmentics, the basic idea is to create scenarios that are mirror images of those found in adaptive robotics: The environment adapts to a given robot. I hasten to say that in some cases, it is not possible to alter the environment, and in other cases, having the robot adapt is simply the underlying objective. Adaptive robotics has produced many interesting results based on these principles.


Qualitative Reasoning about Population and Community Ecology

AI Magazine

Traditional approaches to ecological modeling, based on mathematical equations, are hampered by the qualitative nature of ecological knowledge. In this article, we demonstrate that qualitative reasoning provides alternative and productive ways for ecologists to develop, organize, and implement models. We present a qualitative theory of population dynamics and use this theory to capture and simulate commonsense theories about population and community ecology. Advantages of this approach include the possibility of deriving relevant conclusions about ecological systems without numeric data; a compositional approach that enables the reusability of models representing partial behavior; the use of a rich vocabulary describing objects, situations, relations, and mechanisms of change; and the capability to provide causal interpretations of system behavior. A number of textbooks published recently (for example, Haefner [1996]; Jørgensen and Bendoricchio [2001]) show that ecological modeling is almost synonymous with mathematical model building.
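The abstract's claim that conclusions can be derived "without numeric data" rests on reasoning over qualitative values rather than magnitudes. The article's actual formalism is not reproduced here; the sketch below assumes a deliberately minimal encoding, where quantities carry only a sign and a population's trend is the qualitative sum of an inflow (births) and a negated outflow (deaths).

```python
def qsum(a, b):
    # Qualitative addition over signs {-1, 0, +1}. When the signs oppose,
    # the result is ambiguous without magnitudes, reported as None.
    if a == 0:
        return b
    if b == 0:
        return a
    if a == b:
        return a
    return None  # ambiguous: could grow, shrink, or stay level

def population_trend(births_sign, deaths_sign):
    # d(population)/dt ~ births - deaths, reasoned purely over signs.
    return qsum(births_sign, -deaths_sign)

print(population_trend(+1, 0))   # +1: growing
print(population_trend(0, +1))   # -1: declining
print(population_trend(+1, +1))  # None: outcome depends on magnitudes
```

The `None` branch is the interesting one: a qualitative simulator must fork the behavior tree there, which is how these models expose all causally possible futures instead of a single numeric trajectory.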


Optimal Crop Selection Using Multiobjective Evolutionary Algorithms

AI Magazine

Soil characteristics are extremely important when determining yield potential. Fertilization and liming are commonly used to adapt soils to the nutritional requirements of the crops to be cultivated. Planting the crop that will best fit the soil characteristics is an interesting alternative that minimizes the need for soil treatment, reducing costs and potential environmental damage. In addition, farmers usually look for investments that offer the greatest potential earnings with the least possible risk. Given the number of competing objectives to be considered, the crop-selection problem may be difficult to solve using traditional tools.
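The tension the abstract describes, greatest earnings versus least risk, is exactly what multiobjective evolutionary algorithms resolve by returning a Pareto front rather than a single answer. The crops and their (profit, risk) figures below are invented for illustration; the dominance filter itself is the standard building block of such algorithms.

```python
# Hypothetical (profit, risk) pairs per crop: maximize profit, minimize risk.
crops = {
    "soybean": (900, 0.40),
    "maize":   (700, 0.25),
    "wheat":   (500, 0.10),
    "cotton":  (650, 0.45),
}

def dominates(a, b):
    # a dominates b if a is no worse in both objectives and strictly
    # better in at least one.
    (pa, ra), (pb, rb) = a, b
    return pa >= pb and ra <= rb and (pa > pb or ra < rb)

pareto = {name for name, obj in crops.items()
          if not any(dominates(other, obj)
                     for o_name, other in crops.items() if o_name != name)}
print(sorted(pareto))  # cotton drops out: soybean earns more at lower risk
```

Everything on the front is a defensible choice; picking among soybean, maize and wheat then becomes a question of the farmer's risk appetite, not of the algorithm.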