RFK Jr.'s Health Department Is Pondering a National Men's Health Initiative

WIRED

At an FDA discussion of testosterone replacement therapy, a top official called for special health centers to address a "men's health crisis." Others called for easing men's access to hormones. The US Department of Health and Human Services is considering launching a federal men's health initiative, a source at the agency tells WIRED. Brian Christine, who will be sworn in on December 12 as assistant secretary for health at HHS and head of the United States Public Health Service Commissioned Corps, called for such an effort Wednesday during a Food and Drug Administration panel on testosterone replacement therapy (TRT) for men. A spokesperson for HHS declined to comment.


Saved from the shredder, Alan Turing's papers sell for $627,000

Popular Science

A trove of forgotten papers penned by famed World War II codebreaker Alan Turing has sold for the record-setting price of $627,000. But the June 17 auction almost never happened: at one point, the long-lost archival materials from the father of modern computer science were nearly pulverized by a paper shredder. Alan Turing was many things during his brief and ultimately tragic life: renowned mathematician, computer theorist, marathon runner, philosopher, and an invaluable codebreaker.


'Meta has stolen books': authors to protest in London against AI trained using 'shadow library'

The Guardian

Novelists Kate Mosse and Tracy Chevalier, as well as poet and former Royal Society of Literature chair Daljit Nagra, will be among those in attendance outside the company's King's Cross office. Protesters will meet at Granary Square at 1.30pm, and a letter to Meta from the Society of Authors (SoA) will be hand-delivered at 1.45pm. It will also be sent to Meta headquarters in the US. Earlier this year, a US court filing alleged that Meta CEO Mark Zuckerberg approved the company's use of a notorious "shadow library", LibGen, which contains more than 7.5 million books. Last month, the Atlantic published a searchable database of the titles contained in LibGen, through which many authors discovered their works may have been used to train Meta's AI models.


British authors want Meta to answer for alleged copyright infringement

Engadget

A March 20 article in The Atlantic served as the letter's impetus. It reported that Meta had used LibGen, a pirated collection of over 7.5 million books, to train its AI models. Anyone who has been on the internet over the last few weeks has likely seen videos of distraught authors learning that their work is available in the database (and was potentially used by Meta without their permission). A lawsuit in the US alleges that Meta CEO Mark Zuckerberg approved the use of LibGen's data to train the company's AI. The lawsuit's plaintiffs include writers Sarah Silverman and Ta-Nehisi Coates.


Making the unmodulated pyramid wavefront sensor smart II. First on-sky demonstration of extreme adaptive optics with deep learning

Landman, R., Haffert, S. Y., Long, J. D., Males, J. R., Close, L. M., Foster, W. B., Van Gorkom, K., Guyon, O., Hedglen, A. D., Johnson, P. T., Kautz, M. Y., Kueny, J. K., Li, J., Liberman, J., Lumbres, J., McEwen, E. A., McLeod, A., Schatz, L., Tonucci, E., Twitchell, K.

arXiv.org Artificial Intelligence

Pyramid wavefront sensors (PWFSs) are the preferred choice for current and future extreme adaptive optics (XAO) systems. Almost all instruments use the PWFS in its modulated form to mitigate its limited linearity range. However, this modulation comes at the cost of reduced sensitivity, blindness to petal-piston modes, and a limit on the sensor's ability to operate at high speeds. There is therefore strong interest in using the PWFS without modulation, which can be enabled with nonlinear reconstructors. Here, we present the first on-sky demonstration of XAO with an unmodulated PWFS using a nonlinear reconstructor based on convolutional neural networks. We discuss the real-time implementation on the Magellan Adaptive Optics eXtreme (MagAO-X) instrument using the optimized TensorRT framework and show that inference is fast enough to run the control loop at >2 kHz frequencies. Our on-sky results demonstrate successful closed-loop operation using a model calibrated with internal source data that delivers stable and robust correction under varying conditions. Performance analysis reveals that our smart PWFS achieves nearly the same Strehl ratio as the highly optimized modulated PWFS under favorable conditions on bright stars. Notably, we observe an improvement in performance on a fainter star under the influence of strong winds. These findings confirm the feasibility of using the PWFS in its unmodulated form and highlight its potential for next-generation instruments. Future efforts will focus on achieving even higher control loop frequencies (>3 kHz), optimizing the calibration procedures, and testing performance on fainter stars, where the unmodulated PWFS is expected to gain the most over its modulated counterpart.
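The core idea is to replace the linear reconstructor with a CNN that maps raw unmodulated-PWFS frames directly to wavefront modal coefficients. As a rough illustration only, here is a minimal PyTorch sketch: the layer sizes, 64x64 input resolution, and 400-mode output are invented for the example and are not the MagAO-X network, which the authors export to TensorRT to reach >2 kHz inference.

```python
import torch
import torch.nn as nn

class PWFSReconstructor(nn.Module):
    """Toy CNN mapping an unmodulated-PWFS pupil image to modal
    coefficients. All sizes are illustrative guesses, not the
    architecture described in the paper."""
    def __init__(self, n_modes: int = 400):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),  # collapse to a fixed 8x8 grid
        )
        self.head = nn.Linear(64 * 8 * 8, n_modes)

    def forward(self, pwfs_image: torch.Tensor) -> torch.Tensor:
        x = self.features(pwfs_image)
        return self.head(x.flatten(1))

# A >2 kHz loop leaves under 0.5 ms per frame, which is why the paper
# uses TensorRT; here we only run a single eager-mode inference.
model = PWFSReconstructor().eval()
with torch.no_grad():
    coeffs = model(torch.randn(1, 1, 64, 64))  # fake 64x64 pupil frame
print(coeffs.shape)  # torch.Size([1, 400])
```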


Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? A Case Study on Vaccine Hesitancy

Hou, Abe Bohan, Du, Hongru, Wang, Yichen, Zhang, Jingyu, Wang, Zixiao, Liang, Paul Pu, Khashabi, Daniel, Gardner, Lauren, He, Tianxing

arXiv.org Artificial Intelligence

Can we simulate a sandbox society with generative agents to model human behavior, thereby reducing the over-reliance on real human trials for assessing public policies? In this work, we investigate the feasibility of simulating health-related decision-making, using vaccine hesitancy, defined as the delay in acceptance or refusal of vaccines despite the availability of vaccination services (MacDonald, 2015), as a case study. To this end, we introduce the VacSim framework with 100 generative agents powered by Large Language Models (LLMs). VacSim simulates vaccine policy outcomes in three steps: 1) instantiate a population of agents with demographics based on census data; 2) connect the agents via a social network and model vaccine attitudes as a function of social dynamics and disease-related information; 3) design and evaluate public health interventions aimed at mitigating vaccine hesitancy. To align with real-world results, we also introduce simulation warmup and attitude modulation to adjust agents' attitudes. We propose a series of evaluations to assess the reliability of LLM simulations. Experiments indicate that models like Llama and Qwen can simulate aspects of human behavior but also highlight real-world alignment challenges, such as responses inconsistent with demographic profiles. This early exploration of LLM-driven simulation is not meant to serve as definitive policy guidance; instead, it is a call to action to examine social simulation for policy development.
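To make the three-step loop concrete, here is a highly simplified, hypothetical skeleton in Python. The demographics, the small-world network, the attitude-update rule, and the stubbed LLM call are all assumptions made for illustration; they are not the authors' VacSim code.

```python
import random
import networkx as nx

def query_llm(prompt: str) -> float:
    """Stand-in for a real LLM call (e.g., Llama or Qwen): returns an
    attitude shift in [-1, 1]. Deterministic per prompt for this demo."""
    return random.Random(hash(prompt)).uniform(-1.0, 1.0)

def run_vacsim(n_agents: int = 100, steps: int = 10) -> list[float]:
    # Step 1: instantiate agents with toy census-like demographics.
    graph = nx.watts_strogatz_graph(n_agents, k=6, p=0.1)
    for node in graph.nodes:
        graph.nodes[node]["age"] = random.randint(18, 90)
        graph.nodes[node]["attitude"] = random.uniform(0.0, 1.0)

    # Steps 2-3: attitudes evolve via neighbors plus an intervention
    # message; the 0.05 step size is an arbitrary modeling choice.
    for _ in range(steps):
        for node in graph.nodes:
            data = graph.nodes[node]
            peers = list(graph.neighbors(node))
            peer_mean = sum(graph.nodes[m]["attitude"] for m in peers) / len(peers)
            prompt = (f"Agent aged {data['age']}, attitude {data['attitude']:.2f}, "
                      f"peer average {peer_mean:.2f}; public-health message shown.")
            shift = query_llm(prompt)
            data["attitude"] = min(1.0, max(0.0, data["attitude"] + 0.05 * shift))
    return [graph.nodes[n]["attitude"] for n in graph.nodes]

attitudes = run_vacsim()
print(f"mean vaccine attitude after simulation: {sum(attitudes) / len(attitudes):.3f}")
```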


Interpretable Visualizations of Data Spaces for Classification Problems

Jorgensen, Christian, Lin, Arthur Y., Cersonsky, Rose K.

arXiv.org Machine Learning

How do classification models "see" our data? Given their success in delineating behaviors, there must be some lens through which the boundary between classes is easy to see; however, our current set of visualization techniques makes finding that lens difficult. In this work, we propose a hybrid supervised-unsupervised technique distinctly suited to visualizing the decision boundaries determined by classification models. The method provides a human-interpretable map that can be analyzed qualitatively and quantitatively, which we demonstrate by visualizing and interpreting a decision boundary for chemical neurotoxicity. While we discuss this method in the context of chemistry-driven problems, it can be generalized across subfields for "unboxing" the operations of machine-learning classification models.
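The abstract does not specify the construction, so as a generic illustration of what a hybrid supervised-unsupervised map can look like, the sketch below blends an unsupervised PCA embedding with a supervised LDA axis using scikit-learn. The mixing scheme, weight, and synthetic dataset are assumptions for illustration, not the paper's technique.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for a labeled chemical dataset.
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           n_classes=2, random_state=0)

# Unsupervised view: preserves overall data structure.
pca_coords = PCA(n_components=2).fit_transform(X)
# Supervised view: the single LDA axis that best separates the classes.
lda_coord = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)

alpha = 0.5  # mixing weight between unsupervised and supervised views
hybrid = np.column_stack([
    (1 - alpha) * pca_coords[:, 0] + alpha * lda_coord[:, 0],
    pca_coords[:, 1],
])
print(hybrid.shape)  # (300, 2): a 2D map whose x-axis tracks the class boundary
```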


Responsible Artificial Intelligence Systems: A Roadmap to Society's Trust through Trustworthy AI, Auditability, Accountability, and Governance

Herrera-Poyatos, Andrés, Del Ser, Javier, de Prado, Marcos López, Wang, Fei-Yue, Herrera-Viedma, Enrique, Herrera, Francisco

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has matured as a technology, necessitating the development of responsibility frameworks that are fair, inclusive, trustworthy, safe and secure, transparent, and accountable. By establishing such frameworks, we can harness the full potential of AI while mitigating its risks, particularly in high-risk scenarios. This requires the design of responsible AI systems based on trustworthy AI technologies and ethical principles, with the aim of ensuring auditability and accountability throughout their design, development, and deployment, adhering to domain-specific regulations and standards. This paper explores the concept of a responsible AI system from a holistic perspective, encompassing four key dimensions: 1) regulatory context; 2) trustworthy AI technology, along with standardization and assessments; 3) auditability and accountability; and 4) AI governance. The aim of this paper is twofold. First, we analyze these four dimensions and their interconnections in the form of an analysis and overview. Second, we propose a roadmap for the design of responsible AI systems that can gain society's trust. To achieve this trustworthiness, the paper also fosters interdisciplinary discussion of the ethical, legal, social, economic, and cultural aspects of AI from a global governance perspective. Finally, we reflect on the current state of the field and the aspects that need to be developed in the near future, distilled into ten lessons learned.


The Right to AI

Mushkani, Rashid, Berard, Hugo, Cohen, Allison, Koeski, Shin

arXiv.org Artificial Intelligence

This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre's concept of the Right to the City, we reconceptualize AI as societal infrastructure rather than merely a product of expert design. We critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight, and argue that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. The paper asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein's Ladder of Citizen Participation and analyzing nine case studies, it develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future, and it offers recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy.