

Setting the AI Agenda -- Evidence from Sweden in the ChatGPT Era

Bruinsma, Bastiaan, Fredén, Annika, Hansson, Kajsa, Johansson, Moa, Kisić-Merino, Pasko, Saynova, Denitsa

arXiv.org Artificial Intelligence

This paper examines the development of the Artificial Intelligence (AI) meta-debate in Sweden before and after the release of ChatGPT. From the perspective of agenda-setting theory, we propose that an elite outside of party politics is leading the debate, i.e., that politicians have remained relatively silent in the face of this rapid development. We also suggest that the debate has become more substantive and risk-oriented in recent years. To investigate this claim, we draw on an original dataset of elite-level documents from the early 2010s to the present, consisting of op-eds published in several leading Swedish newspapers. A qualitative content analysis of these materials yields preliminary findings that support the expectation that an academic, rather than a political, elite is steering the debate.


AI Shows ExxonMobil Downplayed Its Role in Climate Change

WIRED

Between 1977 and 2014, 80 percent of ExxonMobil's internal research supported the idea that human activity was a contributor to climate change. But during that same period, 80 percent of the oil and gas provider's public statements instead expressed doubt about whether climate change was caused by humans, or even real in the first place. To draw this conclusion, Harvard researchers Geoffrey Supran and Naomi Oreskes used machine learning to review more than 200 internal documents, peer-reviewed research papers, and public statements from ExxonMobil. The newly released paper, "Rhetoric and frame analysis of ExxonMobil's climate change communications," exposes a decades-long pattern of public statements that sanitize the company's role in contributing to CO2 emissions. Oreskes and Supran used machine learning analysis to support two claims.



Clinical Risk Score for Predicting Recurrence Following a Cerebral Ischemic Event

#artificialintelligence

Introduction: Recurrent stroke carries a higher rate of death and disability. A number of risk scores have been developed to predict short-term and long-term risk of stroke following an initial episode of stroke or transient ischemic attack (TIA), but their clinical utility is limited. In this paper, we review different risk score models and discuss their validity and clinical utility. Methods: The PubMed bibliographic database was searched for original research articles on risk scores for stroke following an initial episode of stroke or TIA. The models were evaluated by examining their internal and external validation processes, statistical methodology, study power, and accuracy metrics such as sensitivity and specificity.


Sweetch's AI-driven app claims to reduce risk of developing diabetes

#artificialintelligence

While the world tries to decide whether artificial intelligence is here to help us or hurt us, AI is quietly infiltrating our daily lives -- from streaming recommendations to image recognition. And in health technology, AI is making a real difference to people across the world, saving lives in a multitude of ways. Today Sweetch -- a mobile health app that helps prevent diabetes and improve outcomes for people with diabetes by encouraging long-term behavioral change -- revealed the results of its clinical trial conducted at Johns Hopkins University. Directed by the university's division of endocrinology, diabetes, and metabolism, the study shows that using Sweetch significantly lowered A1C levels, a biomarker for blood sugar used in diabetes care. The app was also shown to increase physical activity and reduce weight for patients with early-stage diabetes.


Experts are worried that advancements in AI could threaten humanity

#artificialintelligence

Oren Etzioni, a well-known AI researcher, complains about news coverage of potential long-term risks arising from future success in AI research (see "No, Experts Don't Think Superintelligent AI is a Threat to Humanity"). After pointing the finger squarely at Oxford philosopher Nick Bostrom and his recent book, Superintelligence, Etzioni complains that Bostrom's "main source of data on the advent of human-level intelligence" consists of surveys of the opinions of AI researchers. He then surveys the opinions of AI researchers himself, arguing that his results refute Bostrom's. It's important to understand that Etzioni is not even addressing the reason Superintelligence has had the impact he decries: its clear explanation of why superintelligent AI may have arbitrarily negative consequences and why it's important to begin addressing the issue well in advance. Bostrom does not base his case on predictions that superhuman AI systems are imminent.