Don't Kill the Baby: The Case for AI in Arbitration
Since the introduction of Generative AI (GenAI) in 2022, its ability to simulate human intelligence and generate content has sparked both enthusiasm and concern. While much of the criticism focuses on AI's potential to perpetuate bias, create emotional dissonance, displace jobs, and raise ethical questions, these concerns often overlook the practical benefits of AI, particularly in legal contexts. This article examines the integration of AI into arbitration, arguing that the Federal Arbitration Act (FAA) allows parties to contractually choose AI-driven arbitration, despite traditional reservations. The article makes three key contributions: (1) It shifts the focus from debates over AI's personhood to the practical aspects of incorporating AI into arbitration, asserting that AI can effectively serve as an arbitrator if both parties agree; (2) It positions arbitration as an ideal starting point for broader AI adoption in the legal field, given its flexibility and the autonomy it grants parties to define their standards of fairness; and (3) It outlines future research directions, emphasizing the importance of empirically comparing AI and human arbitration, which could lead to the development of distinct systems. By advocating for the use of AI in arbitration, this article underscores the importance of respecting contractual autonomy and creating an environment that allows AI's potential to be fully realized. Drawing on the insights of Judge Richard Posner, the article argues that the ethical obligations of AI in arbitration should be understood within the context of its technological strengths and the voluntary nature of arbitration agreements. Ultimately, it calls for a balanced, open-minded approach to AI in arbitration, recognizing its potential to enhance the efficiency, fairness, and flexibility of dispute resolution.
- Europe > Italy (0.27)
- Europe > France (0.04)
- North America > United States > Texas (0.04)
- Research Report (1.00)
- Overview (1.00)
Investigating the dissemination of STEM content on social media with computational tools
Oshinowo, Oluwamayokun, Delgado, Priscila, Fay, Meredith, Luna, C. Alessandra, Dissanayaka, Anjana, Jeltuhin, Rebecca, Myers, David R.
These authors contributed equally to this work. *Corresponding author.
Abstract: Social media platforms can quickly disseminate STEM content to diverse audiences, but how they operate can be opaque. We used open-source machine learning methods such as clustering, regression, and sentiment analysis to analyze more than 1,000 videos, and their associated metrics, from six social media STEM creators. Our data provide insights into how audiences generate interest signals (likes, bookmarks, comments, shares) and how those signals correlate with views, and they suggest that content from newer creators is disseminated differently. We also share insights on how to optimize dissemination by analyzing data available exclusively to content creators and by sentiment analysis of comments.
Introduction: Social media platforms such as Instagram, TikTok, and YouTube provide a new venue to promote STEM education, inspire the next generation of diverse scientists, and share knowledge that lowers barriers to academia (1-3). Unlike many existing venues, social media is broadly accessible and not limited to those with significant resources devoted to their education. Content can be quickly disseminated to large, diverse audiences of all ages and backgrounds (4).
- Education > Curriculum > Subject-Specific Education (0.48)
- Health & Medicine > Therapeutic Area (0.46)
Universities Are Making Ethics a Key Focus of Artificial Intelligence Research
These concerns have spread throughout the AI field, leading even large corporations such as Microsoft to develop internal guidelines for using this technology. In June, the company publicly shared its new "Responsible AI Standard" framework that is aimed at "keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability," according to a Microsoft blog post. As a result of these standards, the company phased out an emotion recognition tool from its AI facial analysis services following criticism that such software was discriminatory against marginalized groups and not proven to be scientifically accurate. Businesses are not the only organizations looking to solve ethical questions about AI. Multiple colleges and universities are also creating research centers, educational programming, and other efforts that will help develop a new generation of scientists and engineers who are dedicated to using this form of technology to better society.
- Education (0.72)
- Information Technology (0.50)
- Health & Medicine (0.49)
A New Vision for A.I.
Anant Madabhushi was ready for the next step in his career as a researcher and educator. He was already widely recognized as a pioneer in the emerging field of machine learning--specifically for medical imaging and computer-assisted diagnoses. He had authored more than 450 peer-reviewed publications and held more than 100 patents in AI, radiomics, computational pathology, and computer vision. He had even seen his name printed in major consumer publications such as Business Insider and Scientific American that spread the word about how algorithms he created have greatly improved the accuracy of diagnosing cancer. But Madabhushi, a professor of biomedical engineering at Case Western Reserve University, wanted more. He wanted to break out of the lab and share his specialized knowledge of AI with doctors and clinicians who could put it to use in health care systems and hospitals.
- Health & Medicine > Diagnostic Medicine (0.92)
- Health & Medicine > Health Care Technology (0.60)
- Health & Medicine > Therapeutic Area > Oncology (0.37)
Universal Online Learning with Unbounded Losses: Memory Is All You Need
Blanchard, Moise, Cosson, Romain, Hanneke, Steve
We resolve an open problem of Hanneke on the subject of universally consistent online learning with non-i.i.d. processes and unbounded losses. The notion of an optimistically universal learning rule was defined by Hanneke in an effort to study learning theory under minimal assumptions. A given learning rule is said to be optimistically universal if it achieves a low long-run average loss whenever the data-generating process makes this goal achievable by some learning rule. Hanneke posed as an open problem whether, for every unbounded loss, the family of processes admitting universal learning is precisely the family of processes having a finite number of distinct values almost surely. In this paper, we completely resolve this problem, showing that this is indeed the case. As a consequence, this also yields a dramatically simpler formulation of an optimistically universal learning rule for any unbounded loss: the simple memorization rule already suffices. Our proof relies on constructing random measurable partitions of the instance space and could be of independent interest for solving other open questions. We extend the results to the non-realizable setting, thereby providing an optimistically universal Bayes-consistent learning rule.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > New York (0.04)
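The abstract above names the "simple memorization rule" as the learning rule that suffices. A minimal sketch of that rule, heavily simplified for illustration (hashable instances, a toy stream with finitely many distinct values, which is the regime the paper characterizes):

```python
# Sketch of the memorization rule named in the abstract: predict the label
# previously observed for an instance if it has been seen before,
# otherwise fall back to a default prediction.
class MemorizationLearner:
    def __init__(self, default=None):
        self.memory = {}       # instance -> last observed label
        self.default = default

    def predict(self, x):
        # Return the memorized label, or the default for an unseen x.
        return self.memory.get(x, self.default)

    def update(self, x, y):
        # Store (overwriting) the label observed for x.
        self.memory[x] = y

# Online loop on a toy stream with finitely many distinct instances.
learner = MemorizationLearner(default=0)
stream = [("a", 1), ("b", 2), ("a", 1), ("c", 3), ("b", 2)]
mistakes = 0
for x, y in stream:
    if learner.predict(x) != y:
        mistakes += 1
    learner.update(x, y)
print(mistakes)  # only the first appearance of each instance is a mistake
```

Because the stream takes finitely many distinct values, the long-run average loss of this rule vanishes: each instance can be mispredicted at most once (in this noiseless toy).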
AI Can Predict Sepsis to Save Lives
This bot sees what doctors sometimes cannot. Emory University researchers have created a "Sepsis Expert" algorithm that works in real time to predict the onset of sepsis, the deadly condition that often takes hold in healthcare settings. Banking on information from 31,000 patients admitted to 2 hospitals and data on 52,000 intensive care unit (ICU) patients from a public database, the researchers used machine learning to build an artificial intelligence (AI) technology that they hope will save lives. But, until now, that knowledge has not translated to insights for the individual. "What we lack is 'situation awareness,' which is perceiving data, comprehending data, and projecting those data into the future to see whether there is an evolving threat to the patient," says Timothy George Buchman, PhD, MD, director of Emory's critical care center and co-author of a study on the tech.
Aiming to Know You Better Perhaps Makes Me a More Engaging Dialogue Partner
There have been several attempts to define a plausible motivation for a chit-chat dialogue agent that can lead to engaging conversations. In this work, we explore a new direction in which the agent specifically focuses on discovering information about its interlocutor. We formalize this approach by defining a quantitative metric and propose an algorithm for the agent to maximize it. We validate the idea with a human evaluation in which our system outperforms various baselines, and we demonstrate that the metric indeed correlates with human judgments of engagingness.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Russia (0.05)
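The abstract above does not spell out its metric, so the following is a purely hypothetical toy (not the paper's formal definition): score a conversation by the number of distinct interlocutor attributes discovered, and greedily ask the question expected to reveal a new one.

```python
# Hypothetical toy illustration of an "information discovery" objective;
# the real metric and algorithm are defined in the paper, not here.
def discovery_score(known_attributes):
    """Number of distinct interlocutor attributes discovered so far."""
    return len(set(known_attributes))

def pick_question(candidate_questions, known_attributes):
    """Greedily choose a question targeting an attribute not yet known."""
    for question, target_attribute in candidate_questions:
        if target_attribute not in known_attributes:
            return question
    return candidate_questions[0][0]  # nothing new to learn; ask anything

known = {"hometown"}
candidates = [("Where are you from?", "hometown"),
              ("What do you do for fun?", "hobby")]
print(pick_question(candidates, known))  # asks about the unknown attribute
```

A real agent would replace the greedy lookup with a learned estimate of which utterance most increases the discovery objective.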