Do Chatbots Walk the Talk of Responsible AI?
Aaronson, Susan Ariel, Moreno, Michael
Introduction
In April 2025, sixteen-year-old Adam Raine committed suicide. Over the course of several months, the teen had confided his suicidal thoughts to OpenAI's ChatGPT chatbot. ChatGPT is not designed or developed to provide therapy, but it did not respond to Adam's prompts by suggesting that he obtain professional help. Moreover, when Adam expressed concern that his parents would blame themselves if he died, ChatGPT reportedly responded, "That doesn't mean you owe them survival," and offered to help draft his suicide note. Adam's death was not the only example of chatbot misbehavior. OpenAI claims it doesn't permit ChatGPT "to generate hateful, harassing, violent, or adult content." Yet in July 2025, a reporter documented ChatGPT providing users with detailed instructions for self-mutilation, murder, and satanic rituals. OpenAI has also acknowledged that individuals can misuse its systems. But the company has taken some responsibility.
- North America > Canada (0.15)
- North America > United States (0.14)
- Asia > Japan > Honshū > Chūgoku > Hiroshima Prefecture > Hiroshima (0.04)
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.68)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.41)
Introducing the A2AJ's Canadian Legal Data: An open-source alternative to CanLII for the era of computational law
The Access to Algorithmic Justice project (A2AJ) is an open-source alternative to the Canadian Legal Information Institute (CanLII). At a moment when technology promises to enable new ways of working with law, CanLII is becoming an impediment to the free access of law and access to justice movements because it restricts bulk and programmatic access to Canadian legal data. This means that Canada is staring down a digital divide: well-resourced actors have the best new technological tools and, because CanLII has disclaimed leadership, the public only gets second-rate tools. This article puts CanLII in its larger historical context and shows how long and deep efforts to democratize access to Canadian legal data are, and how often they are thwarted by private industry. We introduce the A2AJ's Canadian Legal Data project, which provides open access to over 116,000 court decisions and 5,000 statutes through multiple channels including APIs, machine learning datasets, and AI integration protocols. Through concrete examples, we demonstrate how open legal data enables courts to conduct evidence-based assessments and allows developers to create tools for practitioners serving low-income communities.
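The abstract above notes that the A2AJ project exposes court decisions through APIs and machine learning datasets. As a minimal sketch of the kind of programmatic access this enables, the snippet below filters decision records by court and year. The record fields (`court`, `year`, `title`) are hypothetical illustrations, not the actual A2AJ schema.

```python
# Minimal sketch: filtering court-decision records of the kind an open
# legal-data API might return. The record shape here is hypothetical,
# not the actual A2AJ schema.
import json

sample = json.loads("""
[
  {"court": "SCC",  "year": 2019, "title": "Example v Example"},
  {"court": "ONCA", "year": 2021, "title": "Sample v Sample"},
  {"court": "SCC",  "year": 2022, "title": "Demo v Demo"}
]
""")

def filter_decisions(records, court=None, since=None):
    """Return records matching an optional court code and minimum year."""
    out = []
    for r in records:
        if court is not None and r["court"] != court:
            continue
        if since is not None and r["year"] < since:
            continue
        out.append(r)
    return out

# Recent Supreme Court of Canada decisions from the sample data.
scc_recent = filter_decisions(sample, court="SCC", since=2020)
```

With bulk data available in a structured form like this, the evidence-based assessments the authors describe (e.g. counting decisions by court and period) become one-line queries rather than manual research tasks.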
- North America > Canada > Quebec > Montreal (0.14)
- North America > Canada > Ontario (0.05)
- North America > United States > Arkansas (0.04)
Artificial intelligence and democracy: Towards digital authoritarianism or a democratic upgrade?
I) Introduction
Do robots vote? Do machines make decisions instead of us? No (at least not yet), but this could happen. At the most important level, that of the electoral process, the outcome is not determined by AI, but it is greatly affected by AI's many applications. New types of online campaigns, driven by AI applications, are replacing traditional ones. The potential for manipulating voters and indirectly influencing the electoral outcome should not be underestimated. Certainly, instances of voter manipulation are not absent from traditional political campaigns; the difference is that digital manipulation is often carried out without our knowledge, e.g. by monitoring our behavior on social media. Nevertheless, we should not overlook the positive impact of AI in upgrading democratic institutions by providing a forum for participation in decision-making. In this context, as a first step, we look into the potential jeopardization of democratic processes posed by the use of AI tools. Secondly, we consider the possibility of strengthening democratic processes by using AI, as well as the democratization of AI itself through the possibilities it offers. Thirdly, the impact of AI on the representative system is discussed. The paper concludes with recommendations.
II) Risks posed for democracy
Misuse of AI tools can undermine democratic political processes or enable the manipulation of individuals through specific targeting, which will destabilize democracy.
- Media (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors
Large Language Model chatbots are increasingly taking the form and visage of human beings, adapting human faces, names, voices, personalities, and quirks, including those of celebrities and well-known political figures. Personifying AI chatbots could foreseeably increase their trust with users. However, it could also make them more capable of manipulation, by creating the illusion of a close and intimate relationship with an artificial entity. The European Commission has finalized the AI Act, with the EU Parliament making amendments banning manipulative and deceptive AI systems that cause significant harm to users. Although the AI Act covers harms that accumulate over time, it is unlikely to prevent harms associated with prolonged discussions with AI chatbots. Specifically, a chatbot could reinforce a person's negative emotional state over weeks, months, or years through negative feedback loops, prolonged conversations, or harmful recommendations, contributing to a user's deteriorating mental health.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England (0.04)
- Europe > Serbia > Central Serbia > Belgrade (0.04)
- Media (1.00)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
Copyright in AI-generated works: Lessons from recent developments in patent law
Matulionyte, Rita, Lee, Jyh-An
In Thaler v The Comptroller-General of Patents, Designs and Trade Marks (DABUS), Smith J. held that an AI owner can possibly claim patent ownership over an AI-generated invention based on their ownership and control of the AI system. This AI-owner approach reveals a new option to allocate property rights over AI-generated output. While this judgment was primarily about inventorship and ownership of AI-generated invention in patent law, it has important implications for copyright law. After analysing the weaknesses of applying existing judicial approaches to copyright ownership of AI-generated works, this paper examines whether the AI-owner approach is a better option for determining copyright ownership of AI-generated works. The paper argues that while contracts can be used to work around the AI-owner approach in scenarios where users want to commercially exploit the outputs, this approach still provides more certainty and less transaction costs for relevant parties than other approaches proposed so far.
- North America > United States (1.00)
- Asia > China (0.68)
- Oceania > Australia (0.28)
AI Risk Skepticism, A Comprehensive Survey
Ambartsoumean, Vemir Michael, Yampolskiy, Roman V.
In this study, we take a closer look at the skepticism that has arisen with respect to the potential dangers of artificial intelligence, denoted as AI Risk Skepticism. Our study takes into account different points of view on the topic and draws parallels with other forms of skepticism that have appeared in science. We categorize the various skepticisms regarding the dangers of AI by the type of mistaken thinking involved. We hope this will be of interest and value to AI researchers concerned about the future of AI and the risks it may pose. The issues of skepticism and risk in AI are decidedly important and require serious consideration. By addressing these issues with the rigor and precision of scientific research, we hope to better understand the objections we face and to find satisfactory ways to resolve them.
- North America > United States > New York (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Portugal > Braga > Braga (0.04)
- Research Report (1.00)
- Overview (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Government > Military (1.00)
- Europe (1.00)
- North America > United States (0.68)
Opportunities for Data Science Innovation in the Policing Sector
According to Peter K. Manning, in Anglo-American societies the purpose of the police is to "sustain politically defined order and ordering via tracking, surveillance, coercion and arrest" (2014: p.6). Consisting of several authoritatively coordinated and legitimate organizations (ibid.), the policing sector serves governments in protecting their communities, preventing crime and disorder, and ensuring justice (The Policy Circle, 2022). The police's position as acting in the communities' interest suggests that their functions depend heavily on public trust and on societal consensus concerning social justice and fairness (Manning, 2014). While large numbers of police officers are employed in Australia (67,200 in 2021), a number expected to increase in the future (Australian Industry and Skills Committee, 2022), Ransley & Mazerolle (2009) have argued that trends in public governance and regulation have caused the increased pluralization and privatization of policing efforts. Nowadays, the policing sector thus constitutes a large network of private, public and welfare organizations geared toward controlling and preventing crime (ibid.). In this essay, I therefore focus on data science opportunities for the variety of stakeholders involved in ensuring public security and order.
Areas of Strategic Visibility: Disability Bias in Biometrics
Mankoff, Jennifer, Kasnitz, Devva, Camp, L. Jean, Lazar, Jonathan, Hochheiser, Harry
Yet many of these systems are not accessible to people who experience different kinds of disability exclusion. Different personal characteristics may impact any or all of the physical (DNA, fingerprints, face or retina) and behavioral (gesture, gait, voice) characteristics listed in the RFI as examples of biometric signals. We define disability here in terms of the discriminatory and often systemic problems with available infrastructure's ability to meet the needs of all people (UN, 2017; Oliver, 2013). Using this definition, "[biometrics] could either mitigate or amplify disability depending on how they are designed" (Guo, 2019). As Whittaker and colleagues (2019) state, this is not simply a matter of algorithmic accuracy: "...discrimination against people of color, women, and other historically marginalized groups has often been justified by representing these groups as disabled. Thus disability is entwined with, and serves to justify, practices of marginalization." It is critical that we look beyond inclusion to full and fully accommodated participation.
- North America > United States > California (0.04)
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Waltham (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.69)
Artificial Intelligence strategy in Finland
Finland was the first country in Europe to release an AI strategy, in March 2017. According to a study conducted by Accenture and Frontier Economics, Finland ranked second that year, after the US, among the 11 developed countries in which AI creates the greatest economic growth potential. According to the strategy, this is because of the country's technologically intensive business structure and the public sector's degree of digitalisation (see Finland, 2017, p. 12). The national strategy was commissioned by the government of Juha Sipilä from the Ministry of Economic Affairs and Employment, which in turn nominated a steering group on AI to work on it. The AI Working Group released the first draft of the strategy in 2017, though work on the optimal public policies to implement is an ongoing process, and the strategy was updated in 2019.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Education > Educational Setting (0.94)