The Era Of Autonomous Army Bots is Here

#artificialintelligence

When the average person thinks about AI and robots, what often comes to mind is a post-apocalyptic vision of scary, super-intelligent machines taking over the world, or even the universe. The Terminator movie series is a good reflection of this fear of AI, with the core technology behind the intelligent machines powered by Skynet referred to as an "artificial neural network-based conscious group mind and artificial general superintelligence system". However, the AI of today looks nothing like this worrisome science-fiction representation. Rather, AI is performing many tedious and manual tasks and providing value in areas ranging from recognition and conversation systems to predictive analytics, pattern matching, and autonomous systems. In that context, the fact that governments and military organizations are investing heavily in AI should be less a cause for concern than a source of intrigue. The ways machine learning and AI are being implemented are at once mundane, in that they help humans do their existing tasks better, and genuinely interesting, in that machines are being made more intelligent in order to give humans a better understanding of, and more control over, the environment around them.


Learning what they think vs. learning what they do: The micro-foundations of vicarious learning

arXiv.org Artificial Intelligence

Vicarious learning is a vital component of organizational learning. We theorize and model two fundamental processes underlying vicarious learning: observation of actions (learning what they do) vs. belief sharing (learning what they think). The analysis of our model points to three key insights. First, vicarious learning through either process is beneficial even when no agent in a system of vicarious learners begins with a knowledge advantage. Second, vicarious learning through belief sharing is not universally better than mutual observation of actions and outcomes. Specifically, enabling mutual observability of actions and outcomes is superior to sharing of beliefs when the task environment features few alternatives with large differences in their value and there are no time pressures. Third, symmetry in vicarious learning in fact adversely affects belief sharing but improves observational learning. All three results are shown to be the consequence of how vicarious learning affects self-confirming biased beliefs.
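
The distinction between the two processes can be made concrete with a toy simulation. The sketch below is a minimal illustration of the idea, not the authors' formal model: two agents repeatedly choose among a few alternatives with noisy payoffs; in "observe" mode each agent also updates on the other's observed action and outcome, while in "share" mode the agents average their belief vectors. The payoff values, noise level, update rule and horizon are all illustrative assumptions.

```python
"""Toy illustration (not the paper's formal model) of two vicarious-learning
processes: observing a peer's actions and outcomes vs. sharing beliefs."""
import random

N_ALTERNATIVES = 3
TRUE_VALUES = [0.2, 0.5, 0.9]   # assumed payoffs of the alternatives
NOISE = 0.3                     # payoff noise (assumed)
ALPHA = 0.5                     # belief-update rate (assumed)
ROUNDS = 50

def payoff(arm):
    return TRUE_VALUES[arm] + random.gauss(0, NOISE)

def choose(beliefs):
    # greedy choice with random tie-breaking
    best = max(beliefs)
    return random.choice([i for i, b in enumerate(beliefs) if b == best])

def run(mode, seed=0):
    random.seed(seed)
    beliefs = [[0.5] * N_ALTERNATIVES for _ in range(2)]  # two agents, flat priors
    total = 0.0
    for _ in range(ROUNDS):
        acts = [choose(b) for b in beliefs]
        outcomes = [payoff(a) for a in acts]
        total += sum(TRUE_VALUES[a] for a in acts)
        for i in (0, 1):
            j = 1 - i
            # learning from one's own experience
            beliefs[i][acts[i]] += ALPHA * (outcomes[i] - beliefs[i][acts[i]])
            if mode == "observe":
                # "learning what they do": update on the peer's action and outcome
                beliefs[i][acts[j]] += ALPHA * (outcomes[j] - beliefs[i][acts[j]])
        if mode == "share":
            # "learning what they think": the agents average their belief vectors
            shared = [(x + y) / 2 for x, y in zip(beliefs[0], beliefs[1])]
            beliefs = [shared[:], shared[:]]
    return total / (2 * ROUNDS)

for mode in ("observe", "share"):
    print(mode, round(run(mode), 3))
```

Which mode earns more in such a toy depends on the assumed payoff spread, noise and horizon; that contingency, rather than a universal ranking of the two processes, is what the abstract's second result is about.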


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
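
The prompt-constraining trick described here lends itself to a short sketch: instead of asking an open question, the prompt imitates the desired format and seeds the first few words of the target output. The `complete()` function below is only a placeholder for whatever text-completion API is in use, and the exact prompt wording is an assumption about what such a prompt could look like, not a prompt taken verbatim from the article.

```python
"""Sketch of constraining a completion model by imitating the target output."""

def complete(prompt: str) -> str:
    """Placeholder for a call to a text-completion model (not a real client)."""
    raise NotImplementedError("wire this up to your completion API of choice")

passage = "..."  # the text to be summarized

# Weak prompt: the model is free to pivot into other modes of completion.
weak_prompt = "Summarize this passage:\n" + passage + "\n"

# Stronger prompt: imitate the target format and write the first few words of
# the desired output, so the model is constrained to continue in that register.
strong_prompt = (
    "My second grader asked me what this passage means:\n"
    + '"""\n' + passage + '\n"""\n'
    + "I rephrased it for him, in plain language a second grader can understand:\n"
    + '"""\nIn simple terms, the passage says that'
)

# summary = "In simple terms, the passage says that" + complete(strong_prompt)
```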


Ethical AI and the importance of guidelines for algorithms -- explained

#artificialintelligence

In October, Amazon had to discontinue an artificial intelligence–powered recruiting tool after it discovered the system was biased against female applicants. In 2016, a ProPublica investigation revealed a recidivism assessment tool that used machine learning was biased against black defendants. More recently, the US Department of Housing and Urban Development sued Facebook because its ad-serving algorithms enabled advertisers to discriminate based on characteristics like gender and race. And Google refrained from renewing its AI contract with the Department of Defense after employees raised ethical concerns. Those are just a few of the many ethical controversies surrounding artificial intelligence algorithms in the past few years.


Artificial intelligence in space

arXiv.org Artificial Intelligence

In the coming years, space activities are expected to undergo a radical transformation with the emergence of new satellite systems and services that incorporate artificial intelligence and machine learning, understood here as covering a wide range of innovations, from autonomous objects with their own decision-making power to increasingly sophisticated services exploiting very large volumes of information from space. This chapter identifies some of the legal and ethical challenges linked to their use. These challenges call for solutions that the international treaties in force are not sufficient to determine and implement. For this reason, a legal methodology must be developed that makes it possible to link intelligent systems and services to a system of rules applicable to them. The chapter also discusses existing legal AI-based tools suitable for making space law actionable, interoperable and machine readable for future compliance tools.


Explainable Artificial Intelligence: a Systematic Review

arXiv.org Artificial Intelligence

Growing interest in the explainability of machine learning (ML) models has led to the development of a plethora of domain-dependent and context-specific methods for interpreting ML models and forming explanations for humans. Unfortunately, this trend is far from over, and the abundant knowledge in the field is scattered and needs organisation. The goal of this article is to systematically review research works in the field of XAI and to try to define some boundaries in the field. From several hundred research articles focused on the concept of explainability, about 350 have been considered for review using the following search methodology. In a first phase, Google Scholar was queried to find papers related to "explainable artificial intelligence", "explainable machine learning" and "interpretable machine learning". Subsequently, the bibliographic sections of these articles were thoroughly examined to retrieve further relevant scientific studies. The first noticeable thing, as shown in figure 2 (a), is the distribution of the publication dates of the selected research articles: sporadic in the 70s and 80s, receiving preliminary attention in the 90s, showing rising interest in the 2000s and becoming a recognised body of knowledge after 2010. The first research concerned the development of an explanation-based system and its integration in a computer program designed to help doctors make diagnoses [3]. Some of the more recent papers focus on the clustering of methods for explainability, motivating the need to organise the XAI literature [4, 5, 6].
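
The two-phase search strategy amounts to backward snowballing over the citation graph: keyword queries produce seed papers, and their bibliographies are followed to collect further studies. A schematic sketch of that procedure is below; the citation data is a made-up in-memory dictionary, not a real Google Scholar client, and the paper names are placeholders.

```python
"""Schematic of a two-phase literature search: keyword-derived seeds, then
backward snowballing through bibliographies. All data here is invented."""

# paper -> list of papers cited in its bibliography (illustrative only)
REFERENCES = {
    "seed-A": ["p1", "p2"],
    "seed-B": ["p2", "p3"],
    "p1": ["p4"],
    "p2": [],
    "p3": ["p1", "p5"],
    "p4": [],
    "p5": [],
}

def snowball(seeds, references, max_rounds=2):
    """Collect the seeds plus every paper reachable through their bibliographies."""
    collected = set(seeds)
    frontier = list(seeds)
    for _ in range(max_rounds):
        next_frontier = []
        for paper in frontier:
            for cited in references.get(paper, []):
                if cited not in collected:
                    collected.add(cited)
                    next_frontier.append(cited)
        frontier = next_frontier
    return collected

seeds = ["seed-A", "seed-B"]  # e.g. hits for "explainable artificial intelligence"
print(sorted(snowball(seeds, REFERENCES)))
```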


AI Research Considerations for Human Existential Safety (ARCHES)

arXiv.org Artificial Intelligence

Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called "prepotence", which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of contemporary research directions are then examined for their potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation, and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects.


Imposing Regulation on Advanced Algorithms

arXiv.org Artificial Intelligence

This book discusses the necessity and perhaps urgency for the regulation of algorithms on which new technologies rely; technologies that have the potential to re-shape human societies. From commerce and farming to medical care and education, it is difficult to find any aspect of our lives that will not be affected by these emerging technologies. At the same time, artificial intelligence, deep learning, machine learning, cognitive computing, blockchain, virtual reality and augmented reality, belong to the fields most likely to affect law and, in particular, administrative law. The book examines universally applicable patterns in administrative decisions and judicial rulings. First, similarities and divergence in behavior among the different cases are identified by analyzing parameters ranging from geographical location and administrative decisions to judicial reasoning and legal basis. As it turns out, in several of the cases presented, sources of general law, such as competition or labor law, are invoked as a legal basis, due to the lack of current specialized legislation. This book also investigates the role and significance of national and indeed supranational regulatory bodies for advanced algorithms and considers ENISA, an EU agency that focuses on network and information security, as an interesting candidate for a European regulator of advanced algorithms. Lastly, it discusses the involvement of representative institutions in algorithmic regulation.


Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

arXiv.org Artificial Intelligence

This article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement that aligns with the European Court of Justice's "gold standard." Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law. N.B. Abridged abstract
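
One common way to operationalise conditional demographic disparity is to compute demographic disparity within each stratum of a legitimate conditioning attribute and average the per-stratum values weighted by stratum size; the sketch below follows that formulation and may differ from the paper's exact operationalisation. The example records and the "dept" strata are invented.

```python
"""Illustrative computation of conditional demographic disparity (CDD).
One common formulation: demographic disparity per stratum, averaged with
stratum-size weights. The toy records below are invented."""
from collections import defaultdict

# (protected_group_member, positive_outcome, stratum) -- invented example data
records = [
    (True,  False, "dept_A"), (True,  True,  "dept_A"),
    (False, True,  "dept_A"), (False, True,  "dept_A"),
    (True,  False, "dept_B"), (False, False, "dept_B"),
    (False, True,  "dept_B"), (True,  True,  "dept_B"),
]

def demographic_disparity(rows):
    """Protected-group share among rejected minus its share among accepted."""
    rejected = [r for r in rows if not r[1]]
    accepted = [r for r in rows if r[1]]
    p_rej = sum(r[0] for r in rejected) / len(rejected) if rejected else 0.0
    p_acc = sum(r[0] for r in accepted) / len(accepted) if accepted else 0.0
    return p_rej - p_acc

def conditional_demographic_disparity(rows):
    """Stratum-size-weighted average of per-stratum demographic disparity."""
    strata = defaultdict(list)
    for r in rows:
        strata[r[2]].append(r)
    n = len(rows)
    return sum(len(group) / n * demographic_disparity(group)
               for group in strata.values())

print(round(conditional_demographic_disparity(records), 3))
```

Under this formulation a value of zero means no disparity remains after conditioning on the legitimate attribute; the article's contribution is the legal argument for treating such a measure as baseline statistical evidence, not the arithmetic itself.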