Unpredictability of AI

arXiv.org Artificial Intelligence

With the increase in the capabilities of artificial intelligence over the last decade, a significant number of researchers have realized the importance of not only creating capable intelligent systems, but also of making them safe and secure [1-6]. Unfortunately, the field of AI Safety is very young, and researchers are still working to identify its main challenges and limitations. Impossibility results are well known in many fields of inquiry [7-13], and some have now been identified in AI Safety [14-16]. In this paper, we concentrate on the poorly understood concept of unpredictability of intelligent systems [17], which limits our ability to understand the impact of the intelligent systems we are developing and poses a challenge for software verification and intelligent system control, as well as for AI Safety in general. In theoretical computer science, and in software development in general, many impossibility results are well established; some of them are strongly related to the subject of this paper. For example, Rice's Theorem states that no computationally effective method can decide whether a program will exhibit a particular nontrivial behavior, such as producing a specific output [18].
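As an aside not found in the abstract itself, the standard intuition behind Rice's Theorem is a reduction from the halting problem: any total decider for a nontrivial semantic property would yield a halting decider. The Python sketch below assumes a hypothetical decider, decides_prints_42, purely for illustration; no such total decider can exist.

    # Sketch of the reduction behind Rice's Theorem (illustrative only).
    # Assume, for contradiction, a total decider for the nontrivial
    # semantic property "this program prints 42". The name is hypothetical.

    def decides_prints_42(program):
        """Assumed-for-contradiction property decider; cannot actually exist."""
        raise NotImplementedError("ruled out by Rice's Theorem")

    def halts(program, x):
        """If the decider above existed, halting would become decidable."""
        def wrapper():
            program(x)   # diverges exactly when `program` diverges on `x`
            print(42)    # reached, satisfying the property, only on halting
        # `wrapper` prints 42 iff `program(x)` halts, so deciding the
        # property would decide halting -- contradicting its undecidability.
        return decides_prints_42(wrapper)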


EDC Las Vegas and the increasing impossibility of escapism

Los Angeles Times

But the fact that we have to think about this at all is a slow-motion crime against joy. Even as the fireworks exploded over the Kinetic Field, even as couples got married in the EDC chapel, even as the leather-fetish dancers licked each other in the Neon Garden, I couldn't stop thinking about the Santa Fe, Texas, teen who survived her school's mass shooting and said she always imagined her school would be a target. The only question was when.


Clustering Redemption – Beyond the Impossibility of Kleinberg's Axioms

Neural Information Processing Systems

Kleinberg (2002) stated three axioms that any clustering procedure should satisfy and showed there is no clustering procedure that simultaneously satisfies all three. One of these, called the consistency axiom, requires that when the data is modified in a helpful way, i.e., if points in the same cluster are made more similar and those in different ones made less similar, the algorithm should output the same clustering. To circumvent this impossibility result, research has focused on considering clustering procedures that have a clustering quality measure (or a cost) and showing that a modification of Kleinberg's axioms that takes cost into account leads to feasible clustering procedures. In this work, we take a different approach, based on the observation that the consistency axiom fails to be satisfied when the "correct" number of clusters changes. We modify this axiom by making use of cost functions to determine the correct number of clusters, and require that consistency holds only if the number of clusters remains unchanged.
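To make the consistency axiom concrete, here is a small self-contained check (our own sketch, not code from the paper): shrink within-cluster distances, stretch between-cluster distances, and test whether a clustering procedure returns the same partition. The helper names, the choice of single linkage, and the fixed k are assumptions made for the example.

    # Sketch of a Kleinberg-style consistency check (illustrative only).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def consistent_transform(D, labels, shrink=0.5, stretch=2.0):
        """Shrink within-cluster distances and stretch between-cluster ones."""
        same = labels[:, None] == labels[None, :]
        D2 = np.where(same, D * shrink, D * stretch)
        np.fill_diagonal(D2, 0.0)
        return D2

    def single_linkage_k(D, k):
        """Single-linkage clustering of a distance matrix, cut at k clusters."""
        Z = linkage(squareform(D, checks=False), method="single")
        return fcluster(Z, t=k, criterion="maxclust")

    def same_partition(a, b):
        """Compare partitions via co-membership, ignoring label numbering."""
        return np.array_equal(a[:, None] == a[None, :], b[:, None] == b[None, :])

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(8, 1, (5, 2))])
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

    before = single_linkage_k(D, k=2)
    after = single_linkage_k(consistent_transform(D, before), k=2)
    print("partition unchanged:", same_partition(before, after))

With k held fixed, a procedure like single linkage passes this check; Kleinberg's impossibility bites when the procedure must also choose the number of clusters, which is exactly the case the paper's modified axiom targets.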


Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)

arXiv.org Artificial Intelligence

Utility functions or their equivalents (value functions, objective functions, loss functions, reward functions, preference orderings) are a central tool in most current machine learning systems. These mechanisms for defining goals and guiding optimization run into practical and conceptual difficulty when there are independent, multi-dimensional objectives that need to be pursued simultaneously and cannot be reduced to each other. Ethicists have proved several impossibility theorems that stem from this origin; those results appear to show that there is no way of formally specifying what it means for an outcome to be good for a population without violating strong human ethical intuitions (in such cases, the objective function is a social welfare function). We argue that this is a practical problem for any machine learning system (such as medical decision support systems or autonomous weapons) or rigidly rule-based bureaucracy that will make high-stakes decisions about human lives: such systems should not use objective functions in the strict mathematical sense. We explore the alternative of using uncertain objectives, represented for instance as partially ordered preferences, or as probability distributions over total orders. We show that previously known impossibility theorems can be transformed into uncertainty theorems in both of those settings, and prove lower bounds on how much uncertainty is implied by the impossibility results. We close by proposing two conjectures about the relationship between uncertainty in objectives and severe unintended consequences from AI systems.
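The abstract's proposal can be made concrete with a toy sketch (ours, not the authors' code): represent the objective as a probability distribution over total orders of outcomes rather than a single utility function, and let the decision rule surface residual disagreement instead of collapsing it. The outcomes and weights below are invented.

    # Toy sketch: an uncertain objective as a distribution over total orders.
    from collections import Counter

    # Probability distribution over rankings of outcomes (best first).
    order_dist = {
        ("A", "B", "C"): 0.40,
        ("B", "A", "C"): 0.35,
        ("C", "B", "A"): 0.25,
    }

    # One cautious decision rule: choose the outcome most often ranked first,
    # and report how much probability mass disagrees rather than hiding it.
    top_mass = Counter()
    for order, p in order_dist.items():
        top_mass[order[0]] += p

    best, mass = top_mass.most_common(1)[0]
    print(f"choose {best}; {1 - mass:.2f} of the mass prefers another outcome")

The point of keeping the full distribution, rather than averaging it into one utility, is that the residual 0.60 of disagreeing mass stays visible to downstream safety checks instead of being optimized away.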


Data is not facts - the impossibility of being unbiased

@machinelearnbot

We talk a lot about making decisions based on data, but we need to be careful about how hard and fast those decisions are. Our decisions are only as good as our data and our analysis. Data is always a sample of the full scope of reality, and analytics is always an interpretation of that sample. We need to be cognizant of the differences between Opinions, Facts, and Conclusions. And, just as important, we need to recognize the relationship between our judgement and our ego: all disagreements are personal to some degree.
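A minimal simulation (ours, not from the post) of the point that data is always a sample: if the collection process is skewed, the resulting estimate can look precise while misrepresenting the underlying population. The distribution and cutoff below are arbitrary assumptions.

    # Biased sampling demo: a confident estimate from a skewed sample.
    import random

    random.seed(0)
    population = [random.gauss(50, 10) for _ in range(100_000)]

    # Suppose the collection process only ever sees values above a cutoff.
    biased_sample = [x for x in population if x > 55][:500]

    pop_mean = sum(population) / len(population)
    sample_mean = sum(biased_sample) / len(biased_sample)
    print(f"population mean {pop_mean:.1f} vs biased sample mean {sample_mean:.1f}")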