Scientific Discovery: Overviews


Selective Probabilistic Classifier Based on Hypothesis Testing

arXiv.org Artificial Intelligence

In this paper, we propose a simple yet effective method to deal with the violation of the Closed-World Assumption for a classifier. Previous works tend to apply a threshold either on the classification scores or the loss function to reject the inputs that violate the assumption. However, these methods cannot achieve the low False Positive Ratio (FPR) required in safety applications. The proposed method is a rejection option based on hypothesis testing with probabilistic networks. With probabilistic networks, it is possible to estimate the distribution of outcomes instead of a single output. By applying a Z-test over the mean and standard deviation for each class, the proposed method can assess the statistical significance of the network's certainty and reject uncertain outputs. The proposed method was evaluated on different configurations of the COCO and CIFAR datasets. Its performance is compared with Softmax Response, a known top-performing method. It is shown that the proposed method can achieve a broader range of operation and cover a lower FPR than the alternative.
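As a rough illustration of this rejection rule, the sketch below assumes the probabilistic network is approximated by repeated stochastic forward passes (e.g., Monte Carlo dropout) and applies a one-sided Z-test to the gap between the top-scoring class and the runner-up; the helper `selective_predict`, the significance level `alpha`, and the synthetic inputs are illustrative assumptions, and the exact per-class statistic used in the paper may differ.

```python
import numpy as np
from scipy import stats

def selective_predict(sample_probs, alpha=0.05):
    """Accept or reject a prediction from sampled class probabilities.

    sample_probs: array of shape (n_samples, n_classes), e.g. softmax
    outputs collected from repeated stochastic forward passes.
    Returns (predicted_class, accepted).
    """
    n_samples, _ = sample_probs.shape
    mean = sample_probs.mean(axis=0)           # per-class mean score
    std = sample_probs.std(axis=0, ddof=1)     # per-class std of scores

    top, runner_up = np.argsort(mean)[-2:][::-1]
    # Z statistic for the gap between the top class and the runner-up:
    # a large, statistically significant gap means a confident prediction.
    gap = mean[top] - mean[runner_up]
    se = np.sqrt((std[top] ** 2 + std[runner_up] ** 2) / n_samples)
    z = gap / max(se, 1e-12)
    p_value = 1.0 - stats.norm.cdf(z)          # one-sided Z-test

    return int(top), bool(p_value < alpha)

# Example: 30 stochastic forward passes over a 10-class problem.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10) * 2.0, size=30)
cls, accepted = selective_predict(probs)
print(cls, "accepted" if accepted else "rejected")
```

Expressed this way, the operating point is a significance level rather than a raw score threshold.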


Translational NLP: A New Paradigm and General Principles for Natural Language Processing Research

arXiv.org Artificial Intelligence

Natural language processing (NLP) research combines the study of universal principles, through basic science, with applied science targeting specific use cases and settings. However, the process of exchange between basic NLP and applications is often assumed to emerge naturally, resulting in many innovations going unapplied and many important questions left unstudied. We describe a new paradigm of Translational NLP, which aims to structure and facilitate the processes by which basic and applied NLP research inform one another. Translational NLP thus presents a third research paradigm, focused on understanding the challenges posed by application needs and how these challenges can drive innovation in basic science and technology design. We show that many significant advances in NLP research have emerged from the intersection of basic principles with application needs, and present a conceptual framework outlining the stakeholders and key questions in translational research. Our framework provides a roadmap for developing Translational NLP as a dedicated research area, and identifies general translational principles to facilitate exchange between basic and applied research.


Toward Building Science Discovery Machines

arXiv.org Artificial Intelligence

The dream of building machines that can do science has inspired scientists for decades. Remarkable advances have been made recently; however, we are still far from achieving this goal. In this paper, we focus on the scientific discovery process, where a high level of reasoning and remarkable problem-solving ability are required. We review different machine learning techniques used in scientific discovery, along with their limitations. We survey and discuss the main principles driving the scientific discovery process. These principles are used in different fields and by different scientists to solve problems and discover new knowledge. We provide many examples of the use of these principles in different fields such as physics, mathematics, and biology. We also review AI systems that attempt to implement some of these principles. We argue that building science discovery machines should be guided by these principles, as an alternative to the dominant approach of current AI systems that focuses on narrow objectives. Building machines that fully incorporate these principles in an automated way might open the door to many advances.


Hypothesis Testing for High-Dimensional Multinomials: A Selective Review

arXiv.org Machine Learning

The statistical analysis of discrete data has been the subject of extensive statistical research dating back to the work of Pearson. In this survey we review some recently developed methods for testing hypotheses about high-dimensional multinomials. Traditional tests like the $\chi^2$ test and the likelihood ratio test can have poor power in the high-dimensional setting. Much of the research in this area has focused on finding tests with asymptotically Normal limits and developing (stringent) conditions under which tests have Normal limits. We argue that this perspective suffers from a significant deficiency: it can exclude many high-dimensional cases where, despite having non-Normal null distributions, carefully designed tests can have high power. Finally, we illustrate that taking a minimax perspective, and considering refinements of this perspective, can lead naturally to powerful and practical tests.
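For concreteness, the two classical statistics referred to above can be written for a sample of $n$ observations with category counts $X_1, \dots, X_d$ and a fully specified null $p_0 = (p_{01}, \dots, p_{0d})$ (a standard formulation; the survey's own notation may differ):

\[
  T_{\chi^2} = \sum_{j=1}^{d} \frac{(X_j - n p_{0j})^2}{n p_{0j}},
  \qquad
  T_{\mathrm{LR}} = 2 \sum_{j=1}^{d} X_j \log \frac{X_j}{n p_{0j}} .
\]

Under classical asymptotics with $d$ fixed, both statistics converge to a $\chi^2_{d-1}$ distribution; when $d$ grows with $n$ (many categories, few observations per category), this approximation and the resulting power can break down, which is the regime the survey addresses.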


Modelling serendipity in a computational context

arXiv.org Artificial Intelligence

Building on a survey of previous theories of serendipity and creativity, we advance a sequential model of serendipitous occurrences. We distinguish between serendipity as a service and serendipity in the system itself, clarify the role of invention and discovery, and provide a measure for the serendipity potential of a system. While a system arguably cannot be guaranteed to be serendipitous, it can have a high potential for serendipity. Practitioners can use these theoretical tools to evaluate a computational system's potential for unexpected behaviour that may have a beneficial outcome. In addition to qualitative features of serendipity potential, the model also includes quantitative ratings that can guide development work. We show how the model is used in three case studies of existing and hypothetical systems, in the context of evolutionary computing, automated programming, and (next-generation) recommender systems. From this analysis, we extract recommendations for practitioners working with computational serendipity, and outline future directions for research.


2at1RLo

#artificialintelligence

We have moved from the era of the desktop app to the era of the web page to the era of the mobile app, and the latest paradigm shift seems to be happening now: the conversation. These providers will most likely sit at the center of an ecosystem which will handle NLP (Natural Language Processing), semantic analysis, and other core tasks such as location and calendar integration. Currently, there are "bits and pieces" for particulars like dialogs (IBM Dialog) and NLP (IBM AlchemyAPI), all the way to large SDKs for voice and digital assistants (Alexa, Siri, and Google). While the examples above are simplistic, they do provide some structure and a view into the basic text flow of voice and chat applications.


Notes on a New Philosophy of Empirical Science

arXiv.org Machine Learning

This book presents a methodology and philosophy of empirical science based on large-scale lossless data compression. In this view a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 GB, it is easy to justify a 10 MB model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
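The accounting behind that parenthetical example can be made explicit with the usual two-part code length (a standard MDL-style formulation, not a formula quoted from the book):

\[
  L(M, D) = L(M) + L(D \mid M),
\]

where $L(M)$ is the length of the compressor (the theory) and $L(D \mid M)$ is the length of the benchmark database $D$ after compression. A 10 MB model is justified whenever it reduces $L(D \mid M)$ on a 100 GB database by more than the 10 MB it costs to state the model, which is a negligible fraction of the total.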


Creativity at the Metalevel: AAAI-2000 Presidential Address

AI Magazine

Creativity is sometimes taken to be an inexplicable aspect of human activity. By summarizing a considerable body of literature on creativity, I hope to show how to turn some of the best ideas about creativity into programs that are demonstrably more creative than any we have seen to date. I believe the key to building more creative programs is to give them the ability to reflect on and modify their own frameworks and criteria. That is, I believe that the key to creativity is at the metalevel.


The 1995 AAAI Spring Symposia Reports

AI Magazine

The Association for the Advancement of Artificial Intelligence held its 1995 Spring Symposium Series on March 27 to 29 at Stanford University. This article contains summaries of the nine symposia that were conducted: (1) Empirical Methods in Discourse Interpretation and Generation; (2) Extending Theories of Action: Formal Theory and Practical Applications; (3) Information Gathering from Heterogeneous, Distributed Environments; (4) Integrated Planning Applications; (5) Interactive Story Systems: Plot and Character; (6) Lessons Learned from Implemented Software Architectures for Physical Agents; (7) Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity, and Generativity; (8) Representing Mental States and Mechanisms; and (9) Systematic Methods of Scientific Discovery.


The Computer Revolution in Philosophy

Classics

"Computing can change our ways of thinking about many things, mathematics, biology, engineering, administrative procedures, and many more. But my main concern is that it can change our thinking about ourselves: giving us new models, metaphors, and other thinking tools to aid our efforts to fathom the mysteries of the human mind and heart. The new discipline of Artificial Intelligence is the branch of computing most directly concerned with this revolution. By giving us new, deeper, insights into some of our inner processes, it changes our thinking about ourselves. It therefore changes some of our inner processes, and so changes what we are, like all social, technological and intellectual revolutions. "This book, published in 1978 by Harvester Press and Humanities Press, has been out of print for many years, and is now online, produced from a scanned in copy of the original, digitised by OCR software and made available in September 2001. Since then a number of notes and corrections have been added. Atlantic Highlands, NJ: Humanities Press