AI Magazine is an official publication of the Association for the Advancement of Artificial Intelligence (AAAI). It is published four times each year in fall, winter, spring, and summer issues, and is sent to all members of the Association and subscribed to by most research libraries. Back issues are available on-line (issues less than 18 months old are only available to AAAI members). The purpose of AI Magazine is to disseminate timely and informative expository articles that represent the current state of the art in AI and to keep its readers posted on AAAI-related matters. The articles are selected for appeal to readers engaged in research and applications across the broad spectrum of AI.
Dialogue systems have recently become essential in our lives, and their use is growing more fluid and natural over time, thanks to advances in the fields of NLP and AI. In this paper, we provide an overview of the current state of the art in dialogue systems, their categories, and the different approaches to building them. We follow with a discussion that compares these techniques and analyzes the strengths and weaknesses of each. Finally, we present an opinion piece suggesting that research be oriented toward standardizing the way dialogue systems are built.
Here we present CaosDB, a Research Data Management System (RDMS) designed to ensure seamless integration of inhomogeneous data sources and repositories of legacy data. Its primary purpose is the management of data from the biomedical sciences, both from simulations and experiments, during the complete research data lifecycle. An RDMS for this domain faces particular challenges: Research data arise in huge amounts, from a wide variety of sources, and traverse a highly branched path of further processing. To be accepted by its users, an RDMS must be built around the scientists' workflows and practices, and thus support changes in workflow and data structure. Nevertheless, it should encourage and support the development and observation of standards and furthermore facilitate the automation of data acquisition and processing with specialized software. The storage data model of an RDMS must reflect these complexities with appropriate semantics and ontologies while offering simple methods for finding, retrieving, and understanding relevant data. We show how CaosDB responds to these challenges and give an overview of the CaosDB Server, its data model, and its easy-to-learn CaosDB Query Language. We briefly discuss the status of the implementation, how we currently use CaosDB, and how we plan to use and extend it.
Vítků, Jaroslav, Dluhoš, Petr, Davidson, Joseph, Nikl, Matěj, Andersson, Simon, Paška, Přemysl, Šinkora, Jan, Hlubuček, Petr, Stránský, Martin, Hyben, Martin, Poliak, Martin, Feyereisl, Jan, Rosa, Marek
Research in Artificial Intelligence (AI) has focused mostly on two extremes: either on small improvements in narrow AI domains, or on universal theoretical frameworks which are usually uncomputable or incompatible with theories of biological intelligence, or which lack practical implementations. The goal of this work is to combine the main advantages of the two: to follow a big picture view, while providing a particular theory and its implementation. In contrast with purely theoretical approaches, the resulting architecture should be usable in realistic settings, but also form the core of a framework containing all the basic mechanisms, into which it should be easier to integrate additional required functionality. In this paper, we present a novel, purposely simple, and interpretable hierarchical architecture which combines multiple different mechanisms into one system: unsupervised learning of a model of the world, learning the influence of one's own actions on the world, model-based reinforcement learning, hierarchical planning and plan execution, and symbolic/sub-symbolic integration in general. The learned model is stored in the form of hierarchical representations with the following properties: 1) they are increasingly more abstract, but can retain details when needed, and 2) they are easy to manipulate in their local and symbolic-like form, thus also allowing one to observe the learning process at each level of abstraction. On all levels of the system, the representation of the data can be interpreted in both a symbolic and a sub-symbolic manner. This enables the architecture to learn efficiently using sub-symbolic methods and to employ symbolic inference.
In this paper, we propose a constraint-based modeling approach for the problem of discovering frequent gradual patterns in a numerical dataset. This SAT-based declarative approach offers an additional possibility to benefit from the recent progress in satisfiability testing and to exploit the efficiency of modern SAT solvers for enumerating all frequent gradual patterns in a numerical dataset. Our approach can easily be extended with additional constraints, such as temporal constraints, in order to extract more specific patterns in a broad range of gradual pattern mining applications. We show the practical feasibility of our SAT model by running experiments on two real-world datasets.
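The core operation behind such SAT-based mining, enumerating every model of a propositional encoding, can be illustrated with a minimal sketch (plain Python, no solver dependency; the toy CNF and variable names are illustrative, not the paper's actual gradual-pattern encoding):

```python
from itertools import product

def satisfies(assignment, cnf):
    """True if the assignment (dict var -> bool) satisfies every clause.
    A literal is a signed integer: +v means var v is True, -v means False."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

def enumerate_models(cnf, variables):
    """Yield every total assignment satisfying the CNF (brute force).
    Real SAT-based miners instead call an incremental solver and add a
    'blocking clause' after each model found, excluding it from further
    search until the formula becomes unsatisfiable."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if satisfies(assignment, cnf):
            yield assignment

# Toy CNF: (x1 or x2) and (not x1 or x3)
cnf = [[1, 2], [-1, 3]]
models = list(enumerate_models(cnf, [1, 2, 3]))
```

In a real instance, each propositional variable would encode a decision such as "attribute A increases in this pattern", and extra constraints (e.g., temporal ones) become additional clauses.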
I use simulation of two multilayer neural networks to gain intuition into the determinants of human learning. The first network, the teacher, is trained to achieve high accuracy in handwritten digit recognition. The second network, the student, learns to reproduce the output of the first network. I show that learning from the teacher is more effective than learning from the data under the appropriate degree of regularization. Regularization allows the teacher to distinguish the trends and to deliver "big ideas" to the student. I also model other learning situations, such as expert and novice teachers, high- and low-ability students, and biased learning experience due to, e.g., poverty and trauma. The results from computer simulation accord remarkably well with findings of the modern psychological literature. The code is written in MATLAB and will be publicly available from the author's web page.
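The teacher-student setup described here corresponds to what the machine learning literature calls knowledge distillation: the student is fit to the teacher's soft outputs rather than to the raw labels. A minimal sketch (Python/NumPy with a one-parameter logistic model standing in for the paper's MATLAB multilayer networks; all data, names, and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(x, targets, lr=0.5, steps=500):
    """Fit p = sigmoid(w * x) to (possibly soft) targets by
    gradient descent on the cross-entropy loss."""
    w = 0.0
    for _ in range(steps):
        p = sigmoid(w * x)
        grad = np.mean((p - targets) * x)  # d(cross-entropy)/dw
        w -= lr * grad
    return w

# Toy data: the true label is 1 when x > 0, with 10% label noise.
x = rng.normal(size=200)
hard_labels = ((x > 0).astype(float) + (rng.random(200) < 0.1)) % 2

# The teacher learns from the noisy hard labels.
w_teacher = train(x, hard_labels)

# The student learns from the teacher's *soft* outputs ("big ideas"),
# which smooth over the noise in individual labels.
soft_targets = sigmoid(w_teacher * x)
w_student = train(x, soft_targets)
```

The student's soft targets vary smoothly with x, so the student recovers the teacher's decision rule without having to average out the label noise itself.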
It's not a perfect measure, but unit sales of industrial robots give some idea of a country's industrial might. The names of the top five buyers in 2017 – China, Japan, South Korea, the US and Germany – shouldn't be too surprising. The global average is 74 robots per 10,000 manufacturing employees. One factor in this is the small electronics and automotive sectors in Australia, which are two major drivers of industrial robot investment. The high number of SMEs and micro-businesses in Australian manufacturing is another.
I always loved products and technology. But ever since I was a child, I was especially fascinated by these big inventions, powered by transformative technological revolutions, that changed - everything! So I felt extremely lucky when, about 20 years ago, at the beginning of my career, I was just in time for one of these revolutions: when the Internet happened. Through the connected PC, the world we lived in was transformed from a "physical world" -- where we used to go to places like libraries, and use things like encyclopedias and paper maps -- to a "digital world" -- where we consume digital information and services from the convenience of our homes. What was especially amazing was the rate and scale of this transformation.
Here at The Next Platform, we've touched on the convergence of machine learning, HPC, and enterprise requirements, looking at ways that vendors are trying to reduce the barriers so that enterprises can leverage AI and machine learning to better address the rapid changes brought about by such emerging trends as the cloud, edge computing, and mobility. At the SC17 show in November 2017, Dell EMC unveiled efforts underway to bring AI, machine learning, and deep learning into the mainstream, similar to how the company and other vendors in recent years have been working to make it easier for enterprises to adopt HPC techniques for their environments. For Dell EMC, that means in part doing so through bundled, engineered systems. IBM has its own strategies underway, including the integration of its PowerAI deep learning enterprise software with its Data Science Experience. Both offerings are aimed at making it easier for enterprises to embrace advanced AI technologies and for developers and data scientists to develop and train machine learning models.
Silos have always been considered a bad thing for enterprise IT environments, and today's push for artificial intelligence and other cognitive technologies is no exception. A recent survey shows that fewer than 50% of enterprises have deployed any of the "intelligent automation technologies" -- such as artificial intelligence (AI) and robotic process automation (RPA). IT leaders participating in the survey say data and applications within their companies are too siloed to make such deployments work. That's the gist of a survey of 500 IT executives, conducted by IDG in partnership with Appian. A large majority of executives, 86%, say they seek to achieve high levels of integration between human work, AI, and RPA over the coming year.