After some fine-tuning of our storage repository, we are back with exciting new content provided by the Association for the Advancement of Artificial Intelligence (AAAI), a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.
Humanoid robots walking across intermittent terrain, robotic arms grasping complex objects, UAVs darting left or right around a tree ... many of the dynamics and control problems we face today combine rich nonlinear dynamics with an inherently combinatorial structure. In this talk, Tedrake will review recent work on planning and control methods that address these two challenges simultaneously.
The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17) was held February 4–9 in San Francisco, California. The purpose of the AAAI conference is to promote research in artificial intelligence (AI) and scientific exchange among AI researchers, practitioners, scientists, and engineers in affiliated disciplines.
Text is a basic material, a primary data layer, in many areas of the humanities and social sciences. If we want to move forward with the agenda that the fields of digital humanities and computational social science are projecting, it is vital to bring together the technical communities that deal with automated text processing and scholars in the humanities and social sciences. To foster new areas of research, it is necessary not only to understand what is out there in terms of proven technologies and infrastructures such as CLARIN, but also how the developers of text analytics and researchers in the humanities and social sciences can come to understand the challenges of each other's fields better. What are the research questions of the researchers working on the texts?
The latter is, due to its importance, protected not only by national health legislation but also by Article 8 of the European Convention on Human Rights, the right to privacy. As already mentioned, medicine is a profession that requires a certain level of secrecy regarding confidential information, and according to the Court's previous decisions, this secrecy is even more important in cases that involve psychiatric records. The robots' involvement in medical treatment on the one hand, and the easy access to the information they gain during treatment on the other, call into question the effectiveness of the provisions of Article 8 of the European Convention on Human Rights. Current legislation in countries around the world pays little attention to this particular area, even though modern robotic approaches have already been introduced and are very well accepted.
This paper proposes the first experimental architecture designed for the optimization of ultra-narrowband (UNB) networks. The proposed architecture enables context data collection, context model development, optimization, and transmission control using a rapid experimentation cycle enabled by flow-based programming in Node-RED. Through preliminary results, we show the feasibility of PHY- and MAC-layer context data collection, point out challenges specific to UNB context modeling, and discuss options for optimization. All datasets, context modeling tools, and optimization tools used in the paper will be released as open source.
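The cycle the abstract describes can be pictured as collect-model-decide. A minimal sketch follows, in Python rather than as a Node-RED flow, with entirely hypothetical data and function names (`collect_context`, `build_context_model`, `transmission_control` are illustrations, not the paper's API): PHY/MAC context samples are collected, a trivial context model summarizes them, and a transmission-control decision is derived from the model.

```python
import statistics

def collect_context():
    """Stand-in for PHY/MAC context data collection.

    Returns fixed example samples of (rssi_dbm, collision_flag);
    a real deployment would gather these from the radio stack.
    """
    return [(-110.0, False), (-105.0, True), (-120.0, False), (-98.0, False)]

def build_context_model(samples):
    """A deliberately simple context model: mean RSSI and observed collision rate."""
    rssi = [s[0] for s in samples]
    collisions = [s[1] for s in samples]
    return {
        "mean_rssi": statistics.mean(rssi),
        "collision_rate": sum(collisions) / len(collisions),
    }

def transmission_control(model):
    """Toy policy: defer transmission when the modeled collision rate is high."""
    return "defer" if model["collision_rate"] > 0.5 else "transmit"

model = build_context_model(collect_context())
print(transmission_control(model))  # -> transmit (collision rate 0.25)
```

In a flow-based setting such as Node-RED, each of these three functions would map naturally to one node, which is what makes swapping out the context model for rapid experimentation cheap.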
We propose two rule-based approaches for mapping text into predicate logic. This led us to develop a grammar induction approach for semantic parsing and ontology learning. The induced context-free grammar parses a sentence into a semantic tree, a meaning representation in which each node has its own semantic category. To evaluate the models, we propose a new metric: the accuracy of a classifier trained on the generated dataset and tested on the original, manually constructed dataset.
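The proposed metric can be sketched in a few lines. This is an illustration only, with made-up data and a deliberately trivial stand-in classifier (the abstract does not say which classifier is used): train on the generated dataset, then report plain accuracy on the manually constructed one.

```python
from collections import Counter

def train_majority(labeled_pairs):
    """Trivial stand-in classifier: always predict the most frequent training label."""
    labels = [label for _, label in labeled_pairs]
    return Counter(labels).most_common(1)[0][0]

def generated_vs_original_accuracy(generated, original):
    """The metric: train on the generated dataset, test on the original one.

    Both arguments are lists of (sentence, label) pairs.
    """
    predicted = train_majority(generated)
    correct = sum(1 for _, label in original if label == predicted)
    return correct / len(original)

# Hypothetical example data.
generated = [("john runs", "event"), ("mary sleeps", "event"), ("a dog", "entity")]
original = [("birds fly", "event"), ("the cat", "entity")]
print(generated_vs_original_accuracy(generated, original))  # -> 0.5
```

The point of the metric is that it scores the generated dataset indirectly: if the generated examples capture the same label distribution and regularities as the manual ones, a classifier trained on them transfers well to the original test set.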
Even though empirical research on computer-mediated communication (CMC) has a tradition of almost two decades, there are still very few annotated CMC/social media corpora available to the scientific community and the public. One crucial issue is the unclear legal situation. Using the example of a legal expertise sought for the integration of an existing German chat corpus into CLARIN-D, the talk will highlight this issue (under German law) and describe how it has been handled in the project. The creation of standards and the adaptation of NLP tools for this new type of language resource is a digital humanities topic par excellence, since (1) it focuses on data that are born digital, while at the same time (2) it requires a combination of expertise from the humanities and computational sciences.
With the increasing volume and impact of communication on social media, social media analysis has become one of the most active topics in natural language research, as can be seen in the growing number of workshops and conferences dedicated to it, the projects funded, and the research centers established. As a result, a number of social media resources containing chats, online commentaries, reviews, blogs, emails, forums, etc., as well as audio and video recordings, have accumulated in the repositories of CLARIN centers. Due to their distinct communicative characteristics, these resources pose new technical challenges for standard natural language processing tools, as well as new legal and ethical challenges for their dissemination. CLARIN has addressed both, making the available infrastructure an important means of attracting new users to the CLARIN community.