This paper proposes the first experimental architecture designed for the optimization of ultra-narrowband (UNB) networks. The proposed architecture enables context data collection, context model development, optimization, and transmission control, using a rapid-experimentation-cycle approach enabled by flow-based programming with Node-RED. Through preliminary results, we show the feasibility of PHY- and MAC-layer context data collection, point out challenges specific to UNB context modeling, and discuss options for optimization. All datasets and the context modeling and optimization tools used in the paper will be released as open source.
We propose two rule-based approaches for mapping text into predicate logic. This led us to develop a grammar-induction approach for semantic parsing and ontology learning. The induced context-free grammar parses a sentence of text into a semantic tree, a meaning representation in which each node carries its own semantic category. To evaluate the models, we propose a new metric: the accuracy of a classifier trained on the generated dataset and tested on the original, manually constructed dataset.
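The core parsing idea can be sketched in a few lines. The toy grammar and semantic categories below are invented for illustration and are not the induced grammar from this work; the point is only that when grammar nonterminals double as semantic categories, an ordinary parse tree is itself a semantic tree.

```python
# Minimal sketch: a recursive-descent parser over a toy grammar whose
# nonterminals are semantic categories, so parsing yields a semantic tree.
GRAMMAR = {
    "STATEMENT": [["ENTITY", "RELATION", "ENTITY"]],
    "ENTITY":    [["Ljubljana"], ["Slovenia"]],
    "RELATION":  [["is", "the", "capital", "of"]],
}

def parse(category, tokens, pos=0):
    """Return (semantic_tree, next_pos) or None if no parse exists."""
    if category not in GRAMMAR:                      # terminal word
        if pos < len(tokens) and tokens[pos] == category:
            return category, pos + 1
        return None
    for production in GRAMMAR[category]:
        children, p = [], pos
        for symbol in production:
            result = parse(symbol, tokens, p)
            if result is None:
                break
            child, p = result
            children.append(child)
        else:                                        # whole production matched
            return (category, children), p
    return None

tree, _ = parse("STATEMENT", "Ljubljana is the capital of Slovenia".split())
print(tree)
# ('STATEMENT', [('ENTITY', ['Ljubljana']),
#                ('RELATION', ['is', 'the', 'capital', 'of']),
#                ('ENTITY', ['Slovenia'])])
```

A real induced grammar would be learned from data and far larger, but the output shape, a tree of category-labeled nodes, is the same kind of meaning representation the abstract describes.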
Even though empirical research on computer-mediated communication (CMC) has a tradition of almost two decades, very few annotated CMC/social media corpora are available to the scientific community and the public. One crucial issue is the unclear legal situation surrounding the creation and dissemination of such corpora. Using the example of a legal expert opinion sought for the integration of an existing German chat corpus into CLARIN-D, the talk will highlight this issue (under German law) and describe how it has been handled in the project. The creation of standards and the adaptation of NLP tools for this new type of language resource is a digital humanities topic par excellence, since (1) it focuses on data that are born digital, while (2) it requires a combination of expertise from the humanities and the computational sciences.
With the increasing volume and impact of communication on social media, social media analysis has become one of the most active topics in natural language research, as can be observed in the growing number of workshops and conferences dedicated to the topic, projects funded, and research centers established. As a result, a number of social media resources containing chats, online commentaries, reviews, blogs, emails, forum posts, etc., as well as audio and video recordings, have been accumulated in the repositories of CLARIN centers. Moreover, due to their distinct communicative characteristics, these resources pose new technical challenges for standard natural language processing tools, as well as new legal and ethical challenges for their dissemination. CLARIN has addressed these challenges too, making the available infrastructure an important means of attracting new users to the CLARIN community.
The text analysis part of the AMiCA project (http://www.amicaproject.be), a cooperation between the University of Antwerp and the University of Ghent, developed methods and software to help moderators detect occurrences of unwanted or dangerous situations in their social networks. More specifically, the project developed prototype systems for the detection of cyberbullying, suicide announcements, and sexually transgressive behavior. In this talk I will focus on the text analysis methods that were used for normalization of social media text, for profiling users, and for detecting dangerous content. I will describe the architectures and results of the three resulting applications.
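One of the methods mentioned above, normalization of social media text, can be illustrated with a toy dictionary lookup. The lexicon and function below are an invented sketch, not AMiCA's actual normalization component, which uses considerably more sophisticated models:

```python
# Toy sketch of lexicon-based normalization for noisy social media text:
# map known nonstandard spellings to their canonical forms, pass the
# rest through unchanged.
NORM_LEXICON = {
    "u": "you", "r": "are", "2morrow": "tomorrow", "gr8": "great",
}

def normalize(text: str) -> str:
    return " ".join(NORM_LEXICON.get(tok.lower(), tok) for tok in text.split())

print(normalize("u r gr8"))   # -> you are great
```

Normalizing first matters for the downstream tasks: profiling and content-detection models trained on standard text perform better once nonstandard spellings are mapped to canonical forms.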
The aims of the CLARIN-PLUS workshop "Creation and Use of Social Media Resources" are: to demonstrate the possibilities of social media resources and natural language processing tools for researchers from diverse research backgrounds who are interested in empirical research on language and social practices in computer-mediated communication; to promote opportunities for interdisciplinary cooperation; and to initiate a discussion on the various approaches to social media data collection and processing.
CLARIN makes digital language resources available to scholars, researchers, students and citizen-scientists from all disciplines, especially in the humanities and social sciences, through single sign-on access. CLARIN offers long-term solutions and technology services for deploying, connecting, analyzing and sustaining digital language data and tools. CLARIN supports scholars who want to engage in cutting-edge data-driven research, contributing to a truly multilingual European Research Area.
Teaches basic aspects of complexity and complex systems, answering the question: what makes a system complex? Aspects covered include nonlinearity; order, disorder and chaos; emergence; and complex adaptive systems. Introduces methods, models and simulation tools to study the behaviours of complex systems, and provides hands-on experience through the use of software for building, simulating and visualizing complex networks.
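The transition from order to chaos in a nonlinear system can be shown with a classic textbook example. The snippet below is an illustrative sketch of my own, not course material: the logistic map x_{n+1} = r·x_n·(1 − x_n) is about the simplest nonlinear rule that moves from orderly to chaotic behaviour as the parameter r grows.

```python
# The logistic map: iterate a simple nonlinear rule and watch its
# long-run behaviour change with the parameter r.
def logistic_orbit(r, x0=0.2, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.8: the orbit settles to the fixed point 1 - 1/r (order)
print(logistic_orbit(2.8)[-1])
# r = 4.0: the orbit never settles and fills the interval [0, 1] (chaos)
print(logistic_orbit(4.0)[-1])
```

For r = 2.8 the last iterate sits near 1 − 1/2.8 ≈ 0.643; for r = 4.0 successive runs of iterates wander unpredictably, which is the kind of qualitative change the course's simulation tools are meant to make tangible.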
Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI) (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI aims to promote research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.