
A Step Towards Modeling and Destabilizing Human Trafficking Networks Using Machine Learning Methods

AAAI Conferences

Human trafficking is a multi-dimensional problem for which we have incomplete data, limited knowledge of the exploiters, and little understanding of the dynamics of the process. Addressing it requires a larger, more complete database and an understanding of the key actors and their interactions in a dynamic environment. Methods suited to this task exist in the areas of data mining, machine learning, network analysis, and multi-agent systems; using them, it is possible to create a model tailored to detecting and preventing human trafficking, and they can yield applicable, successful solutions for different components of the problem. The goal is to build an intelligent system to enable collaboration and analysis, to identify and profile victims, traffickers, buyers, and exploiters, to predict human trafficking patterns, and to disrupt and destabilize human trafficking networks. In this paper, I outline how some of these methods may help analyze and model the dynamic phenomenon of human trafficking. The purpose is to see whether, using intelligent systems and appropriate collaboration and analysis tools, optimized intervention strategies can be created to profile victims and traffickers and to impact, dissolve, and disrupt the human trafficking network in such a way that it is unable to recover.
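As one concrete illustration of the network-analysis component, the sketch below (Python with networkx; the graph is a stand-in benchmark, not real trafficking data) removes the highest-betweenness actor at each step and tracks fragmentation, a simple proxy for disruption from which a network cannot recover.

    # Targeted node removal as a toy destabilization strategy.
    import networkx as nx

    G = nx.karate_club_graph()  # hypothetical stand-in for a covert network
    while G.number_of_nodes() > 1:
        centrality = nx.betweenness_centrality(G)
        key_actor = max(centrality, key=centrality.get)
        G.remove_node(key_actor)  # intervene on the most central actor
        parts = nx.number_connected_components(G)
        print(f"removed node {key_actor}: {parts} components remain")
        if parts >= 5:  # crude threshold for "too fragmented to recover"
            break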


Speech Technology for Information Access: a South African Case Study

AAAI Conferences

Telephone-based information access has the potential to deliver a significant positive impact in the developing world. We discuss some of the most important issues that must be addressed in order to realize this potential, including matters related to resource development, automatic speech recognition, text-to-speech systems, and user-interface design. Although our main focus has been on the eleven official languages of South Africa, we believe that many of these same issues will be relevant for the application of speech technology throughout the developing world.


The Privacy Paradox

AAAI Conferences

Present privacy legislation continues to be drafted on the basis of the Strasbourg Convention of 1981. The mere fact that current privacy laws rest on principles drafted 29 years ago, when the web did not exist, shows that privacy legislation needs to make a quantum leap to be in line with the realities of today's operating environment. If the status quo is kept, the law and its application will face serious (and sometimes insurmountable) obstacles to implementation, making compliance costly for private business while jeopardizing the effectiveness of privacy protection for individuals. A new set of rules should be drafted and established that addresses the changed environment of information and communication technology, allowing the free flow of information while assuring due protection of personal data.


Ontological Semantics for Data Privacy Compliance: The NEURONA Project

AAAI Conferences

The increasing need for legal information and content management, caused by the growing amount of unstructured (or poorly structured) legal data managed by legal publishing companies, law firms, and public administrations, and by the increasing amount of legal information directly available on the World Wide Web, has created an urgent need to construct conceptual structures for knowledge representation to share and intelligently manage all this information, whilst making human-machine communication and understanding possible. Some of the top legal ontologies developed so far include the Functional Ontology for Law [FOLaw] (Valente 1995), the Frame-Based Ontology (van Kralingen 1995), the LRI-Core ontology (Breuker 2004), the DOLCE CLO [Core Legal Ontology] (Gangemi et al. 2003), and the Ontology of Fundamental Concepts (Rubino et al. 2006, Sartor 2006), the basis for the LKIF-Core Ontology (Breuker et al. 2007). Nevertheless, most legal ontologies are domain-specific ontologies, which represent particular legal domains for search, indexing, and reasoning in a specific domain of national or European law (e.g., the IPRONTO ontology by Delgado et al. 2003).


Preprocessing Legal Text: Policy Parsing and Isomorphic Intermediate Representation

AAAI Conferences

One of the most significant challenges in achieving digital privacy is incorporating privacy policy directly in computer systems. While rule systems have long existed, translating privacy laws, regulations, policies, and contracts into processor-amenable forms is slow and difficult because the legal text is scattered, run-on, and unstructured, antithetical to the lean and logical forms of computer science. We are using and developing isomorphic intermediate forms as a Rosetta Stone-like tool to accelerate the translation process and, we hope, to support future domain-specific natural language processing technology. This report describes our experience, our thoughts about how to improve the form, and our discoveries about the form and logic of legal text that will affect the successful development of a rules tool for implementing real-world, complex privacy policies.
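As a toy illustration of the idea (not the authors' actual notation), the sketch below pairs one hypothetical privacy clause with a machine-readable rule that keeps a pointer back to its source text, which is the isomorphism the paper is after.

    # An isomorphic intermediate form: the rule carries its source clause.
    clause = ("A covered entity may disclose protected health information "
              "to the individual who is the subject of that information.")

    rule = {
        "source": clause,  # isomorphism: the rule can be traced to its text
        "actor": "covered_entity",
        "action": "disclose",
        "object": "protected_health_information",
        "condition": "recipient == subject_of_information",
        "modality": "may",  # a permission, not an obligation
    }

    def permitted(actor, action, obj, recipient, subject):
        # Naive check of the single rule above against a requested disclosure.
        return (actor == rule["actor"] and action == rule["action"]
                and obj == rule["object"] and recipient == subject)

    print(permitted("covered_entity", "disclose",
                    "protected_health_information", "alice", "alice"))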


Supervised Topic Models

arXiv.org Machine Learning

We introduce supervised latent Dirichlet allocation (sLDA), a statistical model of labelled documents. The model accommodates a variety of response types. We derive an approximate maximum-likelihood procedure for parameter estimation, which relies on variational methods to handle intractable posterior expectations. Prediction problems motivate this research: we use the fitted model to predict response values for new documents. We test sLDA on two real-world problems: movie ratings predicted from reviews, and the political tone of amendments in the U.S. Senate based on the amendment text. We illustrate the benefits of sLDA versus modern regularized regression, as well as versus an unsupervised LDA analysis followed by a separate regression.
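For orientation, here is a minimal sketch (Python with scikit-learn and toy, made-up data) of the two-stage baseline the abstract compares against: unsupervised LDA followed by a separate regression from topic proportions to the response. sLDA itself differs by fitting the topics and the response jointly.

    # Two-stage baseline: unsupervised LDA, then a separate regression.
    # Toy documents and ratings are illustrative, not the paper's data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import Ridge

    docs = ["a gripping and well acted film", "dull plot and weak dialogue"]
    ratings = [4.5, 1.5]  # hypothetical per-document responses

    counts = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    theta = lda.fit_transform(counts)  # per-document topic proportions

    reg = Ridge().fit(theta, ratings)  # the separate regression step
    print(reg.predict(theta))  # new docs would go through lda.transform first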


Genetic algorithm for robotic telescope scheduling

arXiv.org Artificial Intelligence

This work was inspired by the author's experiences with telescope scheduling. The author's long-term goal is to develop and further extend software for an autonomous observatory. The software shall provide users with all the facilities they need to take scientific images of the night sky, cooperate with other autonomous observatories, and possibly more. This work shows how a genetic algorithm can be used for scheduling a single observatory as well as a network of observatories.
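A minimal sketch of the idea, under stated assumptions: each chromosome is an ordering of observation targets, and the fitness rewards observing high-priority targets early. The target names, priorities, and operators below are illustrative, not the paper's actual encoding or constraints.

    # Genetic algorithm for ordering observations at a single observatory.
    import random

    targets = {"M31": 3, "M42": 5, "NGC7000": 1}  # target -> priority (made up)

    def fitness(order):
        # Earlier slots weigh more, so high-priority targets should come first.
        return sum(prio * (len(order) - i)
                   for i, prio in enumerate(targets[t] for t in order))

    def crossover(a, b):
        # Order crossover: keep a prefix of parent a, fill the rest in b's order.
        cut = random.randrange(1, len(a))
        head = a[:cut]
        return head + [t for t in b if t not in head]

    def mutate(order, p=0.2):
        if random.random() < p:  # occasionally swap two slots
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
        return order

    pop = [random.sample(list(targets), len(targets)) for _ in range(20)]
    for _ in range(50):  # fixed number of generations
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]  # keep the best half, breed the rest from it
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(10)]

    print(max(pop, key=fitness))  # best schedule found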


Web-Based Expert System for Civil Service Regulations: RCSES

arXiv.org Artificial Intelligence

The Internet and expert systems have offered new ways of sharing and distributing knowledge, but there is a lack of research in the area of web-based expert systems. This paper describes the development of a web-based expert system, named RCSES, for the regulations of the civil service in the Kingdom of Saudi Arabia. It is the first system of its kind to address civil service regulations, and the first to be developed using a web-based approach. The proposed system covers 17 regulations of the civil service system. The phases of developing RCSES are presented: knowledge acquisition and selection, ontology construction, and knowledge representation in XML format. XML rule-based knowledge sources and the inference mechanisms were implemented using ASP.NET. An interactive tool for entering the ontology and knowledge base and for running inference was built, making it easy to use, modify, update, and extend the existing knowledge base. The knowledge was validated by experts in the domain of civil service regulations, and RCSES was tested, verified, and validated by different technical users and the developer staff. Compared with other related web-based expert systems, RCSES demonstrates good usability and high performance.
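To make the described architecture concrete, here is a small sketch (in Python rather than ASP.NET, with made-up rule content and tag names; RCSES's actual XML schema is not given in the abstract) of a rule stored as XML plus a naive condition check over known facts.

    # One XML rule (condition -> conclusion) and a naive inference pass.
    import xml.etree.ElementTree as ET

    RULES_XML = """
    <rules>
      <rule id="r1">
        <if>years_of_service &gt;= 10</if>
        <then>eligible_for_grade_promotion</then>
      </rule>
    </rules>
    """

    facts = {"years_of_service": 12}  # hypothetical employee record

    for rule in ET.fromstring(RULES_XML).iter("rule"):
        var, op, value = rule.find("if").text.split()
        if op == ">=" and facts.get(var, 0) >= int(value):
            print("fired:", rule.get("id"), "->", rule.find("then").text)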


AI and HCI: Two Fields Divided by a Common Focus

AI Magazine

Although AI and HCI both explore computing and intelligent behavior, and the fields have seen some crossover, until recently there was not very much. This article outlines a history of the fields that identifies some of the forces that kept them at arm's length. AI was generally marked by a very ambitious, long-term vision requiring expensive systems, although that term was rarely envisioned as being as long as it proved to be, whereas HCI focused more on innovation and improvement of widely used hardware within a short time scale. These differences led to different priorities, methods, and assessment approaches. A consequence was competition for resources, with HCI flourishing in AI winters and moving more slowly when AI was in favor. The situation today is much more promising, in part because of platform convergence: AI can be exploited on widely used systems.


Robotics: Science and Systems IV

AI Magazine

Funding for the conference was provided by the National Science Foundation, the Naval Research Laboratory, ABB, Microsoft Research, Microsoft Robotics, Evolution Robotics, Willow Garage, and Intel. Springer sponsored the best student paper award. The meeting brought together more than 280 researchers from Europe, Asia, North America, and Australia. Twenty of the papers accepted for the technical program were presented orally; the remaining 20 were presented as posters. One invited speaker showed how molecular motors exploit thermal noise to achieve energy efficiency and talked about the implications for building artificial motors.