
Collaborating Authors

 Hendler, James A.


Training Deep Neural Networks with Constrained Learning Parameters

arXiv.org Machine Learning

Today's deep learning models are primarily trained on CPUs and GPUs. Although these models tend to have low error, they consume high power and use large amounts of memory owing to double-precision floating-point learning parameters. Beyond Moore's law, a significant portion of deep learning tasks will run on edge computing systems, which will form an indispensable part of the overall computation fabric. Consequently, training deep learning models for such systems will have to be tailored and adapted to produce models with the following desirable characteristics: low error, low memory, and low power. We believe that deep neural networks (DNNs) whose learning parameters are constrained to a finite set of discrete values, running on neuromorphic computing systems, would be instrumental for intelligent edge computing systems with these desirable characteristics. To this end, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which leverages a coordinate gradient descent-based approach for training deep learning models with finite discrete learning parameters. Next, we elaborate on the theoretical underpinnings and evaluate the computational complexity of CoNNTrA. As a proof of concept, we use CoNNTrA to train deep learning models with ternary learning parameters on the MNIST, Iris, and ImageNet data sets and compare their performance to the same models trained using backpropagation. We use the following performance metrics for the comparison: (i) training error; (ii) validation error; (iii) memory usage; and (iv) training time. Our results indicate that CoNNTrA models use 32x less memory and have errors at par with the backpropagation models.
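The coordinate-wise search over ternary parameters described in the abstract can be sketched as follows. This is a toy illustration on a linear model, not the paper's CoNNTrA implementation; the data, function names, and sweep schedule are assumptions of our own.

```python
# Toy sketch of coordinate descent over ternary weights {-1, 0, +1}.
# A linear least-squares model stands in for the deep networks in the
# paper; names and data here are illustrative, not CoNNTrA's actual code.

X = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
y = [2.0, 0.0, 0.0, 1.0]  # generated by the ternary weight vector (1, -1, 1)

def mse(w):
    """Mean squared error of the linear model Xw against y."""
    return sum((sum(xi * wi for xi, wi in zip(row, w)) - yi) ** 2
               for row, yi in zip(X, y)) / len(X)

def ternary_coordinate_descent(w, n_sweeps=10):
    """Sweep the coordinates; for each weight, keep the best of {-1, 0, +1}.

    Because every candidate value is evaluated exactly, each update can
    only lower the loss, though the search may still terminate at a
    discrete local optimum rather than the global one.
    """
    for _ in range(n_sweeps):
        for j in range(len(w)):
            best_v, best_loss = w[j], mse(w)
            for v in (-1.0, 0.0, 1.0):
                w[j] = v
                if mse(w) < best_loss:
                    best_v, best_loss = v, mse(w)
            w[j] = best_v
    return w, mse(w)

# Each ternary weight needs only 2 bits versus 64 for double precision,
# which is the 32x memory reduction reported in the abstract.
w, final_loss = ternary_coordinate_descent([1.0, 1.0, 1.0])
```

On this tiny problem the sweep settles at a discrete local optimum with nonzero error even though a zero-error ternary solution exists, which is why the paper's empirical comparison against backpropagation on real data sets matters.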


Knowledge Integration for Disease Characterization: A Breast Cancer Example

arXiv.org Artificial Intelligence

With the rapid advancements in cancer research, the information that is useful for characterizing disease, staging tumors, and creating treatment and survivorship plans has been changing at a pace that makes it challenging for physicians to remain current. One example involves the increasing use of biomarkers when characterizing the pathologic prognostic stage of a breast tumor. We present our semantic technology approach to support cancer characterization and demonstrate it in our end-to-end prototype system, which collects the newest breast cancer staging criteria from authoritative oncology manuals to construct an ontology for breast cancer. Using a tool we developed that utilizes this ontology, physician-facing applications can quickly stage a new patient, supporting the identification of risks, treatment options, and monitoring plans based on authoritative and best-practice guidelines. Physicians can also re-stage existing patients or patient populations, allowing them to find patients whose stage has changed in a given cohort. Using our proposed mechanism, grounded in semantic technologies for ingesting new data from staging manuals as guidelines emerge, we have created an enriched cancer staging ontology that integrates relevant data from several sources with very little human intervention.


Feature-based reformulation of entities in triple pattern queries

arXiv.org Artificial Intelligence

Knowledge graphs relate uniquely identifiable entities to other entities or literal values by means of relationships, thus enabling semantically rich querying over the stored data. The semantics of such queries are typically crisp, resulting in crisp answers. Query log statistics show that a majority of the queries issued to knowledge graphs are entity-centric. When a user needs additional answers, the state of the art in assisting users is to rewrite the original query, resulting in a set of approximations. Several strategies have been proposed in the past to address this; they typically move up the taxonomy to relax a specific element to a more generic one. Entities, however, do not have a taxonomy and end up being over-generalized. To address this issue, in this paper we propose an entity-centric reformulation strategy that utilizes schema information and entity features present in the graph to suggest rewrites. Once the features are identified, the entity in question is reformulated as a set of features. Since entities can have a large number of features, we introduce strategies that select the top-k most relevant and informative features and augment them to the original query to create a valid reformulation. We then evaluate our approach by showing that our reformulation strategy produces results that are more informative when compared with the state of the art.
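The feature-based rewriting described above can be sketched as follows. This is a minimal illustration assuming a toy graph and a simple rarity-based informativeness score; `KG` and `reformulate` are names of our own choosing, and the scoring is not the paper's actual ranking function.

```python
from collections import Counter

# Toy knowledge graph: entity -> set of (predicate, object) features.
# Contents and the rarity-based score below are illustrative assumptions.
KG = {
    "ex:Berlin":  {("ex:type", "ex:City"),
                   ("ex:country", "ex:Germany"),
                   ("ex:capitalOf", "ex:Germany")},
    "ex:Munich":  {("ex:type", "ex:City"), ("ex:country", "ex:Germany")},
    "ex:Hamburg": {("ex:type", "ex:City"), ("ex:country", "ex:Germany")},
}

def reformulate(entity, k=2):
    """Replace an entity in a triple pattern with its top-k features.

    Features shared by fewer entities discriminate better, so features
    are ranked by how many entities carry them, rarest first, and the
    k most informative ones are kept.
    """
    freq = Counter(f for feats in KG.values() for f in feats)
    ranked = sorted(KG[entity], key=lambda f: freq[f])
    # Each selected feature becomes a triple pattern over a fresh variable,
    # relaxing identity with the entity to similarity on its features.
    return [f"?e {p} {o} ." for p, o in ranked[:k]]

patterns = reformulate("ex:Berlin")
```

A query asking for `ex:Berlin` would thus be relaxed to ask for any `?e` matching Berlin's most distinctive features, here led by `ex:capitalOf ex:Germany`, so that near-matches also qualify as answers.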


Why the Data Train Needs Semantic Rails

AI Magazine

While catchphrases such as big data, smart data, data-intensive science, or smart dust highlight different aspects, they share a common theme: namely, a shift towards a data-centric perspective in which the synthesis and analysis of data at an ever-increasing spatial, temporal, and thematic resolution promises new insights, while, at the same time, reducing the need for strong domain theories as starting points. In terms of the envisioned methodologies, those catchphrases tend to emphasize the role of predictive analytics, that is, statistical techniques including data mining and machine learning, as well as supercomputing. Interestingly, however, while this perspective takes the availability of data as a given, it does not answer the question of how one would discover the required data in today's chaotic information universe, how one would understand which datasets can be meaningfully integrated, and how to communicate the results to humans and machines alike. The semantic web addresses these questions. In the following, we argue why the data train needs semantic rails. We point out that making sense of data and gaining new insights works best if inductive and deductive techniques go hand in hand instead of competing over the prerogative of interpretation.


Semantics for Big Data

AI Magazine

We can easily understand linked data as being a part of the greater big data landscape, as many of the challenges are the same (Hitzler and Janowicz 2013). The linking component of linked data, however, puts an additional focus on the integration and conflation of data across multiple sources.


Reports on the 2013 AAAI Fall Symposium Series

AI Magazine

The Association for the Advancement of Artificial Intelligence was pleased to present the 2013 Fall Symposium Series, held Friday through Sunday, November 15–17, at the Westin Arlington Gateway in Arlington, Virginia, near Washington, DC, USA. The titles of the five symposia were as follows: Discovery Informatics: AI Takes a Science-Centered View on Big Data (FS-13-01); How Should Intelligence Be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or --? The highlights of each symposium are presented in this report.


Reports on the 2013 AAAI Fall Symposium Series

AI Magazine

Rinke Hoekstra (VU University Amsterdam) presented linked open data tools to discover connections within established scientific data sets. Louiqa Rashid (University of Maryland) presented work on similarity metrics linking together drugs, genes, and diseases. Kyle Ambert (Intel) presented Finna, a text-mining system to identify passages of interest containing descriptions of neuronal ... from transferring and adapting semantic web technologies to the big data quest. Finally, in the Social Networks and Social Contagion symposium, a community of researchers explored topics such as social contagion, game theory, network modeling, network-based inference, human data elicitation, and web analytics. Highlights of the symposia are contained in this report.


Knowledge Is Power: A View from the Semantic Web

AI Magazine

The emerging Semantic Web focuses on bringing knowledge representationlike capabilities to Web applications in a Web-friendly way. The ability to put knowledge on the Web, share it, and reuse it through standard Web mechanisms provides new and interesting challenges to artificial intelligence. In this paper, I explore the similarities and differences between the Semantic Web and traditional AI knowledge representation systems, and see if I can validate the analogy "The Semantic Web is to KR as the Web is to hypertext."

