Careers at Drexel - Human Resources

#artificialintelligence

Drexel is one of Philadelphia's top 10 private employers, a comprehensive global research university, and a major engine for economic development in the region. With over 24,000 students, Drexel is one of America's 15 largest private universities. Drexel has committed to being the nation's most civically engaged university, with community partnerships integrated into every aspect of service and academics. A postdoctoral position is available in the TeX-Base Lab of Dr. Weber at the College of Computing and Informatics at Drexel University. The successful candidate will conduct fundamental and applied research in artificial intelligence (AI) agents using natural language understanding models, explainable AI, and case-based reasoning.


Global Big Data Conference

#artificialintelligence

With over 2.5 billion consumer accounts, Mastercard connects nearly every financial institution in the world and generates almost 75 billion transactions a year. As a result, the company has built, over decades, a data warehouse that holds "one of the best datasets about commerce really anywhere in the world," says Ed McLaughlin, president of operations and technology at Mastercard. And the company is putting that data to good use. The fastest-growing part of Mastercard's business today is the services it puts around commerce, says McLaughlin. IDG's Derek Hulitzky sat down with McLaughlin and Mark Kwapiszeski, president of shared components and security solutions at Mastercard, to discuss how the company turns anonymized and aggregated data into valuable business insights, and to hear their advice for getting the best results out of machine learning models.


Edited Nearest Neighbors ENN

#artificialintelligence

Hi there, is everything cool? The Edited Nearest Neighbors (ENN) rule for undersampling uses a K=3 nearest neighbor classifier to find misclassified data points, which are then removed before a K=1 classification rule is applied. This approach to resampling and classification was first proposed by Dennis Wilson in his 1972 paper titled "Asymptotic Properties of Nearest Neighbor Rules Using Edited Data." When used as an undersampling procedure, the rule can be applied to each example in the majority class, allowing those examples that are misclassified as belonging to the minority class to be removed and those correctly classified to remain. Let's see how we can apply ENN. And just like CNN (Condensed Nearest Neighbors), ENN gives the best results when combined with an oversampling method like SMOTE.
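As a minimal sketch, here is how this might look with the imbalanced-learn library, whose EditedNearestNeighbours and SMOTEENN classes implement ENN and the SMOTE+ENN combination; the synthetic dataset below is only for illustration.

from collections import Counter

from sklearn.datasets import make_classification
from imblearn.under_sampling import EditedNearestNeighbours
from imblearn.combine import SMOTEENN

# Imbalanced toy dataset: roughly 99% majority / 1% minority.
X, y = make_classification(n_samples=5000, weights=[0.99], flip_y=0.02,
                           random_state=42)
print("original:      ", Counter(y))

# ENN with K=3: drop majority examples misclassified by their 3 neighbors.
enn = EditedNearestNeighbours(n_neighbors=3)
X_enn, y_enn = enn.fit_resample(X, y)
print("after ENN:     ", Counter(y_enn))

# ENN combined with SMOTE oversampling, as suggested above.
smote_enn = SMOTEENN(random_state=42)
X_res, y_res = smote_enn.fit_resample(X, y)
print("after SMOTEENN:", Counter(y_res))

On a skewed dataset like this, ENN alone trims noisy majority examples near the class boundary, while SMOTE+ENN first oversamples the minority class and then cleans the result.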


On Constructivism in AI -- Past, Present and Future

#artificialintelligence

Constructivism is a theory of knowledge and learning that can be applied to artificial intelligence. It argues that learning, knowledge, and understanding are constructive processes that build on prior knowledge: rather than forming a single conception of the world, we layer new pieces of information on top of our existing knowledge. When it comes to constructivism in AI, the belief is that learning or knowledge is created by constructing internal models of the world that are constantly adjusted to fit new experiences. Constructivism in AI holds that machine intelligence is best realized by programming systems to behave like infants, starting with instinctive reflexes and gradually learning how to interact with their surroundings.



Applying Regression Conformal Prediction with Nearest Neighbors to time series data

arXiv.org Machine Learning

In this paper, we apply conformal prediction to time series data. Conformal prediction is a method that produces predictive regions given a confidence level. The output regions are always valid under the exchangeability assumption. However, this assumption does not hold for time series data because there is a link among past, current, and future observations. Consequently, the challenge of applying conformal predictors to time series lies in the fact that observations of a time series are dependent and therefore do not meet the exchangeability assumption. This paper aims to present a way of constructing reliable prediction intervals by using conformal predictors in the context of time series. We use the nearest neighbors method based on the fast parameters tuning technique in the weighted nearest neighbors (FPTO-WNN) approach as the underlying algorithm. Data analysis demonstrates the effectiveness of the proposed approach.
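For context, a minimal split-conformal sketch in the standard exchangeable regression setting looks as follows; a plain k-NN regressor stands in for the paper's FPTO-WNN model, and the time-series adaptation itself is not reproduced here.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=500)

# Split into a proper training set and a calibration set.
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

model = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# For confidence level 1 - alpha, take the finite-sample-corrected quantile.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

# Prediction interval for a new point: [y_hat - q, y_hat + q].
x_new = np.array([[3.0]])
y_hat = model.predict(x_new)[0]
print(f"90% interval: [{y_hat - q:.3f}, {y_hat + q:.3f}]")

Under exchangeability, the interval covers the true value with probability at least 1 - alpha; the paper's contribution is making such guarantees usable when that assumption fails.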


Terminal Embeddings in Sublinear Time

arXiv.org Machine Learning

Recently (Elkin, Filtser, Neiman 2017) introduced the concept of a {\it terminal embedding} from one metric space $(X,d_X)$ to another $(Y,d_Y)$ with a set of designated terminals $T\subset X$. Such an embedding $f$ is said to have distortion $\rho\ge 1$ if $\rho$ is the smallest value such that there exists a constant $C>0$ satisfying \begin{equation*} \forall x\in T\ \forall q\in X,\ C d_X(x, q) \le d_Y(f(x), f(q)) \le C \rho d_X(x, q) . \end{equation*} In the case that $X,Y$ are both Euclidean metrics with $Y$ being $m$-dimensional, recently (Narayanan, Nelson 2019), following work of (Mahabadi, Makarychev, Makarychev, Razenshteyn 2018), showed that distortion $1+\epsilon$ is achievable via such a terminal embedding with $m = O(\epsilon^{-2}\log n)$ for $n := |T|$. This generalizes the Johnson-Lindenstrauss lemma, which only preserves distances within $T$ and not to $T$ from the rest of space. The downside is that evaluating the embedding on some $q\in \mathbb{R}^d$ required solving a semidefinite program with $\Theta(n)$ constraints in $m$ variables and thus required some superlinear $\mathrm{poly}(n)$ runtime. Our main contribution in this work is to give a new data structure for computing terminal embeddings. We show how to pre-process $T$ to obtain an almost linear-space data structure that supports computing the terminal embedding image of any $q\in\mathbb{R}^d$ in sublinear time $n^{1-\Theta(\epsilon^2)+o(1)} + dn^{o(1)}$. To accomplish this, we leverage tools developed in the context of approximate nearest neighbor search.
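To make the Johnson-Lindenstrauss baseline concrete, here is a small sketch of a Gaussian random projection that preserves pairwise distances within T; it is not the paper's terminal embedding (which also preserves distances from arbitrary queries to T) nor its sublinear-time data structure, and the constant in the target dimension is an illustrative choice.

from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 200, 1000, 0.5                # |T| = n terminals in R^d
m = int(np.ceil(8 * np.log(n) / eps**2))  # target dimension O(eps^-2 log n)

T = rng.normal(size=(n, d))
A = rng.normal(size=(m, d)) / np.sqrt(m)  # JL map f(x) = Ax

emb = T @ A.T

# Check the distortion of pairwise distances on a subsample of pairs in T.
ratios = []
for i, j in combinations(range(0, n, 20), 2):
    orig = np.linalg.norm(T[i] - T[j])
    ratios.append(np.linalg.norm(emb[i] - emb[j]) / orig)
print(f"distance ratios in [{min(ratios):.3f}, {max(ratios):.3f}]")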


TCube: Domain-Agnostic Neural Time-series Narration

arXiv.org Artificial Intelligence

The task of generating rich and fluent narratives that aptly describe the characteristics, trends, and anomalies of time-series data is invaluable to the sciences (geology, meteorology, epidemiology) and finance (trades, stocks, or sales and inventory). The efforts for time-series narration hitherto are domain-specific and use predefined templates that offer consistency but lead to mechanical narratives. We present TCube (Time-series-to-text), a domain-agnostic neural framework for time-series narration, that couples the representation of essential time-series elements in the form of a dense knowledge graph and the translation of said knowledge graph into rich and fluent narratives through the transfer-learning capabilities of PLMs (Pre-trained Language Models). TCube's design primarily addresses the challenge that lies in building a neural framework in the complete paucity of annotated training data for time-series. The design incorporates knowledge graphs as an intermediary for the representation of essential time-series elements which can be linearized for textual translation. To the best of our knowledge, TCube is the first investigation of the use of neural strategies for time-series narration. Through extensive evaluations, we show that TCube can improve the lexical diversity of the generated narratives by up to 65.38% while still maintaining grammatical integrity. The practicality and deployability of TCube are further validated through an expert review (n=21) in which 76.2% of participating experts wary of auto-generated narratives favored TCube as a deployable system for time-series narration due to its richer narratives. Our code-base, models, and datasets, with detailed instructions for reproducibility, are publicly hosted at https://github.com/Mandar-Sharma/TCube.
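As a rough, hypothetical sketch of the linearize-then-generate idea (not TCube's actual pipeline): time-series facts encoded as triples can be flattened into text and handed to a PLM such as T5. The triples and prompt below are invented, and an untuned checkpoint will not match TCube's narrative quality; this only illustrates the shape of the approach.

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Invented triples standing in for a dense knowledge graph of
# time-series elements (peaks, trends, anomalies).
triples = [
    ("unemployment_rate", "peaked_at", "14.7 percent in April 2020"),
    ("unemployment_rate", "trend_since_peak", "steadily declining"),
]

# Linearize the knowledge graph into a flat textual sequence.
linearized = " ; ".join(f"{s} | {p} | {o}" for s, p, o in triples)
prompt = f"Describe the following time series facts in fluent prose: {linearized}"

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))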


A guided journey through non-interactive automatic story generation

arXiv.org Artificial Intelligence

We present a literature survey on non-interactive computational story generation. The article starts with the presentation of requirements for creative systems, three types of models of creativity (computational, socio-cultural, and individual), and models of human creative writing. It then reviews each class of story generation approach according to the technology used: story schemas, analogy, rules, planning, evolutionary algorithms, implicit knowledge learning, and explicit knowledge learning. Before the concluding section, the article analyses the contributions of the reviewed work to improving the quality of the generated stories. This analysis addresses the description of story characters, the use of narrative knowledge (including knowledge about character believability), and the possible lack of more comprehensive or more detailed knowledge or creativity models. Finally, the article presents concluding remarks in the form of suggested research topics that might have a significant impact on the advancement of the state of the art in autonomous non-interactive story generation systems. The article concludes that the autonomous generation and adoption of the main idea to be conveyed, and the autonomous design of the criteria ensuring creativity, are possibly two of the most important topics for future research.


Interactively Generating Explanations for Transformer Language Models

arXiv.org Artificial Intelligence

Transformer language models are state-of-the-art in a multitude of NLP tasks. Despite these successes, their opaqueness remains problematic. Recent methods aiming to provide interpretability and explainability to black-box models primarily focus on post-hoc explanations of (sometimes spurious) input-output correlations. Instead, we emphasize using prototype networks directly incorporated into the model architecture, and hence explain the reasoning process behind the network's decisions. Moreover, while our architecture performs on par with several language models, it enables learning from user interactions. This not only offers a better understanding of language models but also uses human capabilities to incorporate knowledge outside the rigid range of purely data-driven approaches.
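A schematic PyTorch sketch of the prototype idea, in the spirit of the architecture described above, might look like the following; the dimensions, the similarity measure, and the classification head are illustrative assumptions, not the paper's specification.

import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    """Classify via similarity to learnable prototypes rather than a
    direct linear map, so each decision can be traced to prototypes."""

    def __init__(self, hidden_dim=768, n_prototypes=10, n_classes=2):
        super().__init__()
        # Prototype vectors living in the encoder's embedding space.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, hidden_dim))
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, sentence_embedding):
        # Negative distances as similarity scores; these scores are the
        # interpretable intermediate from which explanations are read off.
        sims = -torch.cdist(sentence_embedding, self.prototypes)
        return self.classifier(sims), sims

head = PrototypeHead()
pooled = torch.randn(4, 768)   # stand-in for a transformer's pooled output
logits, proto_scores = head(pooled)
print(logits.shape, proto_scores.shape)   # (4, 2) and (4, 10)

Because the prototypes are parameters in the same space as the encoder output, user feedback can in principle be applied to them directly, which is one way the interactive learning described above could be realized.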