If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
We propose a new sketch recognition framework that combines a rich representation of low-level visual appearance with a graphical model for capturing high-level relationships between symbols. This joint model of appearance and context makes our framework less sensitive to noise and drawing variations, improving accuracy and robustness. The result is a recognizer that is better able to handle the wide range of drawing styles found in messy freehand sketches. We evaluate our work on two real-world domains, molecular diagrams and electrical circuit diagrams, and show that our combined approach significantly improves recognition performance. Published at the Neural Information Processing Systems Conference.
During its Build conference today, Microsoft introduced Project Ink Analysis, which does exactly what you'd think: make sense of digital writing. The toolkit both understands words and provides features typically found in text editors, like alignment and bulleting. While Project Ink Analysis is still in its experimental stages, it could obviously help anyone who habitually writes with a stylus on digital platforms. It might not garner deep insights into your personality like IBM Watson, but its simple beautification tools can clean up chicken scratch and even translate from 67 languages. It could be plenty useful for all the Surface Pen users out there who want their scrawling handwriting to look just a bit more professional (and legible).
Forbus, Kenneth D. (Northwestern University) | Garnier, Bridget (University of Wisconsin-Madison) | Tikoff, Basil (University of Wisconsin-Madison) | Marko, Wayne (Northwestern University) | Usher, Madeline (Northwestern University) | McLure, Matthew (Northwestern University)
Sketching can be a valuable tool for science education, but it is currently underutilized. Sketch worksheets were developed to help change this, by using AI technology to give students immediate feedback and to give instructors assistance in grading. Sketch worksheets use visual representations automatically computed by CogSketch, which are combined with conceptual information from the OpenCyc ontology. Feedback is provided to students by comparing an instructor’s sketch to a student’s sketch, using the Structure-Mapping Engine. This paper describes our experiences in deploying sketch worksheets in two types of classes: Geoscience and AI. Sketch worksheets for introductory geoscience classes were developed by geoscientists at University of Wisconsin-Madison, authored using CogSketch and used in classes at both Wisconsin and Northwestern University. Sketch worksheets were also developed and deployed for a knowledge representation and reasoning course at Northwestern. Our experience indicates that sketch worksheets can provide helpful on-the-spot feedback to students, and significantly improve grading efficiency, to the point where sketching assignments can be more practical to use broadly in STEM education.
Useful feedback makes use of models of domain-specific knowledge, especially models that are commonly held by potential students. To empirically determine what these models are, student data can be clustered to reveal common misconceptions or common problem-solving strategies. This article describes how analogical retrieval and generalization can be used to cluster automatically analyzed hand-drawn sketches incorporating both spatial and conceptual information. We use this approach to cluster a corpus of hand-drawn student sketches to discover common answers. Common answer clusters can be used for the design of targeted feedback and for assessment.
Inking and navigating with a digital pen or stylus within Windows 10 will become easier within the Fall Creators Update, for those of you who use a tablet as, you know, a tablet. The improvements include two major elements: navigation, including using the pen or stylus to select and scroll text; and better interpretation of inked words as text, via a more accurate and responsive handwriting panel. Combined, it's a love letter of sorts to Surface and other tablet users who use the pen to input data. It's amazing how well Windows can interpret your chicken-scratch into text that can be edited in Word and elsewhere. General Windows 10 users won't be able to take advantage of the new features until the launch of the Fall Creators Update on Oct. 17.
The computational model was built on CogSketch, a sketch-understanding system developed in Forbus' laboratory at Northwestern University. Sketching is a natural activity that people do while thinking or trying to communicate an idea, especially when spatial content is involved. Sketching is also heavily used in engineering and geoscience. CogSketch is used to model spatial understanding and reasoning, making it suitable for research based on sketches, but also for testing against a standardized visual intelligence test such as the Raven's Progressive Matrices test.
Northwestern University Application: CogSketch. People sketch to work through ideas and to communicate, especially when dealing with spatial matters. Software that could participate in sketching could revolutionize spatial education, provide a new kind of instrument for cognitive science research, and be an important scientific advance in its own right. The goal of the CogSketch project is to do the research and development needed to create a sketch understanding system that can be used as an instrument for cognitive science research and as a platform for educational software. This system, called CogSketch, is being developed by the Spatial Intelligence and Learning Center (SILC), a National Science Foundation Science of Learning Center. The vision is that, in ten years or less, sketch-based educational software can be as widely available to students as graphing calculators are today.
The Gromov-Hausdorff distance provides a metric on the set of isometry classes of compact metric spaces. Unfortunately, computing this metric directly is believed to be computationally intractable. Motivated by applications in shape matching and point-cloud comparison, we study a semidefinite programming relaxation of the Gromov-Hausdorff metric. This relaxation can be computed in polynomial time, and somewhat surprisingly is itself a pseudometric. We describe the induced topology on the set of compact metric spaces. Finally, we demonstrate the numerical performance of various algorithms for computing the relaxed distance and apply these algorithms to several relevant data sets. In particular we propose a greedy algorithm for finding the best correspondence between finite metric spaces that can handle hundreds of points.
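As a rough illustration of the objects involved (not the paper's actual algorithm), the Gromov-Hausdorff distance between finite metric spaces is half the smallest achievable distortion over correspondences, and a naive greedy matcher can be sketched as follows. The `greedy_correspondence` heuristic below is a hypothetical simplification: it matches each point of X to whichever point of Y least increases the running distortion, and it only covers X, whereas a true correspondence must also cover Y.

```python
import itertools

def distortion(dX, dY, corr):
    """Distortion of a correspondence: max |dX(x,x') - dY(y,y')|
    over all pairs of matched points. GH(X, Y) is half the infimum
    of this quantity over all (surjective-both-ways) correspondences."""
    return max(
        abs(dX[x][xp] - dY[y][yp])
        for (x, y), (xp, yp) in itertools.combinations_with_replacement(corr, 2)
    )

def greedy_correspondence(dX, dY):
    """Hypothetical greedy heuristic: match each point of X to the
    point of Y that least increases the running distortion.
    Note this covers X only, so it gives an upper bound at best."""
    corr = []
    for x in range(len(dX)):
        best_y = min(
            range(len(dY)),
            key=lambda y: distortion(dX, dY, corr + [(x, y)]),
        )
        corr.append((x, best_y))
    return corr
```

On two identical metric spaces the greedy matcher recovers a zero-distortion correspondence; on genuinely different spaces it only bounds the distortion from above, which is why the paper instead relaxes the problem to a polynomial-time semidefinite program.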
Automatically solving geometry questions is a long-standing AI problem. A geometry question typically includes a textual description accompanied by a diagram. The first step in solving geometry questions is diagram understanding, which consists of identifying visual elements in the diagram, their locations, their geometric properties, and aligning them to corresponding textual descriptions. In this paper, we present a method for diagram understanding that identifies visual elements in a diagram while maximizing agreement between textual and visual data. We show that the method's objective function is submodular; thus we are able to introduce an efficient method for diagram understanding that is close to optimal. To empirically evaluate our method, we compile a new dataset of geometry questions (textual descriptions and diagrams) and compare with baselines that utilize standard vision techniques. Our experimental evaluation shows an F1 boost of more than 17% in identifying visual elements and 25% in aligning visual elements with their textual descriptions.
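The submodularity claim matters because monotone submodular objectives admit a simple greedy method with a (1 - 1/e) approximation guarantee (Nemhauser et al.), which is presumably what makes the diagram-understanding method "close to optimal." A minimal sketch of that generic greedy scheme, on a toy coverage objective (the element names and coverage sets are invented for illustration; the paper's real objective measures agreement between visual and textual data):

```python
def greedy_submodular(ground, f, k):
    """Greedy maximization of a monotone submodular set function f
    under a cardinality constraint k: repeatedly add the element
    with the largest marginal gain."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S),
                   default=None)
        if best is None or f(S | {best}) - f(S) <= 0:
            break  # no element improves the objective
        S.add(best)
    return S

# Toy objective: each candidate visual element "covers" some textual
# mentions; f(S) counts distinct mentions covered (hypothetical data).
covers = {
    "circle": {"O", "circle O"},
    "line_AB": {"AB", "segment AB"},
    "line_AC": {"AC"},
    "dup_circle": {"O"},  # redundant hypothesis, gain becomes 0
}
f = lambda S: len(set().union(*(covers[e] for e in S)) if S else set())
chosen = greedy_submodular(covers.keys(), f, k=3)
```

Here the greedy pass picks the three hypotheses that cover all five mentions and skips the redundant `dup_circle`, illustrating why diminishing returns make the greedy choice safe.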
In this paper, we target the problem of sketch recognition. We systematically study how to incorporate users' corrections and edits into isolated and full sketch recognition. This is a natural and necessary interaction in real systems such as Visio, where very similar shapes exist. First, a novel algorithm is proposed to mine prior shape knowledge for three editing modes. Second, to differentiate visually similar shapes, a novel symbol recognition algorithm is introduced that leverages the learned shape knowledge. Then, a novel editing detection algorithm is proposed to facilitate symbol recognition. Furthermore, both the symbol recognizer and the editing detector are systematically incorporated into full sketch recognition. Finally, based on the proposed algorithms, a real-time sketch recognition system is built to recognize hand-drawn flowcharts and diagrams with flexible interactions. Extensive experiments show the effectiveness of the proposed algorithms.