Turn-Taking and Coordination in Human-Machine Interaction

AI Magazine

This issue of AI Magazine brings together a collection of articles on challenges, mechanisms, and research progress in turn-taking and coordination between humans and machines. The contributing authors work in the interrelated fields of spoken dialog systems, intelligent virtual agents, human-computer interaction, human-robot interaction, and semiautonomous collaborative systems, and they explore core concepts in coordinating speech and actions with virtual agents, robots, and other autonomous systems. Several of the contributors participated in the AAAI Spring Symposium on Turn-Taking and Coordination in Human-Machine Interaction, held in March 2015, and several articles in this issue extend work presented at that symposium. The articles in the collection address key modeling, methodological, and computational challenges in achieving effective coordination with machines; propose solutions that overcome these challenges under sensory, cognitive, and resource constraints; and illustrate how such solutions can facilitate coordination across diverse and challenging domains.


Simple 'smart' glass reveals the future of artificial vision

#artificialintelligence

Zongfu Yu, Ang Chen, and Efram Khoram developed the concept for a "smart" piece of glass that recognizes images without any external power or circuits. The sophisticated technology that powers face recognition in many modern smartphones could someday receive a high-tech upgrade that sounds, and looks, surprisingly low-tech. This window to the future is none other than a piece of glass. University of Wisconsin–Madison engineers have devised a method to create pieces of "smart" glass that can recognize images without requiring any sensors, circuits, or power sources. "We're using optics to condense the normal setup of cameras, sensors and deep neural networks into a single piece of thin glass," says UW-Madison electrical and computer engineering professor Zongfu Yu.


Machine learning speeds up the development of biofuel production process - College of Engineering - University of Wisconsin-Madison

#artificialintelligence

Someday soon, oil refineries may trade in crude oil for agricultural waste like corn stalks or renewable plants like switchgrass in order to produce sustainable biofuels. But we're not quite there yet; converting those products into usable chemicals on a large scale requires efficient catalytic reactions, which researchers are still hunting for. Recently, Conway Assistant Professor Reid Van Lehn and his colleagues in the Department of Chemical and Biological Engineering found a way to speed up the search for suitable reaction conditions using machine learning, which may help the era of biofuels arrive a little sooner. One way to convert lignocellulosic biomass into usable fuels is via acid-catalyzed reactions, which usually take place in water. It's often a slow process, but research has shown that the addition of certain organic cosolvents can increase reaction rates 100-fold or more.
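The article does not detail the model, but the workflow it describes (train a model on known reactions, then screen candidate solvent conditions for fast predicted rates) can be sketched in a few lines. Everything below is a hypothetical illustration: the descriptors, the synthetic data, and the random-forest choice are assumptions, not Van Lehn's actual method or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical descriptors of a reaction system: organic cosolvent mass
# fraction, temperature (K), and acid concentration (M).
X = np.column_stack([
    rng.uniform(0.0, 0.9, n),
    rng.uniform(350, 450, n),
    rng.uniform(0.005, 0.1, n),
])
# Synthetic log reaction rate with a cosolvent-driven boost, loosely
# mimicking the reported 100-fold speedups at favorable conditions.
log_rate = (2.0 * X[:, 0] + 0.01 * (X[:, 1] - 350)
            + 3.0 * X[:, 2] + rng.normal(0, 0.1, n))

# Fit a surrogate model on "known" reactions, check held-out accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, log_rate, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))

# Screen a grid of candidate conditions and pick the fastest predicted one,
# instead of testing every condition experimentally.
grid = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(0, 0.9, 10), np.linspace(350, 450, 10), [0.05])])
best = grid[np.argmax(model.predict(grid))]
print("predicted-best conditions (fraction, K, M):", best)
```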


Optimal Confidence Regions for the Multinomial Parameter

arXiv.org Machine Learning

A confidence region for p is a subset of the k-simplex that depends on the empirical distribution $\hat{p}$ and includes the unknown true distribution p with a specified confidence. More precisely, $C_\delta(\hat{p}) \subseteq \Delta_k$ is a confidence region at confidence level $1-\delta$ if

$$P_p\big(p \in C_\delta(\hat{p})\big) \geq 1-\delta \qquad (1)$$

holds for all $p \in \Delta_k$, where $\Delta_k$ denotes the k-simplex and $P_p(\cdot)$ is the multinomial probability measure under p. Construction of tight confidence regions for categorical distributions is a long-standing problem dating back nearly a hundred years [1]. The goal is to construct regions that are as small as possible but still satisfy (1). Broadly speaking, approaches for constructing confidence regions can be classified into (i) approximate methods that fail to guarantee coverage (i.e., (1) fails to hold for all p) and (ii) methods that succeed in guaranteeing coverage but have excessive volume, for example, approaches based on Sanov or Hoeffding-Bernstein type inequalities. Recent approaches based on combinations of methods [2] have shown improvement through numerical experiment, but do not provide theoretical guarantees on the volume of the confidence regions. To the best of our knowledge, construction of confidence regions for the multinomial parameter that have minimal volume and guarantee coverage is an open problem. One construction that has shown promise empirically is the level-set approach of [3]. The level-set confidence regions are similar to 'exact' and Clopper-Pearson regions [1] in that they involve inverting tail probabilities. The authors are with the Electrical & Computer Engineering Department at the University of Wisconsin-Madison.
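To make definition (1) concrete, here is a minimal sketch of one of the coverage-guaranteeing (but loose) baselines the abstract mentions: a Sanov-type region. By the method-of-types bound $P_p(\mathrm{KL}(\hat{p}\,\|\,p) \geq \varepsilon) \leq (n+1)^k e^{-n\varepsilon}$, the set $\{q : \mathrm{KL}(\hat{p}\,\|\,q) \leq (k\log(n+1) + \log(1/\delta))/n\}$ satisfies (1). This is not the paper's level-set construction; it is the simple baseline whose excess volume motivates the paper.

```python
import numpy as np

def kl_divergence(p_hat, q):
    """KL(p_hat || q), with the convention 0 * log(0/q) = 0.

    Returns inf if q lacks mass somewhere p_hat has mass."""
    mask = p_hat > 0
    return float(np.sum(p_hat[mask] * np.log(p_hat[mask] / q[mask])))

def sanov_region_contains(counts, q, delta=0.05):
    """Test whether distribution q lies in a Sanov-type confidence region.

    The region {q : KL(p_hat || q) <= (k*log(n+1) + log(1/delta)) / n}
    covers the true multinomial parameter with probability >= 1 - delta.
    """
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    p_hat = counts / n
    threshold = (k * np.log(n + 1) + np.log(1 / delta)) / n
    return kl_divergence(p_hat, q) <= threshold

# Example: 100 draws over 3 categories.
counts = np.array([50, 30, 20])
print(sanov_region_contains(counts, np.array([0.5, 0.3, 0.2])))  # True
print(sanov_region_contains(counts, np.array([0.1, 0.1, 0.8])))  # False
```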


Research into machine-learning specialty finds new home at USC Viterbi

#artificialintelligence

With a new $1.5 million grant, the growing field of transfer learning has come to the Ming Hsieh Department of Electrical and Computer Engineering at the USC Viterbi School of Engineering. The grant was awarded to three professors, Salman Avestimehr, Antonio Ortega, and Mahdi Soltanolkotabi, who will work with Ilias Diakonikolas at the University of Wisconsin–Madison to address the theoretical foundations of this field. Modern machine learning models are breaking new ground in data science, achieving unprecedented performance on tasks like classifying images into one thousand different categories. This is achieved by training gigantic neural networks. "Neural networks work really well because they can be trained on huge amounts of pre-existing data that has previously been tagged and collected," said Avestimehr, the principal investigator of the project.
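For readers new to the term: transfer learning reuses knowledge from a model trained on one large dataset to jump-start learning on a related task with far less labeled data. Below is a minimal sketch of the most common recipe, fine-tuning a pretrained network; the ResNet-18 backbone, the 10-class target task, and the dummy data are illustrative assumptions, not the methods funded by this grant.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet and reuse it as a feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a hypothetical 10-class target task.
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy data standing in for the
# (much smaller) labeled dataset of the new task.
x = torch.randn(8, 3, 224, 224)   # batch of 8 RGB images
y = torch.randint(0, 10, (8,))    # random labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```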


IDSS Distinguished Speaker Seminar with Rob Nowak (University of Wisconsin-Madison)

#artificialintelligence

Title: Theoretical Foundations of Active Machine Learning

Abstract: The field of Machine Learning (ML) has advanced considerably in recent years, but mostly in well-defined domains using huge amounts of human-labeled training data. Machines can recognize objects in images and translate text, but they must be trained with more images and text than a person can see in nearly a lifetime. The computational complexity of training has been offset by recent technological advances, but the cost of training data is measured in terms of the human effort in labeling it. People are not getting faster or cheaper, so generating labeled training datasets has become a major bottleneck in ML pipelines. Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling, so that human time is not wasted labeling irrelevant, redundant, or trivial examples.
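To make "adaptively select the most informative examples" concrete, here is a minimal sketch of one classic active-learning heuristic, uncertainty sampling, in which the learner repeatedly queries the pool example its current model is least sure about. The synthetic data and logistic-regression learner are illustrative assumptions; the talk concerns the theoretical foundations of active ML broadly, not this particular recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))              # pool of unlabeled examples
true_w = rng.normal(size=5)
y_true = (X @ true_w > 0).astype(int)       # oracle labels, hidden from learner

# Seed set: a few labeled examples from each class.
labeled = list(np.where(y_true == 0)[0][:5]) + list(np.where(y_true == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for _ in range(20):                         # 20 label queries
    model.fit(X[labeled], y_true[labeled])
    # Query the pool point the current model is least certain about,
    # i.e., predicted probability closest to 0.5.
    probs = model.predict_proba(X[unlabeled])[:, 1]
    pick = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(pick)                    # "ask the human" for this label
    unlabeled.remove(pick)

model.fit(X[labeled], y_true[labeled])
print("pool accuracy after 30 labels:", model.score(X, y_true))
```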


Wisconsin Quantum Institute Awarded Grant to Advance Quantum Computing Machine Learning

#artificialintelligence

The U.S. Department of Energy recently announced the funding of another set of quantum science-driven research proposals, including that of Sau Lan Wu, Enrico Fermi Professor of Physics and Vilas Professor at the University of Wisconsin–Madison. With the funding, Wu and her collaborators seek to tap into the power of quantum computing to analyze the wealth of data generated by high energy physics experiments. The title of Wu's DOE-approved project is "Application of Quantum Machine Learning to High Energy Physics Analysis at LHC using IBM Quantum Computer Simulators and IBM Quantum Computer Hardware." Wu, a member of the Chicago Quantum Exchange (CQE) and the Wisconsin Quantum Institute at UW–Madison who conducts her research at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland, was one of only six university-based investigators (those outside of national laboratories) to be awarded the DOE quantum funds for particle physicists. "The ambitious HL-LHC program will require enormous computing resources in the next two decades," says Wu. "A burning question is whether quantum computers can solve the ever-growing demand for computing resources, and our goal here is to explore and to demonstrate that quantum computing can be the new paradigm."


World first after researchers create an AI made of glass – Fanatical Futurist by International Keynote Speaker Matthew Griffin

#artificialintelligence

Recently, I talked about a team of researchers in the US that had managed to 3D print an Artificial Intelligence (AI) neural network, and another team that had made a complex neural network from DNA. But now, in another new development in the field of what's known as Diffractive Neural Networks, a team of researchers have created the smartest piece of glass in the known universe. Zongfu Yu at the University of Wisconsin–Madison and his colleagues have created a glass-based AI that uses light to recognise and distinguish between images. What's more, the glass AI doesn't need to be powered to operate.


Machine learning and its radical application to severe weather prediction

#artificialintelligence

In the last decade, artificial intelligence ("AI") applications have exploded across various research sectors, including computer vision, communications and medicine. Now, the rapidly developing technology is making its mark in weather prediction. The fields of atmospheric science and satellite meteorology are ideally suited for the task, offering a rich training ground capable of feeding an AI system's endless appetite for data. Anthony Wimmers is a scientist with the University of Wisconsin–Madison Cooperative Institute for Meteorological Satellite Studies (CIMSS) who has been working with AI systems for the last three years. His latest research investigates how an AI model can help improve short-term forecasting (or "nowcasting") of hurricanes.

