Scalable Signal Reconstruction for a Broad Range of Applications

Communications of the ACM

The signal reconstruction problem (SRP) is an important optimization problem in which the objective is to identify the solution to an underdetermined system of linear equations that is closest to a given prior. It has a substantial number of applications in diverse areas, such as network traffic engineering, medical image reconstruction, acoustics, astronomy, and many more. Unfortunately, most of the common approaches for solving SRP do not scale to large problem sizes. We propose a novel and scalable algorithm for solving this critical problem. Specifically, we make four major contributions. First, we propose a dual formulation of the problem and develop the DIRECT algorithm, which is significantly more efficient than the state of the art. Second, we show how adapting database techniques developed for scalable similarity joins provides a substantial speedup over DIRECT. Third, we describe several practical techniques that allow our algorithm to scale, on a single machine, to settings that are orders of magnitude larger than previously studied. Finally, we use the database techniques of materialization and reuse to extend our results to dynamic settings where the input to the SRP changes. Extensive experiments on real-world and synthetic data confirm the efficiency, effectiveness, and scalability of our proposal.

The database community has been at the forefront of grappling with the challenges of big data and has developed numerous techniques for the scalable processing and analysis of massive datasets. These techniques often originate from solving core data management challenges but then find their way into effectively addressing the needs of big data analytics. We study how database techniques can benefit large-scale signal reconstruction [13], which is of interest to research communities as diverse as computer networks [15] and medical imaging [7], among others. We demonstrate that the scalability of existing solutions can be significantly improved using ideas originally developed for similarity joins [5] and selectivity estimation for set similarity queries [3].

Signal reconstruction problem (SRP): The essence of SRP is to solve a linear system of the form AX = b, where X is a high-dimensional unknown signal (represented by an m-dimensional vector in R^m), b is a low-dimensional projection of X that can be observed in practice (represented by an n-dimensional vector in R^n with n ≪ m), and A is an n × m matrix that captures the linear relationship between X and b.
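To make the formulation concrete, here is a minimal sketch, not the paper's DIRECT algorithm, of the classic least-squares reading of SRP: find the X satisfying AX = b that is closest, in Euclidean distance, to the prior. With a dense random A the small n × n system A Aᵀ is invertible and the projection has a closed form; all names and dimensions below are illustrative.

```python
# Minimal SRP baseline: min ||x - x_prior||_2  subject to  A x = b.
# The correction lives in the row space of A, so
#   x = x_prior + A^T y   with   (A A^T) y = b - A x_prior.
import numpy as np

def reconstruct(A, b, x_prior):
    """Return the solution of A x = b closest (in L2) to x_prior."""
    residual = b - A @ x_prior
    y = np.linalg.solve(A @ A.T, residual)   # small n x n system
    return x_prior + A.T @ y

# Toy example: m = 1000 unknowns observed through n = 50 measurements.
rng = np.random.default_rng(0)
m, n = 1000, 50
A = rng.standard_normal((n, m))
x_true = rng.standard_normal(m)
b = A @ x_true
x_prior = x_true + 0.1 * rng.standard_normal(m)   # noisy prior
x_hat = reconstruct(A, b, x_prior)
print(np.allclose(A @ x_hat, b))                  # constraints satisfied
```

This closed-form projection is the textbook baseline that becomes expensive at large m and n, which is the scalability gap the article's DIRECT algorithm and database-style optimizations target.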


This AI Could Go From 'Art' to Steering a Self-Driving Car

WIRED

You've probably never wondered what a knight made of spaghetti would look like, but here's the answer anyway--courtesy of a clever new artificial intelligence program from OpenAI, a company in San Francisco. The program, DALL-E, released earlier this month, can concoct images of all sorts of weird things that don't exist, like avocado armchairs, robot giraffes, or radishes wearing tutus. OpenAI generated several images, including the spaghetti knight, at WIRED's request. DALL-E is a version of GPT-3, an AI model trained on text scraped from the web that's capable of producing surprisingly coherent text. DALL-E was fed images and accompanying descriptions; in response, it can generate a decent mashup image.


Nine Experts on the Single Biggest Obstacle Facing AI and Algorithms in the Next Five Years

#artificialintelligence

Five years ago, the world of artificial intelligence--and the algorithms it runs on--looked very different. Asking your Google Home to play Adele's chart-topping single wasn't possible yet. IBM Watson was still widely considered a beacon for AI advancement, and DeepMind's AI victory over a human at Go was still fresh. Machine learning engineers were facing earlier versions of today's image classification and speech recognition challenges. And though most tech giants hadn't earmarked corporate funding for ethical AI, the conversation was becoming more mainstream as the impact of algorithms on human lives became clearer.


The trouble with AI: Why we need new laws to stop algorithms ruining our lives

#artificialintelligence

Stronger action needs to be taken to stop technologies like facial recognition from being used to violate fundamental human rights, because the ethics charters currently adopted by businesses and governments won't cut it, warns a new report from digital rights organization Access Now. The past few years have seen "ethical AI" become a hot topic, with requirements such as oversight, safety, privacy, transparency, and accountability being added to codes of conduct for private and public organizations alike. In fact, the proportion of organizations with an AI ethics charter jumped from 5% in 2019 to 45% in 2020. The EU's guidelines for "Trustworthy AI" have informed many of these documents; in addition, the European bloc recently published a white paper on artificial intelligence presenting a so-called "European framework for AI", with ethics at its core. How much real change has happened as a result of those ethical guidelines is up for debate.


New Algorithms Could Reduce Racial Disparities in Health Care

WIRED

Researchers trying to improve healthcare with artificial intelligence usually subject their algorithms to a form of machine med school. Software learns from doctors by digesting thousands or millions of x-rays or other data labeled by expert humans, until it can accurately flag suspect moles or lungs showing signs of Covid-19 by itself. A study published this month took a different approach, training algorithms to read knee x-rays for arthritis by using patients, rather than doctors, as the arbiters of truth for the AI. The results revealed that radiologists may have literal blind spots when it comes to reading Black patients' x-rays. The algorithms trained on patients' reports did a better job than doctors at accounting for the pain experienced by Black patients, apparently by discovering patterns of disease in the images that humans usually overlook.


A closer look at the AI Incident Database of machine learning failures

#artificialintelligence

The failures of artificial intelligence systems have become a recurring theme in technology news: recommendation systems that promote violent content, trending algorithms that amplify fake news. Most complex software systems fail at some point and need to be updated regularly. We have procedures and tools that help us find and fix these errors.


Why it's vital that AI is able to explain the decisions it makes

#artificialintelligence

Currently, our algorithm is able to consider a human plan for solving the Rubik's Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik's Cube that a person can understand. Our team's next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik's Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.


Hybrid chip containing processors and memory runs AI on smart devices

#artificialintelligence

A group of researchers from Stanford has developed a way to combine processors and memory on multiple hybrid chips to allow AI to run on battery-powered devices such as smartphones and tablets. The team believes that all manner of battery-powered electronics would be smarter if they could run AI algorithms. The problem is that efforts to build AI-capable chips for mobile devices have run up against something known as the "memory wall": the separation between the data-processing and memory chips that must work together to meet the computational demands of AI. Computer scientist Subhasish Mitra says the transactions between processors and memory can consume 95 percent of the energy needed to perform machine learning and AI, severely limiting battery life.
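A quick back-of-envelope calculation shows why that 95 percent figure matters. The assumed 90 percent cut in data movement from a processing-in-memory design is a hypothetical number chosen for illustration, not a result reported by the Stanford team.

```python
# Illustrative arithmetic only: how much energy could be saved if most
# processor-to-memory traffic were eliminated, given that data movement
# accounts for ~95% of the energy per the quote above.
data_movement_share = 0.95      # fraction of energy spent moving data
compute_share = 1.0 - data_movement_share
movement_reduction = 0.90       # hypothetical cut from a hybrid chip

energy_after = compute_share + data_movement_share * (1 - movement_reduction)
print(f"Energy per inference drops to about {energy_after:.0%} of the original")
print(f"Roughly a {1 / energy_after:.1f}x improvement in energy per task")
```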


Comparison of Read Mapping and Variant Calling Tools for the Analysis of Plant NGS Data

#artificialintelligence

High-throughput sequencing technologies have developed rapidly in recent years and have become an essential tool in plant sciences. However, the analysis of genomic data remains challenging and relies mostly on the performance of automatic pipelines. Frequently applied pipelines involve the alignment of sequence reads against a reference sequence and the identification of sequence variants. Since most benchmarking studies of bioinformatics tools for this purpose have been conducted on human datasets, there is a lack of benchmarking studies in plant sciences. In this study, we evaluated the performance of 50 different variant calling pipelines, combining five read mappers and ten variant callers, on six real plant datasets of the model organism Arabidopsis thaliana. The resulting sets of variants were evaluated based on various parameters, including sensitivity and specificity. We found that all investigated tools are suitable for the analysis of NGS data in plant research. Across the different performance metrics, BWA-MEM and Novoalign were the best mappers, and GATK returned the best results in the variant calling step.
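For readers unfamiliar with such pipelines, the sketch below wires together the combination the study found to perform best, BWA-MEM for read mapping and GATK HaplotypeCaller for variant calling, via plain subprocess calls. File names are placeholders, and it assumes bwa, samtools, and gatk are installed, the BWA index has been built, and the reference has the .fai and .dict files that GATK expects.

```python
# Minimal read-mapping + variant-calling pipeline (placeholder file names).
import subprocess

ref = "TAIR10.fasta"                               # placeholder reference genome
r1, r2 = "sample_1.fastq.gz", "sample_2.fastq.gz"  # placeholder paired-end reads

def run(cmd, stdout=None):
    """Run one pipeline step and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, stdout=stdout, check=True)

# 1) Map reads with BWA-MEM; the read group is required later by GATK.
with open("sample.sam", "w") as sam:
    run(["bwa", "mem", "-R", r"@RG\tID:sample\tSM:sample", ref, r1, r2], stdout=sam)

# 2) Coordinate-sort and index the alignments with samtools.
run(["samtools", "sort", "-o", "sample.sorted.bam", "sample.sam"])
run(["samtools", "index", "sample.sorted.bam"])

# 3) Call variants with GATK HaplotypeCaller.
run(["gatk", "HaplotypeCaller",
     "-R", ref, "-I", "sample.sorted.bam", "-O", "sample.vcf.gz"])
```

The benchmarked pipelines vary the mapper and caller in steps 1 and 3; the sorting and indexing step is common to essentially all of them.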


This AI can explain how it solves Rubik's Cube--and that's a big deal

#artificialintelligence

However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations. One field of AI, called reinforcement learning, studies how computers can learn from their own experiences.