Neural Network Generates Global Tree Height Map, Reveals Carbon Stock Potential

#artificialintelligence

A new study from researchers at ETH Zurich's EcoVision Lab is the first to produce an interactive Global Canopy Height map. Using a newly developed deep learning algorithm that processes publicly available satellite images, the study could help scientists identify areas of ecosystem degradation and deforestation. The work could also guide sustainable forest management by identifying prime areas for carbon storage--a cornerstone in mitigating climate change. "Global high-resolution data on vegetation characteristics are needed to sustainably manage terrestrial ecosystems, mitigate climate change, and prevent biodiversity loss. With this project, we aim to fill the missing data gaps by merging data from two space missions with the help of deep learning," said Konrad Schindler, a Professor in the Department of Civil, Environmental, and Geomatic Engineering at ETH Zurich.
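
The article itself contains no code, but the core idea, regressing vegetation height from optical satellite imagery against sparse spaceborne lidar reference heights, can be shown with a minimal sketch. The example below is an assumption-level illustration, not the EcoVision Lab model: the patch tensor, the reference heights, and the tiny fully convolutional network are all placeholders.

```python
# Minimal sketch (not the EcoVision Lab code): regress per-pixel canopy height
# from multispectral satellite patches with a small fully convolutional network.
import torch
import torch.nn as nn

class CanopyHeightNet(nn.Module):
    def __init__(self, in_bands: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # one height value (metres) per pixel
        )

    def forward(self, x):
        return self.net(x).squeeze(1)

model = CanopyHeightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.SmoothL1Loss()  # robust to noisy reference heights

# Hypothetical tensors standing in for optical image patches (e.g. Sentinel-2)
# and sparse lidar reference heights (e.g. GEDI footprints rasterised to pixels).
patches = torch.rand(8, 12, 64, 64)
ref_heights = torch.rand(8, 64, 64) * 40.0

pred = model(patches)
loss = loss_fn(pred, ref_heights)
loss.backward()
optimizer.step()
```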


What does that meowing and neighing mean? Will artificial intelligence tell us how animals communicate?

#artificialintelligence

Have you ever wondered whether humans could communicate with animals and birds? With the help of an artificial intelligence (AI) algorithm, scientists recently revealed that they had translated pig grunts into emotions for the first time. It could help them monitor animal wellbeing. Experts have claimed that the research, which was led by the University of Copenhagen, ETH Zurich and France's National Research Institute for Agriculture, Food and Environment (INRAE), could also be used to better understand the emotions of other mammals. Researchers appear to be working on that already: according to a report published by The Wall Street Journal, they are using AI to parse the "speech" of animals.
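
The article stays at a high level, but the underlying task is a standard bioacoustic classification problem: turn each vocalisation into acoustic features and predict an emotional label. The sketch below is a hypothetical illustration under that assumption, not the published pipeline; the file names, labels and classifier choice are placeholders.

```python
# Hypothetical sketch (not the published pipeline): classify short pig-call
# recordings into positive vs. negative emotional valence.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def call_features(wav_path: str) -> np.ndarray:
    """Summarise one vocalisation as a fixed-length feature vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    # Mean and std over time keep the feature length independent of call duration.
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

# Placeholder file names and labels (1 = positive context, 0 = negative context).
train_files = ["grunt_001.wav", "grunt_002.wav"]
train_valence = [1, 0]

X = np.stack([call_features(f) for f in train_files])
clf = RandomForestClassifier(n_estimators=200).fit(X, train_valence)
print(clf.predict(X))
```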


Controlling complex systems with artificial intelligence

#artificialintelligence

Researchers at ETH Zurich and the Frankfurt School have developed an artificial neural network that can solve challenging control problems. The self-learning system can be used for the optimization of supply chains and production processes as well as for smart grids or traffic control systems. Power cuts, financial network failures and supply chain disruptions are just some of the many problems typically encountered in complex systems that are very difficult or even impossible to control using existing methods. Control systems based on artificial intelligence (AI) can help to optimize complex processes--and can also be used to develop new business models. Together with Professor Lucas Böttcher from the Frankfurt School of Finance and Management, ETH researchers Nino Antulov-Fantulin and Thomas Asikis--both from the Chair of Computational Social Science--have developed a versatile AI-based control system called AI Pontryagin, which is designed to steer complex systems and networks towards desired target states.
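
A common way to build such a controller is to let a neural network propose control inputs and train it by backpropagating through a simulation of the dynamics until the final state matches the target. The sketch below shows that pattern on a toy two-dimensional system; it illustrates the general idea only and is not the AI Pontryagin implementation, whose dynamics, architecture and training schedule differ.

```python
# Minimal sketch of neural-network control of a dynamical system (not the
# authors' code): a network proposes a control signal, we integrate the
# dynamics, and the gradient of the final-state error trains the network.
import torch
import torch.nn as nn

def dynamics(x, u):
    # Toy linear system standing in for a complex network; replace as needed.
    A = torch.tensor([[0.0, 1.0], [-1.0, -0.1]])
    return x @ A.T + u

controller = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(controller.parameters(), lr=1e-2)

x0 = torch.tensor([[1.0, 0.0]])
target = torch.tensor([[0.0, 0.0]])
dt, steps = 0.05, 100

for epoch in range(200):
    x = x0
    for k in range(steps):
        t = torch.full((x.shape[0], 1), k * dt)
        u = controller(torch.cat([x, t], dim=1))  # control from state and time
        x = x + dt * dynamics(x, u)               # explicit Euler step
    loss = ((x - target) ** 2).sum()              # distance to the target state
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```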


BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images

arXiv.org Artificial Intelligence

Breast cancer is the most commonly diagnosed cancer and accounts for the highest number of cancer deaths among women. Recent advancements in diagnostic activities combined with large-scale screening policies have significantly lowered mortality rates for breast cancer patients. However, manual inspection of tissue slides by pathologists is cumbersome, time-consuming, and subject to significant inter- and intra-observer variability. Recently, the advent of whole-slide scanning systems has enabled the rapid digitization of pathology slides and the development of digital workflows. These advances further make it possible to leverage Artificial Intelligence (AI) to assist, automate, and augment pathological diagnosis. However, AI techniques, especially Deep Learning (DL), require a large amount of high-quality annotated data to learn from. Constructing such task-specific datasets poses several challenges, such as data-acquisition constraints, time-consuming and expensive annotations, and the anonymization of private information. In this paper, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of annotated Hematoxylin & Eosin (H&E)-stained images to facilitate the characterization of breast lesions. BRACS contains 547 Whole-Slide Images (WSIs) and 4539 Regions of Interest (ROIs) extracted from the WSIs. Each WSI and its ROIs are annotated by the consensus of three board-certified pathologists into different lesion categories. Specifically, BRACS includes three lesion types, i.e., benign, malignant and atypical, which are further subtyped into seven categories. It is, to the best of our knowledge, the largest annotated dataset for breast cancer subtyping at both the WSI and ROI level. Further, by including the understudied atypical lesions, BRACS offers a unique opportunity for leveraging AI to better understand their characteristics.
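
As a concrete example of how a dataset of this shape is typically consumed, the sketch below fine-tunes an off-the-shelf CNN on ROI crops arranged into one folder per subtype. The directory layout, image size and training loop are assumptions for illustration only; they are not part of the BRACS paper or its reference pipeline.

```python
# Illustrative sketch only (not the BRACS benchmark pipeline): fine-tune a
# standard CNN on ROI crops organised into seven subtype folders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
])

# Hypothetical directory layout: bracs_rois/train/<subtype>/<image>.png
train_set = datasets.ImageFolder("bracs_rois/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 7)   # seven lesion subtypes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```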


ELAINE Workshop 2021

#artificialintelligence

The increase in cancer cases, the democratization of healthcare, and even the recent pandemic are some of the many reasons there is a great need to leverage AI technology in patient care. However, while AI has demonstrated the capability to be a valuable companion to practitioners with respect to meeting accuracy levels, removing bias and increasing diagnostic throughput, its adoption in clinical practice is still slow. While the problem is more complex, in this instructional workshop we will focus on two critical aspects of adoption. AI technologies, notably deep learning techniques, may hide inherent risks such as the codification of biases, weak accountability and the limited transparency of their decision-making process. AI technology needs not only to improve the diagnostic power of the data processed but also to provide evidence for its predictions in a way users can understand.
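
One widely used way to "provide evidence for the prediction", as called for above, is to highlight which parts of an input image drove the model's score. The sketch below shows a plain gradient saliency map; it is a generic illustration, not a method proposed by the workshop, and the model and input image are placeholders.

```python
# Generic saliency-map illustration: which pixels most influence the score?
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder image

score = model(image).max()                      # score of the predicted class
score.backward()
saliency = image.grad.abs().max(dim=1).values   # per-pixel importance map
print(saliency.shape)                           # torch.Size([1, 224, 224])
```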


ETH Zurich and NVIDIA's Massively Parallel Deep RL Enables Robots to Learn to Walk in Minutes

#artificialintelligence

A new study on learned legged locomotion uses massive parallelism on a single GPU to get robots up and walking on flat terrain in under four minutes, and on uneven terrain in twenty minutes. Although deep reinforcement learning (DRL) has achieved impressive results in robotics, the amount of data required to train a policy increases dramatically with task complexity. One way to improve the quality and time-to-deployment of DRL policies is to use massive parallelism. In the paper Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning, a research team from ETH Zurich and NVIDIA proposes a training framework that enables fast policy generation for real-world robotic tasks using massive parallelism on a single workstation GPU. Compared to previous methods, the approach can reduce training time by multiple orders of magnitude.
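
The speed-up comes from simulating thousands of environments in parallel on one GPU so that every policy update sees a huge batch of experience. The sketch below shows that pattern with a toy batched environment in PyTorch; the environment, dimensions and policy are placeholders, and the paper's actual GPU physics simulation and training algorithm are not reproduced here.

```python
# Conceptual sketch of massive parallelism (not the paper's simulator setup):
# thousands of simple environments stepped together as one GPU batch.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, obs_dim, act_dim = 4096, 48, 12

# Batched state for all environments lives in a single tensor on the GPU.
states = torch.zeros(num_envs, obs_dim, device=device)

def step(states, actions):
    """Toy batched dynamics: every environment advances in one tensor op."""
    next_states = states + 0.01 * actions.repeat(1, obs_dim // act_dim)
    rewards = -next_states.pow(2).mean(dim=1)   # toy reward: stay near the origin
    return next_states, rewards

policy = torch.nn.Sequential(
    torch.nn.Linear(obs_dim, 128), torch.nn.ELU(), torch.nn.Linear(128, act_dim)
).to(device)

with torch.no_grad():
    actions = policy(states)
    states, rewards = step(states, actions)
print(rewards.shape)  # torch.Size([4096]) -- one reward per parallel environment
```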


Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-temporal Sparsity

arXiv.org Artificial Intelligence

Long Short-Term Memory (LSTM) recurrent networks are frequently used for tasks involving time-sequential data such as speech recognition. However, it is difficult to deploy these networks on hardware to achieve high throughput and low latency because the fully connected structure makes LSTM networks a memory-bound algorithm. Previous LSTM accelerators exploited either weight spatial sparsity or temporal activation sparsity. This paper proposes a new accelerator called "Spartus" that exploits spatio-temporal sparsity to achieve ultra-low-latency inference. The spatial sparsity is induced using our proposed pruning method, Column-Balanced Targeted Dropout (CBTD), which structures sparse weight matrices for a balanced workload. It achieved up to 96% weight sparsity with a negligible accuracy difference for an LSTM network trained on the TIMIT phone recognition task. To induce temporal sparsity in the LSTM, we create the DeltaLSTM by extending the previous DeltaGRU method to the LSTM network. The combined sparsity simultaneously reduces weight memory accesses and the associated arithmetic operations. Spartus was implemented on a Xilinx Zynq-7100 FPGA. The Spartus per-sample latency for a single DeltaLSTM layer of 1024 neurons averages 1 µs. Spartus achieved 9.4 TOp/s effective batch-1 throughput and 1.1 TOp/J energy efficiency, which are, respectively, 4X and 7X higher than the previous state of the art.
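
The point of column-balanced pruning, as described above, is that every column of a weight matrix keeps the same number of nonzero weights, so the sparse workload stays evenly distributed across the accelerator's compute units. The sketch below illustrates that constraint with simple per-column magnitude pruning; it is an assumption-level illustration of the balancing idea, not the CBTD training procedure or the Spartus implementation.

```python
# Sketch of column-balanced pruning: keep the same number of largest-magnitude
# weights in every column so hardware workloads stay balanced.
import torch

def column_balanced_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    rows, cols = weight.shape
    keep = max(1, int(round(rows * (1.0 - sparsity))))
    mask = torch.zeros_like(weight)
    topk = weight.abs().topk(keep, dim=0).indices   # top-|w| entries per column
    mask.scatter_(0, topk, 1.0)
    return weight * mask

w = torch.randn(1024, 1024)
w_sparse = column_balanced_prune(w, sparsity=0.9)
# Every column now has exactly the same number of nonzeros.
print((w_sparse != 0).sum(dim=0).unique())  # tensor([102])
```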


Accelerating Quadratic Optimization Up to 3x With Reinforcement Learning

#artificialintelligence

First-order methods for solving quadratic programs (QPs) are widely used for rapid, multiple-problem solving and embedded optimal control in large-scale machine learning. The problem is that these approaches typically require thousands of iterations, which makes them unsuitable for real-time control applications with tight latency constraints. To address this issue, a research team from the University of California, Princeton University and ETH Zurich has proposed RLQP, an accelerated QP solver based on operator-splitting QP (OSQP) that uses deep reinforcement learning (RL) to adapt the internal parameters of a first-order QP solver and speed up its convergence. The team built their speed-up on the OSQP solver, which solves QPs using the alternating direction method of multipliers (ADMM), an efficient first-order optimization algorithm. RLQP learns a policy that adapts the internal parameters of the ADMM algorithm between iterations in order to minimize solve times.
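
In OSQP, the main internal parameter of this kind is the ADMM penalty rho: between iterations, a policy can look at the primal and dual residuals and rescale the penalty to speed convergence. The toy sketch below mimics that loop with a hand-written residual-balancing heuristic standing in for the learned policy; it is a conceptual illustration, not RLQP or the OSQP implementation.

```python
# Toy ADMM for min 0.5*x'Px + q'x subject to x >= 0, with a "policy" that
# adjusts the penalty rho between iterations (the knob RLQP learns to tune).
import numpy as np

def rho_policy(primal_res: float, dual_res: float, rho: float) -> float:
    # Placeholder heuristic standing in for the learned RL policy:
    # classic residual balancing.
    if primal_res > 10 * dual_res:
        return rho * 2.0
    if dual_res > 10 * primal_res:
        return rho / 2.0
    return rho

P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
x, z, u = np.zeros(2), np.zeros(2), np.zeros(2)
rho = 1.0

for k in range(100):
    x = np.linalg.solve(P + rho * np.eye(2), rho * (z - u) - q)
    z_old = z
    z = np.maximum(x + u, 0.0)                  # projection onto x >= 0
    u = u + x - z
    primal_res = np.linalg.norm(x - z)
    dual_res = rho * np.linalg.norm(z - z_old)
    new_rho = rho_policy(primal_res, dual_res, rho)
    u = u * (rho / new_rho)                     # rescale scaled dual when rho changes
    rho = new_rho

print(x)  # converges toward the constrained optimum (here, the origin)
```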


Senior, Computer Vision R&D Engineer, SLAM/VIO

#artificialintelligence

Magic Leap's mission is to deliver to enterprises a powerful tool for transformation: an augmented reality platform of great utility and simplicity. Our ultimate vision is to amplify human potential. Our office in Zurich, Switzerland is a center of excellence for Computer Vision and Deep Learning. We are looking for exceptional engineers who are passionate about shaping the future of computing. As a Computer Vision R&D Engineer, you'll be responsible for delivering high-performance production software with state-of-the-art computer vision capabilities in the field of SLAM and sensor fusion.