Deep Learning Engineer

#artificialintelligence

As a deep learning engineer at Arbrea Labs, you will work at the interface between research and production code, where a wide array of different tasks arises in this unique context. As one of the key employees, you will have a strong influence on how our product is shaped, giving you, together with the team, the opportunity to grow into a world leader in medical AR/VR technology. You will work with friendly, passionate, and easy-going team members who are always willing to offer guidance and to have a beer (or two) after work or at regular team events. The working environment is flexible, with offices in a technology hub, attractive salaries, and employee participation plans. Arbrea Labs is an ETH spin-off from the Computer Graphics Lab at ETH Zurich.


Neural Network Generates Global Tree Height Map, Reveals Carbon Stock Potential

#artificialintelligence

A new study from researchers at ETH Zurich's EcoVision Lab is the first to produce an interactive Global Canopy Height map. Using a newly developed deep learning algorithm that processes publicly available satellite images, the study could help scientists identify areas of ecosystem degradation and deforestation. The work could also guide sustainable forest management by identifying areas for prime carbon storage--a cornerstone in mitigating climate change. "Global high-resolution data on vegetation characteristics are needed to sustainably manage terrestrial ecosystems, mitigate climate change, and prevent biodiversity loss. With this project, we aim to fill the missing data gaps by merging data from two space missions with the help of deep learning," said Konrad Schindler, a Professor in the Department of Civil, Environmental, and Geomatic Engineering at ETH Zurich.
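The article describes the approach only at a high level: dense optical satellite imagery is merged with sparse lidar-derived height measurements to produce a wall-to-wall canopy height map. As a rough illustration of that "sparse labels, dense predictions" setup, the sketch below fits a regressor on a handful of labeled pixels and then predicts heights everywhere. All of it (the synthetic data, the feature shapes, and the ridge regression standing in for the study's deep network) is invented for illustration and is not the EcoVision pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: dense per-pixel optical features (think spectral
# bands) and a hidden linear relation to canopy height.
n_pixels, n_bands = 1000, 4
optical = rng.normal(size=(n_pixels, n_bands))          # dense "optical" features
true_w = np.array([3.0, -1.0, 0.5, 2.0])
height = optical @ true_w + rng.normal(scale=0.1, size=n_pixels)

# Only a few pixels carry height labels, mimicking sparse lidar footprints.
labeled = rng.choice(n_pixels, size=50, replace=False)

# Fit on the labeled pixels (ridge regression as a tiny stand-in for a
# deep network), then predict a dense height map for every pixel.
X, y = optical[labeled], height[labeled]
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_bands), X.T @ y)
dense_height_map = optical @ w                          # height estimate everywhere
```

The design point being illustrated: the sparse sensor supplies the supervision, the dense sensor supplies the coverage, and the learned model transfers between them.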


The Promise & Peril of Brain Machine Interfaces, with Ricardo Chavarriaga

#artificialintelligence

ANJA KASPERSEN: Today's podcast will focus on artificial intelligence (AI), neuroscience, and neurotechnologies. My guest today is Ricardo Chavarriaga. Ricardo is an electrical engineer and a doctor of computational neuroscience. He is currently the head of the Swiss office of the Confederation of Laboratories for AI Research in Europe (CLAIRE) and a senior researcher at Zurich University of Applied Sciences. Ricardo, it is an honor and a delight to share the virtual stage with you today.

RICARDO CHAVARRIAGA: I am really happy and looking forward to a nice discussion today.

ANJA KASPERSEN: Neuroscience is a vast and fast-developing field. Maybe you could start by providing our listeners with some background.

RICARDO CHAVARRIAGA: The brain is something that has fascinated humanity for a long time. The question of how this organ that we have inside our heads can rule our behavior and can store and develop knowledge has indeed been one of the central questions of science for many, many years. Neurotechnologies, computational neuroscience, and brain-machine interfaces are tools that we have developed to approach the understanding of this fabulous organ. Computational neuroscience is the use of computational tools to create models of the brain. These can be mathematical models or algorithms that try to reproduce our observations about the brain. It can also involve experiments on humans and on animals: these experiments can be behavioral, they can involve measurements of brain activity, and by looking at how the brains of organisms react and how that activity changes, we then try to apply our knowledge to create models. These models can have different flavors. We can, for instance, have very detailed models of the electrochemical processes inside a neuron, in which case we are looking at just a small part of the brain. We can have large-scale models, with fewer details, of how different brain structures interact among themselves, or even less-detailed models that try to reproduce behavior we observe in animals and in humans as a result of certain mental disorders. We can even test these models using probes to tap into how our brain constructs representations of the world based on visual, tactile, and auditory information.


What do those meowing and neighing mean? Will artificial intelligence tell us how animals communicate?

#artificialintelligence

Have you ever wondered whether humans could communicate with animals and birds? With the help of an artificial intelligence (AI) algorithm, scientists recently revealed that they had translated pig grunts into emotions for the first time, which could help them monitor animal wellbeing. Experts have claimed that the research, led by the University of Copenhagen, ETH Zurich, and France's National Research Institute for Agriculture, Food and Environment (INRAE), could also be used to better understand the emotions of other mammals. Researchers are already working on this: according to a report published by The Wall Street Journal, they are using AI to parse the "speech" of animals.



Controlling complex systems with artificial intelligence

#artificialintelligence

Researchers at ETH Zurich and the Frankfurt School have developed an artificial neural network that can solve challenging control problems. The self-learning system can be used to optimize supply chains and production processes as well as smart grids or traffic control systems. Power cuts, financial network failures, and supply chain disruptions are just some of the many problems typically encountered in complex systems that are very difficult or even impossible to control using existing methods. Control systems based on artificial intelligence (AI) can help to optimize complex processes--and can also be used to develop new business models. Together with Professor Lucas Böttcher from the Frankfurt School of Finance and Management, ETH researchers Nino Antulov-Fantulin and Thomas Asikis--both from the Chair of Computational Social Science--have developed a versatile AI-based control system called AI Pontryagin, which is designed to steer complex systems and networks towards desired target states.
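The summary names Pontryagin, which points at gradient-based optimal control: adjust the control inputs by following gradients of a distance-to-target loss back through the system dynamics. A minimal sketch of that idea on a toy linear system is below; the dynamics, horizon, and step size are invented for illustration, and where this sketch optimizes open-loop controls with a hand-written adjoint recursion, the actual AI Pontryagin system uses a neural network and automatic differentiation.

```python
import numpy as np

# Toy controllable linear system: x_{t+1} = A x_t + B u_t.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
x0 = np.array([0.0, 0.0])
target = np.array([1.0, 0.0])   # desired final state
T = 50

u = np.zeros((T, 1))            # open-loop controls to optimize

def rollout(u):
    x = x0
    for t in range(T):
        x = A @ x + B @ u[t]
    return x

# Gradient descent on ||x_T - target||^2. The gradient w.r.t. each u_t
# comes from the adjoint (costate) recursion, the discrete-time analogue
# of Pontryagin's principle.
for _ in range(500):
    xs = [x0]
    for t in range(T):                  # forward pass, storing states
        xs.append(A @ xs[-1] + B @ u[t])
    lam = 2.0 * (xs[-1] - target)       # adjoint at the final time
    grad = np.zeros_like(u)
    for t in reversed(range(T)):
        grad[t] = B.T @ lam             # dLoss/du_t
        lam = A.T @ lam                 # propagate the adjoint backwards
    u -= 0.05 * grad

final = rollout(u)                      # steered close to the target state
```

The design choice worth noting: because the loss is differentiated through the dynamics themselves, the same recipe applies to any simulable system, which is what makes this family of controllers so versatile.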


Neural networks learn faster using ETH software

#artificialintelligence

Two researchers from the Scalable Parallel Computing Lab at the Swiss Federal Institute of Technology in Zurich (ETH) have developed a software solution that dramatically speeds up the training of deep learning applications. This is important because training is the most resource-demanding and costly step of all, ETH Zurich writes in a press release, and loading the training data can account for up to 85 percent of the training time. A single training run of a sophisticated voice recognition model, for example, can cost around 10 million US dollars. The new software, named NoPFS, was developed by Roman Böhringer and Nikoli Dryden.
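The release does not show NoPFS's actual algorithm, but the general trick it scales up, hiding data-loading latency by fetching upcoming batches in the background while the current batch is being trained on, can be sketched with a simple bounded-queue prefetcher. All names and timings below are illustrative stand-ins, not NoPFS code.

```python
import queue
import threading
import time

def load_batch(i):
    time.sleep(0.01)               # stand-in for slow disk/network I/O
    return list(range(i, i + 4))

def train_step(batch):
    time.sleep(0.01)               # stand-in for GPU compute
    return sum(batch)

def prefetcher(n_batches, q):
    # Because the (shuffled) access order is fixed up front, a background
    # thread can load batch i+1 while batch i is still being trained on,
    # overlapping I/O with compute instead of alternating between them.
    for i in range(n_batches):
        q.put(load_batch(i))
    q.put(None)                    # sentinel: no more data

q = queue.Queue(maxsize=2)         # small buffer bounds memory use
threading.Thread(target=prefetcher, args=(8, q), daemon=True).start()

losses = []
while (batch := q.get()) is not None:
    losses.append(train_step(batch))
```

With loading and training overlapped this way, total wall time approaches the larger of the two costs rather than their sum, which is where the claimed speedups come from.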


BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images

arXiv.org Artificial Intelligence

Breast cancer is the most commonly diagnosed cancer and registers the highest number of deaths among women with cancer. Recent advances in diagnostics combined with large-scale screening policies have significantly lowered mortality rates for breast cancer patients. However, the manual inspection of tissue slides by pathologists is cumbersome, time-consuming, and subject to significant inter- and intra-observer variability. Recently, the advent of whole-slide scanning systems has enabled the rapid digitization of pathology slides and the development of digital workflows. These advances make it possible to leverage Artificial Intelligence (AI) to assist, automate, and augment pathological diagnosis. But AI techniques, especially Deep Learning (DL), require a large amount of high-quality annotated data to learn from. Constructing such task-specific datasets poses several challenges, such as data-acquisition constraints, time-consuming and expensive annotation, and the anonymization of private information. In this paper, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of annotated Hematoxylin & Eosin (H&E)-stained images to facilitate the characterization of breast lesions. BRACS contains 547 Whole-Slide Images (WSIs) and 4539 Regions of Interest (ROIs) extracted from the WSIs. Each WSI, and its respective ROIs, is annotated by the consensus of three board-certified pathologists into different lesion categories. Specifically, BRACS includes three lesion types, i.e., benign, malignant, and atypical, which are further subtyped into seven categories. It is, to the best of our knowledge, the largest annotated dataset for breast cancer subtyping at both the WSI and ROI level. Further, by including the understudied atypical lesions, BRACS offers a unique opportunity for leveraging AI to better understand their characteristics.
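The abstract describes a two-level label hierarchy (three lesion types refined into seven subtypes) without listing the subtypes. A small mapping table makes the structure concrete; note that the seven subtype names below are an assumption drawn from the released dataset, not from this abstract, and may differ from the final taxonomy.

```python
# Two-level BRACS-style label hierarchy: fine-grained subtype -> coarse
# lesion type. Subtype names are assumed for illustration (the abstract
# only states that there are seven of them).
SUBTYPE_TO_TYPE = {
    "normal": "benign",
    "pathological_benign": "benign",
    "usual_ductal_hyperplasia": "benign",
    "flat_epithelial_atypia": "atypical",
    "atypical_ductal_hyperplasia": "atypical",
    "ductal_carcinoma_in_situ": "malignant",
    "invasive_carcinoma": "malignant",
}

def coarse_label(subtype: str) -> str:
    """Collapse a 7-way subtype annotation to the 3-way lesion type."""
    return SUBTYPE_TO_TYPE[subtype]
```

A hierarchy like this lets the same annotations train both a 3-class and a 7-class classifier, which is how datasets of this kind typically support subtyping at multiple granularities.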


ELAINE Workshop 2021

#artificialintelligence

The increase in cancer cases, the democratization of healthcare, and even the recent pandemic are some of the many reasons there is a great need to bring AI technology into patient care. However, while AI has demonstrated the capability to be a valuable companion to practitioners in terms of meeting accuracy levels, removing bias, and increasing diagnostic throughput, its adoption in clinical practice is still slow. While the problem is more complex, in this instructional workshop we will focus on two critical aspects of adoption. AI technologies, notably deep learning techniques, may hide inherent risks such as the codification of biases, weak accountability, and a lack of transparency in their decision-making process. AI technology needs not only to improve the diagnostic power of the data processed but also to provide evidence for its predictions in a way users can understand.


ETH Zurich and NVIDIA's Massively Parallel Deep RL Enables Robots to Learn to Walk in Minutes

#artificialintelligence

A new learned legged locomotion study uses massive parallelism on a single GPU to get robots up and walking on flat terrain in under four minutes, and on uneven terrain in twenty minutes. Although deep reinforcement learning (DRL) has achieved impressive results in robotics, the amount of data required to train a policy increases dramatically with task complexity. One way to improve the quality and time-to-deployment of DRL policies is to use massive parallelism. In the paper Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning, a research team from ETH Zurich and NVIDIA proposes a training framework that enables fast policy generation for real-world robotic tasks using massive parallelism on a single workstation GPU. Compared to previous methods, the approach can reduce training time by multiple orders of magnitude.
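The engineering idea that makes this speedup possible, simulating thousands of environments simultaneously so the policy sees a huge batch of experience at every step, can be sketched with batched array operations. The toy "environment" and linear "policy" below are invented for illustration (the actual work trains legged-locomotion policies in a GPU physics simulator); the point is that one vectorized call advances every environment at once.

```python
import numpy as np

rng = np.random.default_rng(0)

n_envs = 4096                        # thousands of parallel environments
obs_dim, act_dim = 8, 2

# Batched toy environment state: one row per environment.
state = rng.normal(size=(n_envs, obs_dim))

def step_all(state, actions):
    # Advance every environment in a single vectorized call; this is the
    # operation that maps efficiently onto a GPU.
    next_state = 0.9 * state
    next_state[:, :act_dim] += 0.1 * actions
    reward = -np.linalg.norm(next_state, axis=1)   # reward: stay near origin
    return next_state, reward

# A linear stand-in "policy" applied to all environments at once.
W = rng.normal(scale=0.01, size=(obs_dim, act_dim))

for _ in range(10):                  # a short batched rollout
    actions = state @ W              # (n_envs, act_dim) in one matmul
    state, reward = step_all(state, actions)

batch_rewards = reward               # one reward per environment per step
```

Each rollout step here yields 4096 transitions instead of one, which is the same reason massively parallel simulation collapses training time from hours to minutes.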