Machine learning could provide up to an extra hour of warning time for debris flows along the Illgraben torrent in Switzerland, researchers report at the Seismological Society of America (SSA)'s 2021 Annual Meeting. Debris flows are mixtures of water, sediment and rock that move rapidly down steep hills, triggered by heavy precipitation and often containing tens of thousands of cubic meters of material. Their destructive potential makes it important to have monitoring and warning systems in place to protect nearby people and infrastructure. In her presentation at SSA, Małgorzata Chmiel of ETH Zürich described a machine learning approach to detecting and alerting against debris flows for the Illgraben torrent, a site in the European Alps that experiences significant debris flows and torrential events each year. Seismic records of 20 previous debris flow events, captured by stations in the Illgraben catchment, were used to train an algorithm to recognize the seismic signals of debris flow formation; it accurately detected early flows 90% of the time. The machine learning system detected all 13 debris flows and torrential events that occurred during a three-month period in 2020.
BERLIN – A turning point for Rafael Yuste, a neuroscientist at New York's Columbia University, came when his lab discovered it could activate a few neurons in a mouse's visual cortex and make it hallucinate. The mouse had been trained to lick at a water spout every time it saw two vertical bars, and researchers were able to prompt it to drink even with no bars in sight, said Yuste, whose team published a study on the experiment in 2019. "We could make the animal see something it didn't see, as if it were a puppet," he said in a phone interview. "If we can do this today with an animal, we can do it tomorrow with a human for sure." Yuste is part of a group of scientists and lawmakers, stretching from Switzerland to Chile, who are working to rein in the potential abuses of neuroscience by companies from tech giants to wearable startups.
Would you like to build predictive models using machine learning? That's precisely what you will learn in this course, "Decision Trees, Random Forests and Gradient Boosting in R." My name is Carlos Martínez, and I hold a Ph.D. in Management from the University of St. Gallen in Switzerland. I have presented my research at some of the most prestigious academic conferences and doctoral colloquiums, at the University of Tel Aviv, Politecnico di Milano, University of Halmstad, and MIT. Furthermore, I have co-authored more than 25 teaching cases, some of them included in the case collections of Harvard and Michigan. This is a very comprehensive course that includes presentations, tutorials, and assignments. The course takes a practical, learning-by-doing approach in which you will learn decision trees, and ensemble methods based on decision trees, using a real dataset.
IDSIA has a very broad range of research interests, spanning most of Artificial Intelligence as it is understood today: machine learning, including deep learning/neural networks, control and signal processing, natural language processing, robotics, computer vision, search and optimisation, and more fundamental questions in uncertainty, probability, statistics, and causal inference. To give an example, we have a 4-year project funded by the Swiss National Science Foundation as part of Switzerland's National Research Programme 75, "Big Data". In this project we deal with Gaussian processes, which can be understood as statistical counterparts of neural networks and which, unlike traditional neural nets, provide uncertainty estimates for their own predictions. This is very important in applications where we are evaluating risks. For example, a self-driving car needs to know whether its sensors are reliably warning of a potential accident ahead rather than a person safely crossing the street.
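The uncertainty-aware behaviour described above can be illustrated with a minimal Gaussian process regression sketch in plain numpy. This is an illustration of the general technique, not IDSIA's actual code; the kernel, its length-scale, and the noise level are arbitrary assumptions. Far from the training data, the posterior standard deviation grows back toward the prior — exactly the self-assessment a risk-aware system needs:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # Squared-exponential kernel between two 1-D point sets
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Standard GP regression posterior via a Cholesky factorisation
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

x = np.array([-2.0, -1.0, 0.0, 1.0])
y = np.sin(x)
xs = np.array([0.5, 3.0])  # one query near the data, one far away
mu, sd = gp_posterior(x, y, xs)
# sd is small at 0.5 (near training data) and large at 3.0 (extrapolation)
```

A plain neural network would return a point prediction at x = 3.0 with no warning that it is extrapolating; the GP's growing standard deviation is that warning.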
In this post we conclude our summaries of the NeurIPS invited talks from the 2020 meeting. In this final instalment, we cover the talks by Marloes Maathuis (ETH Zurich) and Anthony M Zador (Cold Spring Harbor Laboratory). Marloes began her talk on causal learning with a simple example of the phenomenon known as Simpson's paradox, in which a trend appears in several different groups of data but disappears or reverses when these groups are combined. She also talked about the importance of considering causality when making decisions based on such data. Marloes went on to explain the difference between causal and non-causal questions. Non-causal questions are about predictions in the same system, for example, predicting the cancer rate among smokers.
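Simpson's paradox is easy to reproduce numerically. The sketch below uses the classic kidney-stone treatment counts often quoted to illustrate it (illustrative numbers from that textbook example, not data from Marloes's talk): treatment A has the higher success rate within each stone-size group, yet treatment B looks better once the groups are pooled:

```python
# (successes, patients) for each treatment within each stone-size group
A = {"small": (81, 87), "large": (192, 263)}
B = {"small": (234, 270), "large": (55, 80)}

def rate(successes, n):
    return successes / n

def pooled(groups):
    # Success rate after combining all groups
    s = sum(g[0] for g in groups.values())
    n = sum(g[1] for g in groups.values())
    return s / n

# Treatment A wins within every group...
better_in_each_group = all(rate(*A[g]) > rate(*B[g]) for g in A)
# ...yet treatment B wins once the groups are combined.
better_pooled = pooled(B) > pooled(A)
```

The reversal happens because group membership (stone size) is a confounder correlated with both the treatment choice and the outcome — which is why, as Marloes argued, decisions based on such data need causal reasoning, not just the pooled trend.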
Are you searching for the best artificial intelligence startup companies in Switzerland that can help with data security and cybercrime? Artificial intelligence helps businesses identify bugs and unusual user behavior in enterprise systems such as ERP platforms and financial institutions. As a result, by integrating artificial intelligence solutions into your company, you can protect your data and prevent cyberattacks. Sophia Genetics offers a platform for optimizing genomic research based on artificial intelligence (AI). The platform uses machine learning algorithms to process and analyze genomic data from patient DNA sequences produced by next-generation sequencing (NGS) platforms.
One of our engineers, David Plowman, describes machine learning and shares news of a Raspberry Pi depth estimation challenge run by ETH Zürich (Swiss Federal Institute of Technology). Spoiler alert – it's all happening virtually, so you can definitely make the trip and attend, or maybe even enter yourself. Machine Learning (ML) and Artificial Intelligence (AI) are some of the top engineering-related buzzwords of the moment, and foremost among current ML paradigms is probably the Artificial Neural Network (ANN). ANNs involve millions of tiny calculations, merged together in a giant biologically inspired network – hence the name. These networks typically have millions of parameters that control each calculation, and they must be optimised for every task at hand.
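At small scale, those "tiny calculations" are just multiply-adds arranged in layers, and the "parameters" are the weights feeding them. A toy two-layer network in numpy makes the counting concrete (purely illustrative: the layer sizes are arbitrary, and real ANNs have millions of parameters rather than 97):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(x, 0.0)

# Two layers of weights and biases: these are the trainable parameters
W1 = rng.normal(size=(4, 16)); b1 = np.zeros(16)   # 4 inputs -> 16 hidden units
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)    # 16 hidden -> 1 output

def forward(x):
    h = relu(x @ W1 + b1)   # hidden layer: 64 multiply-adds per input row
    return h @ W2 + b2      # output layer: 16 more

x = rng.normal(size=(1, 4))      # one input with 4 features
y = forward(x)                   # one scalar prediction
n_params = W1.size + b1.size + W2.size + b2.size   # 64 + 16 + 16 + 1 = 97
```

Training ("optimising for the task at hand") means adjusting all 97 numbers — or, in a modern depth-estimation network, all of its millions — so that the outputs match known answers.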
November 10, 2020 (LifeSiteNews) -- The COVID-19 pandemic was manufactured by the world's elites as part of a plan to globally advance "transhumanism" -- literally, the fusion of human beings with technology in an attempt to alter human nature itself and create a superhuman being and an "earthly paradise," according to a Peruvian academic and expert in technology. This dystopian nightmare scenario is no longer the stuff of science fiction, but an integral part of the proposed post-pandemic "Great Reset," Dr. Miklos Lukacs de Pereny said at a recent summit on COVID-19. Indeed, to the extent that implementing the transhumanist agenda is possible, it requires the concentration of political and economic power in the hands of a global elite and the dependence of people on the state, said Lukacs. That's precisely the aim of the Great Reset, promoted by German economist Klaus Schwab, founder of the World Economic Forum, along with billionaire "philanthropists" George Soros and Bill Gates and other owners, managers, and shareholders of Big Tech, Big Pharma, and Big Finance who meet at the WEF retreats at Davos, Switzerland, contended Lukacs. Transhumanism is far from a benign doctrine.
While modern cameras provide machines with a very well-developed sense of vision, robots still lack such a comprehensive solution for their sense of touch. At ETH Zurich, in the group led by Prof. Raffaello D'Andrea at the Institute for Dynamic Systems and Control, we have developed a tactile sensing principle that allows robots to retrieve rich contact feedback from their interactions with the environment. I recently described our approach in a talk at the last TEDxZurich. The talk features a tech demo that introduces the novel tactile sensing technology targeting the next generation of soft robotic skins. The sensing technique is based on a camera that tracks fluorescent particles, which are densely and randomly distributed within a soft, deformable gel. The randomness of the patterns simplifies production of the gel, while the particles' density provides strain information at each pixel of the resulting image.
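The core idea — recovering the motion of a dense random particle pattern between camera frames — can be sketched with FFT-based cross-correlation in numpy. This is a simplified stand-in for the group's actual tracking pipeline: a synthetic random pattern plays the role of the particle image, and a rigid pixel shift stands in for real gel deformation (which would be estimated locally, patch by patch, to get a strain field):

```python
import numpy as np

def shift_estimate(frame0, frame1):
    # Estimate the in-plane displacement between two particle images
    # from the peak of their FFT-based cross-correlation.
    f0 = np.fft.fft2(frame0 - frame0.mean())
    f1 = np.fft.fft2(frame1 - frame1.mean())
    corr = np.fft.ifft2(f0.conj() * f1).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets past N/2 wrap around; map them to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
gel = rng.random((64, 64))                  # dense random particle pattern
moved = np.roll(gel, (3, 5), axis=(0, 1))   # "deformation": shift 3 px down, 5 px right
shift = shift_estimate(gel, moved)          # recovers (3, 5)
```

Because the pattern is random and dense, every patch of the image is distinctive, so the correlation peak is sharp — which is precisely why randomness in the particle distribution helps rather than hurts.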