
Collaborating Authors

Switzerland


To PiM or Not to PiM

Communications of the ACM

A 20nm 6GB function-in-memory DRAM, based on HBM2 with a 1.2 TFLOPS programmable computing unit using bank-level parallelism, for machine learning applications.


On the (In)Security of ElGamal in OpenPGP

Communications of the ACM

Let G be a group and g ∈ G a generator. To create a key pair (sk, pk), pick a random integer x, compute the element X ← g^x, and output (sk, pk) ← (x, X). Given pk, to encrypt a message M, pick an ephemeral random integer y, compute the elements Y ← g^y and Z ← X^y = g^{xy}, and output C = (C1, C2) ← (Y, M · Z) as the ciphertext. Given sk, to decrypt C, first recover the element Z from C1 as per Z ← Y^x = g^{yx} and then use C2, Z to recover M ← C2/Z. To instantiate the scheme, the following details have to be fixed: Which group G shall be used?
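The scheme above can be sketched in a few lines of Python over the multiplicative group Z_p^* (one common instantiation choice). The modulus p and base g below are illustrative toy values, not a vetted parameter set, and this is the unpadded "textbook" variant whose malleability is exactly what makes instantiation details matter:

```python
import secrets

p = 2**127 - 1   # a Mersenne prime; toy choice for illustration only
g = 3            # illustrative base, not a vetted generator

def keygen():
    x = secrets.randbelow(p - 2) + 1        # secret key x
    return x, pow(g, x, p)                  # (sk, pk) = (x, X = g^x)

def encrypt(X, m):
    y = secrets.randbelow(p - 2) + 1        # fresh ephemeral y per message
    Y = pow(g, y, p)                        # Y = g^y
    Z = pow(X, y, p)                        # Z = X^y = g^{xy}
    return Y, (m * Z) % p                   # C = (C1, C2) = (Y, M * Z)

def decrypt(x, C):
    Y, c2 = C
    Z = pow(Y, x, p)                        # Z = Y^x = g^{yx}
    return (c2 * pow(Z, -1, p)) % p         # M = C2 / Z  (modular inverse)

sk, pk = keygen()
print(decrypt(sk, encrypt(pk, 42)))         # round-trips the message
```

Note how decryption divides by Z using a modular inverse (`pow(Z, -1, p)`, Python 3.8+); the security of the whole construction then hinges on the group choice the article goes on to discuss.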


Robot injected in the skull spreads its tentacles to monitor the brain

New Scientist

The robot's soft legs are filled with sensors that measure brain activity. A soft robot inserted through a tiny hole in the skull can deploy six sensor-filled legs on the surface of the brain. A version of this soft robot has been successfully tested in a miniature pig and could be scaled up for human testing in the future. The concept offers a less invasive approach for placing electrodes on the brain's surface compared with the traditional method, in which surgeons cut a hole in the skull the size of the fully extended device. If it proves safe and effective in humans, it could eventually help monitor and even treat people who experience epileptic seizures or other neurological disorders. "There's actually a really large surface area that you can reach without doing a large craniotomy," says Stéphanie Lacour at the Swiss Federal Institute of Technology in Lausanne.


Robot Talk Episode 47 – Helmut Hauser

Robohub



Movie clip reconstructed by an AI reading mice's brains as they watch

New Scientist

A mouse's brain activity may give some indication of what it is seeing. A black-and-white movie has been extracted almost perfectly from the brain signals of mice using an artificial intelligence tool. Mackenzie Mathis at the Swiss Federal Institute of Technology Lausanne and her colleagues collected brain activity data from around 50 mice while they watched a 30-second movie clip nine times. The researchers then trained an AI to link this data to 600 frames of the clip, in which a man runs to a car and opens its boot. The data was previously collected by other researchers who inserted metal probes, which record electrical pulses from neurons, into the mice's primary visual cortices, the area of the brain involved in processing visual information. Some brain activity data was also collected by imaging the mice's brains using a microscope. Next, Mathis and her team tested the ability of their trained AI to predict the order of frames within the clip using brain activity data collected from the mice as they watched the movie for the tenth time.


Watch a film through the eyes of a MOUSE: Scientists use AI to reconstruct its brain signals

Daily Mail - Science & tech

Have you ever struggled to describe something to your friend that you watched on TV last night? Soon, you might be able to project your mental images onto the big screen, as scientists have now done just that with mice. A team from École Polytechnique Fédérale de Lausanne (EPFL) developed an artificial intelligence (AI) tool that can interpret the rodents' brain signals. The algorithm, named CEBRA, was trained to map neural activity to specific frames in videos, so it could then predict and reconstruct what a mouse is looking at. The news comes shortly after researchers at the University of Texas at Austin used AI to turn people's thoughts into text in real-time.


Would you listen to AI-run radio? This station tested it out on listeners

ZDNet

ChatGPT and other artificial intelligence tools dominate headlines with speculation about how far generative AI can reach and what it can do -- and one radio station tested whether the technology could replace its anchors and writers. The people behind Couleur 3, an inventive European radio station in Switzerland, broadcast AI-generated radio shows for 13 hours on April 27th, where the scripts were generated using ChatGPT and other AI text generators, the music program was created by algorithms, and the voices of five hosts were cloned by a movie production company. Listeners were reminded every 20 minutes that the programming was AI-generated and, yes, the reminder was also delivered by one of the cloned voices. For one day of programming, from 6am to 7pm, AI ran the show -- with the exclusion of news bulletins. For the reminder, a voice said: "Our voice clones and AI are here to unsettle, surprise, and shake you. And for that matter, this text was also written by a robot."


NCCR Robotics: A documentary

Robohub

This short film documents some of the most innovative projects that emerged from the work of NCCR Robotics, the Swiss-wide consortium coordinated from 2010 to 2022 by EPFL professor Dario Floreano and ETHZ professor Robert Riener, including other major research institutions across Switzerland. Shot over the course of six months in Lausanne, Geneva, Zurich, Wangen an der Aare, Leysin, Lugano, the documentary is a unique look at the state of the art of medical, educational and rescue robotics, and at the specific contributions that Swiss researchers have given to the field over the last decade. In addition to showing the robots in action, the film features extended interviews with top experts including Stéphanie Lacour, Silvestro Micera, Davide Scaramuzza, Robert Riener, Pierre Dillenbourg, Margarita Chli, Dario Floreano.


Variance Reduced Stochastic Gradient Descent with Neighbors

Neural Information Processing Systems

Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its slow convergence can be a computational bottleneck. Variance reduction techniques such as SAG, SVRG and SAGA have been proposed to overcome this weakness, achieving linear convergence. However, these methods are either based on computations of full gradients at pivot points, or on keeping per data point corrections in memory. Therefore speed-ups relative to SGD may need a minimal number of epochs in order to materialize. This paper investigates algorithms that can exploit neighborhood structure in the training data to share and re-use information about past stochastic gradients across data points, which offers advantages in the transient optimization phase. As a side-product we provide a unified convergence analysis for a family of variance reduction algorithms, which we call memorization algorithms. We provide experimental results supporting our theory.
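The "memorization" idea the abstract describes can be illustrated with SAGA, one of the named variance-reduction methods: keep one stored gradient per data point and correct each stochastic step with the table's running average. The least-squares problem, step size, and iteration count below are illustrative choices, not settings from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

def grad_i(w, i):
    # gradient of the i-th squared loss 0.5 * (a_i . w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(d)
table = np.array([grad_i(w, i) for i in range(n)])  # per-data-point memory
avg = table.mean(axis=0)
step = 0.01

for _ in range(50 * n):
    i = int(rng.integers(n))
    g = grad_i(w, i)
    w -= step * (g - table[i] + avg)   # variance-reduced update
    avg += (g - table[i]) / n          # keep the table average in sync
    table[i] = g                       # memorize the fresh gradient

full_grad = A.T @ (A @ w - b) / n
print(np.linalg.norm(full_grad))       # near zero after a few dozen epochs
```

Sharing one table entry across a neighborhood of similar data points, as the paper proposes, would shrink this O(n·d) memory and warm up the corrections faster in the transient phase.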


On the Global Linear Convergence of Frank-Wolfe Optimization Variants
Martin Jaggi (INRIA SIERRA project-team, Dept. of Computer Science, École Normale Supérieure, Paris, France; ETH Zürich, Switzerland)

Neural Information Processing Systems

The Frank-Wolfe (FW) optimization algorithm has lately re-gained popularity thanks in particular to its ability to nicely handle the structured constraints appearing in machine learning applications. However, its convergence rate is known to be slow (sublinear) when the solution lies at the boundary. A simple less-known fix is to add the possibility to take 'away steps' during optimization, an operation that importantly does not require a feasibility oracle. In this paper, we highlight and clarify several variants of the Frank-Wolfe optimization algorithm that have been successfully applied in practice: away-steps FW, pairwise FW, fully-corrective FW and Wolfe's minimum norm point algorithm, and prove for the first time that they all enjoy global linear convergence, under a weaker condition than strong convexity of the objective. The constant in the convergence rate has an elegant interpretation as the product of the (classical) condition number of the function with a novel geometric quantity that plays the role of a 'condition number' of the constraint set. We provide pointers to where these algorithms have made a difference in practice, in particular with the flow polytope, the marginal polytope and the base polytope for submodular optimization. The Frank-Wolfe algorithm [9] (also known as conditional gradient) is one of the earliest existing methods for constrained convex optimization, and has seen an impressive revival recently due to its nice properties compared to projected or proximal gradient methods, in particular for sparse optimization and machine learning applications. On the other hand, the classical projected gradient and proximal methods have been known to exhibit a very nice adaptive acceleration property, namely that the convergence rate becomes linear for strongly convex objectives, i.e. that the optimization error of the same algorithm after t iterations will decrease geometrically with O((1 − ρ)^t).
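The vanilla FW skeleton the variants build on can be sketched on the probability simplex, where the linear minimization oracle reduces to a coordinate argmin. The quadratic objective, dimensions, and iteration count are illustrative choices; away-step and pairwise variants change only which direction is taken, not this loop:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
M = rng.standard_normal((d, d))
Q = M @ M.T + np.eye(d)              # positive-definite quadratic objective
c = rng.standard_normal(d)

def grad(x):
    return Q @ x + c                 # gradient of 0.5 x'Qx + c'x

x = np.ones(d) / d                   # start at the simplex barycenter
for t in range(2000):
    g = grad(x)
    i = int(np.argmin(g))            # LMO over the simplex: best vertex
    s = np.zeros(d); s[i] = 1.0
    gamma = 2.0 / (t + 2)            # classical diminishing step size
    x = (1 - gamma) * x + gamma * s  # convex combination stays feasible

g = grad(x)
gap = g @ x - g.min()                # FW duality gap, bounds suboptimality
print(gap)
```

The printed duality gap is the standard stopping certificate: it is always nonnegative and upper-bounds the primal suboptimality, and it is exactly the quantity whose sublinear decay motivates the away-step fix when the optimum sits on the boundary.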