Kernel functions based on triplet comparisons

Neural Information Processing Systems

Given only information in the form of similarity triplets "Object A is more similar to object B than to object C" about a data set, we propose two ways of defining a kernel function on the data set. While previous approaches construct a low-dimensional Euclidean embedding of the data set that reflects the given similarity triplets, we aim at defining kernel functions that correspond to high-dimensional embeddings. These kernel functions can subsequently be used to apply any kernel method to the data set.
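
To make the triplet-to-kernel idea concrete, here is a minimal Python sketch, not the paper's exact construction: each object is mapped to a signed feature vector indexed by pairs of other objects, filled in from the triplet answers it appears in, and the kernel is the inner product of these vectors. The function names triplet_feature_map and triplet_kernel are illustrative.

import numpy as np
from itertools import combinations

def triplet_feature_map(triplets, n_objects):
    # Map each object a to a signed vector indexed by unordered pairs {b, c}:
    # +1 if a was judged more similar to the smaller-indexed object of the pair,
    # -1 for the opposite answer, 0 if that triplet was never asked.
    pairs = list(combinations(range(n_objects), 2))
    pair_index = {p: i for i, p in enumerate(pairs)}
    Phi = np.zeros((n_objects, len(pairs)))
    for a, b, c in triplets:  # triplet means "a is more similar to b than to c"
        key = (min(b, c), max(b, c))
        Phi[a, pair_index[key]] += 1.0 if b < c else -1.0
    return Phi

def triplet_kernel(triplets, n_objects):
    # Inner products of the feature vectors give a positive semidefinite kernel matrix.
    Phi = triplet_feature_map(triplets, n_objects)
    return Phi @ Phi.T

For instance, triplet_kernel([(0, 1, 2), (2, 1, 0)], 3) yields a 3x3 positive semidefinite matrix that any kernel method (SVM, kernel PCA, etc.) can consume, without ever computing an explicit low-dimensional embedding.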


Inverse Filtering for Hidden Markov Models

Neural Information Processing Systems

This paper considers a number of related inverse filtering problems for hidden Markov models (HMMs). In particular, given a sequence of state posteriors and the system dynamics: i) estimate the corresponding sequence of observations, ii) estimate the observation likelihoods, and iii) jointly estimate the observation likelihoods and the observation sequence. We show how to avoid a computationally expensive mixed integer linear program (MILP) by exploiting the algebraic structure of the HMM filter using simple linear algebra operations, and provide conditions under which the quantities can be uniquely reconstructed. We also propose a solution to the more general case where the posteriors are noisily observed. Finally, the proposed inverse filtering algorithms are evaluated on real-world polysomnographic data used for automatic sleep segmentation.
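
As a rough illustration of the algebraic structure being exploited, the Python sketch below recovers the per-time observation likelihood vectors (up to a positive scale) from consecutive posteriors and known dynamics. This is only the core identity of the HMM filter recursion, not the paper's full algorithm, which also covers uniqueness conditions, the MILP-free joint estimation, and noisy posteriors.

import numpy as np

def recover_likelihoods(posteriors, P):
    # HMM filter recursion: pi_k is proportional to b_k * (P^T pi_{k-1}) elementwise,
    # where b_k holds the observation likelihoods b(y_k | state) at time k.
    # Hence b_k is proportional to pi_k / (P^T pi_{k-1}), recovering b_k up to scale.
    B = []
    for k in range(1, len(posteriors)):
        pred = P.T @ posteriors[k - 1]   # one-step predicted state distribution
        b = posteriors[k] / pred         # defined when the prediction has no zero entries
        B.append(b / b.sum())            # fix the unknown scale by normalizing (illustrative choice)
    return np.array(B)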


If Ted Talks are getting shorter, what does that say about our attention spans?

The Guardian

Age: Ted started in 1984. And has Ted been talking ever since? I know, and they do the inspirational online talks. Correct, under the slogan "Ideas change everything". She was talking at the Hay festival, in Wales.


Drone war, ground offensive continue despite new Russia-Ukraine peace push

Al Jazeera

Russia and Ukraine have launched a wave of drone attacks against each other overnight, even as Moscow claimed it was finalising a peace proposal to end the war. Ukrainian air force officials said on Tuesday that Russia deployed 60 drones across multiple regions through the night, injuring 10 people. Kyiv's air defences intercepted 43 of them – 35 were shot down while eight were diverted using electronic warfare systems. In Dnipropetrovsk, central Ukraine, Governor Serhiy Lysak reported damage to residential properties and an agricultural site after Russian drones led to fires during the night. In Kherson, a southern city frequently hit by Russian strikes, a drone attack on Tuesday morning wounded a 59-year-old man and six municipal workers, officials said.



Optimization for Approximate Submodularity

Neural Information Processing Systems

We consider the problem of maximizing a submodular function when given access only to an approximate version of it. Submodular functions are heavily studied in a wide variety of disciplines since they model many real-world phenomena and are amenable to optimization. In many cases, however, the phenomena we observe are only approximately submodular, and the optimization guarantees cease to hold. In this paper we describe a technique that yields strong guarantees for maximization of monotone submodular functions from approximate surrogates under cardinality and intersection-of-matroid constraints. In particular, we show tight guarantees for maximization under a cardinality constraint and a 1/(1 + P) approximation under an intersection of P matroids.
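
For context, the natural baseline here is the plain greedy algorithm run directly on the approximate surrogate; a minimal Python sketch under a cardinality constraint follows. The paper's technique adds further machinery on top of this to recover its guarantees, and the names F, ground_set, and greedy_cardinality are illustrative.

def greedy_cardinality(F, ground_set, k):
    # Plain greedy: repeatedly add the element with the largest marginal gain
    # under the surrogate F until k elements have been chosen.
    S = set()
    for _ in range(k):
        candidates = ground_set - S
        if not candidates:
            break
        best = max(candidates, key=lambda e: F(S | {e}) - F(S))
        S.add(best)
    return S

When F is exactly submodular and monotone, this greedy routine achieves the classical (1 - 1/e) approximation; when F is only an approximate surrogate, that guarantee can degrade, which is the gap the paper addresses.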


Delivery robot autonomously lifts, transports heavy cargo

FOX News

Tech expert Kurt Knutsson discusses LEVA, the autonomous robot that walks, rolls and lifts 187 pounds of cargo for all-terrain deliveries. Autonomous delivery robots are already starting to change the way goods move around cities and warehouses, but most still need humans to load and unload their cargo. That's where LEVA comes in. Developed by engineers and designers from ETH Zurich and other Swiss universities, LEVA is a robot that can not only navigate tricky environments but also lift and carry heavy boxes all on its own, making deliveries smoother and more efficient.


L4: Practical loss-based stepsize adaptation for deep learning

Neural Information Processing Systems

We propose a stepsize adaptation scheme for stochastic gradient descent. It operates directly on the loss function and rescales the gradient so as to make a fixed predicted progress on the loss. We demonstrate its capabilities by conclusively improving the performance of the Adam and Momentum optimizers. The enhanced optimizers with default hyperparameters consistently outperform their constant-stepsize counterparts, even the best ones, without a measurable increase in computational cost. The performance is validated on multiple architectures, including dense networks, CNNs, ResNets, and the recurrent Differentiable Neural Computer, on the classical datasets MNIST, Fashion-MNIST, CIFAR-10, and others.
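
A minimal Python sketch of the loss-based stepsize idea, assuming the update direction comes from a base optimizer such as Adam or Momentum and that the caller tracks the running minimum of observed losses; the function name l4_step and the fraction alpha are illustrative, not the authors' reference implementation.

import numpy as np

def l4_step(params, loss_value, grad, direction, loss_min, alpha=0.15):
    # Linearized decrease of the loss for a unit step along `direction`.
    predicted_drop = float(np.dot(grad, direction))
    # Choose the stepsize so that the predicted progress equals a fixed fraction
    # `alpha` of the current gap to the smallest loss observed so far (`loss_min`).
    eta = alpha * (loss_value - loss_min) / (predicted_drop + 1e-12)
    return params - eta * direction

Because the stepsize is tied to the remaining loss gap rather than fixed in advance, the same default alpha can be reused across architectures, which is the sense in which the scheme removes stepsize tuning.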



Revisiting $(\epsilon, \gamma, \tau)$-similarity learning for domain adaptation

Neural Information Processing Systems

Similarity learning is an active research area in machine learning that tackles the problem of finding a similarity function tailored to an observable data sample in order to achieve efficient classification. This learning scenario has generally been formalized by means of an (ε, γ, τ)-good similarity learning framework in the context of supervised classification and has been shown to have strong theoretical guarantees. In this paper, we propose to extend the theoretical analysis of similarity learning to the domain adaptation setting, a particular situation occurring when the similarity is learned and then deployed on samples following different probability distributions. We give a new definition of an (ε, γ)-good similarity for domain adaptation and prove several results quantifying the performance of a similarity function on a target domain after it has been trained on a source domain. We particularly show that if the source distribution dominates the target one, then fundamentally new domain adaptation learning bounds can be proved.
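
For reference, one standard form of the (ε, γ, τ)-goodness condition from the similarity learning literature, stated here with an indicator R of "reasonable" points (the framework is also stated with weighting functions or a hinge-loss variant, so this is a sketch rather than the paper's exact definition):

\[
\Pr_{(x,y)\sim P}\!\left[\ \mathbb{E}_{(x',y')\sim P}\!\left[\, y\, y'\, K(x,x') \,\middle|\, R(x')=1 \,\right] \ \ge\ \gamma \ \right] \ \ge\ 1-\epsilon,
\qquad
\Pr_{x'\sim P}\!\left[ R(x')=1 \right] \ \ge\ \tau .
\]

Here K is the learned similarity: at least a 1 - ε probability mass of examples must be, on average, γ more similar to reasonable points sharing their label than to reasonable points of the opposite label, and the set of reasonable points must have probability mass at least τ. The paper's contribution is to control how such a guarantee transfers when training and deployment distributions differ.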