Tree Mover's Distance: Bridging Graph Metrics and Stability of Graph Neural Networks
Understanding generalization and robustness of machine learning models fundamentally relies on assuming an appropriate metric on the data space. Identifying such a metric is particularly challenging for non-Euclidean data such as graphs. Here, we propose a pseudometric for attributed graphs, the Tree Mover's Distance (TMD), and study its relation to generalization. Via a hierarchical optimal transport problem, TMD reflects the local distribution of node attributes as well as the distribution of local computation trees, which are known to be decisive for the learning behavior of graph neural networks (GNNs). First, we show that TMD captures properties relevant for graph classification: a simple TMD-SVM can perform competitively with standard GNNs. Second, we relate TMD to generalization of GNNs under distribution shifts, and show that it correlates well with performance drop under such shifts.
- South America > Brazil (0.04)
- North America > United States (0.04)
- Asia > India (0.04)
- (47 more...)
Reviews: Supervised Word Mover's Distance
Overall the paper reads like a nice combination of existing tricks, and provides very convincing experimental results. Strengths of the paper are its simplicity and a relatively straightforward idea that is nevertheless not trivial to implement and test. The experimental section is therefore a strong part of this paper. Things to improve: handle better the interplay between regularized/not regularized formulations, be more rigorous with the maths (computations/notations are a bit sloppy), and ideally provide an algorithmic box to see more clearly into what the authors propose. A few minor comments: - In Eq. 1, the Euclidean distance between word embeddings is used as a cost; in Eq. 6, for the purpose of Mahalanobis metric learning, that cost becomes the squared Euclidean metric (and thus what is usually referred to as the 2-Wasserstein distance).
- North America > United States > Indiana > Boone County > Lebanon (0.07)
- Asia > Middle East > Lebanon (0.07)
Deep Reinforcement Learning Enabled Persistent Surveillance with Energy-Aware UAV-UGV Systems for Disaster Management Applications
Mondal, Md Safwan, Ramasamy, Subramanian, Bhounsule, Pranav
Integrating Unmanned Aerial Vehicles (UAVs) with Unmanned Ground Vehicles (UGVs) provides an effective solution for persistent surveillance in disaster management. UAVs excel at covering large areas rapidly, but their range is limited by battery capacity. UGVs, though slower, can carry larger batteries for extended missions. By using UGVs as mobile recharging stations, UAVs can extend mission duration through periodic refueling, leveraging the complementary strengths of both systems. To optimize this energy-aware UAV-UGV cooperative routing problem, we propose a planning framework that determines optimal routes and recharging points between a UAV and a UGV. Our solution employs a deep reinforcement learning (DRL) framework built on an encoder-decoder transformer architecture with multi-head attention mechanisms. This architecture enables the model to sequentially select actions for visiting mission points and coordinating recharging rendezvous between the UAV and UGV. The DRL model is trained to minimize the age periods (the time gap between consecutive visits) of mission points, ensuring effective surveillance. We evaluate the framework across various problem sizes and distributions, comparing its performance against heuristic methods and an existing learning-based model. Results show that our approach consistently outperforms these baselines in both solution quality and runtime. Additionally, we demonstrate the DRL policy's applicability in a real-world disaster scenario as a case study and explore its potential for online mission planning to handle dynamic changes. Adapting the DRL policy for priority-driven surveillance highlights the model's generalizability for real-time disaster response.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > Texas (0.04)
- Asia > Middle East > Republic of Türkiye > Aksaray Province > Aksaray (0.04)
- (9 more...)
- Transportation (1.00)
- Government > Military (0.88)
- Aerospace & Defense (0.87)
- (4 more...)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.88)
Reviews: Adversarial Text Generation via Feature-Mover's Distance
The authors introduce a new variation of GAN that is claimed to be suitable for text generation. The proposed method relies on a new optimal transport–based distance metric on the feature space learned by the "discriminator". The idea is sound and seems to be novel. The text is well written and easy to follow. Overall, I like the ideas in the paper, but I think the experiments are not robust, which makes it difficult to judge whether the current method represents a real advance over previous GAN models for text generation. Some questions/comments about the experiments: (1) For generic text generation, why not use datasets that have been used in other works, such as Penn Treebank or IMDB? (2) For generic text generation, why have the authors not compared their results with MaskGAN?
Model Parameters and Hyperparameters in Machine Learning -- What is the difference?
For example, suppose you want to build a simple linear regression model using an m-dimensional training data set. If the model uses the gradient descent algorithm to minimize the objective function in order to determine the weights w_0, w_1, w_2, …, w_m, then we can have an optimizer such as GradientDescent(eta, n_iter). Here eta (learning rate) and n_iter (number of iterations) are the hyperparameters that would have to be adjusted in order to obtain the best values for the model parameters w_0, w_1, w_2, …, w_m. For more information about this, see the following example: Machine Learning: Python Linear Regression Estimator Using Gradient Descent. Here, n_iter is the number of iterations, eta0 is the learning rate, and random_state is the seed of the pseudo-random number generator to use when shuffling the data.
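The parameter/hyperparameter split above can be made concrete with a minimal sketch. The function name `gradient_descent_lr` and its defaults are illustrative (they are not from the linked post): `eta`, `n_iter`, and `random_state` are hyperparameters you choose up front, while the weights `w` are model parameters the optimizer learns.

```python
import numpy as np

def gradient_descent_lr(X, y, eta=0.01, n_iter=1000, random_state=42):
    """Fit y ~ w_0 + X @ w[1:] by batch gradient descent on mean squared error.
    eta, n_iter, random_state are hyperparameters; w holds the model parameters."""
    rng = np.random.default_rng(random_state)
    n, m = X.shape
    w = rng.normal(scale=0.01, size=m + 1)   # w_0 (bias) plus w_1..w_m
    Xb = np.hstack([np.ones((n, 1)), X])     # prepend a column of ones for the bias
    for _ in range(n_iter):
        grad = (2.0 / n) * Xb.T @ (Xb @ w - y)  # gradient of the MSE objective
        w -= eta * grad                          # parameter update, scaled by eta
    return w

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 1.0 + 2.0 * X[:, 0]                    # true parameters: w_0 = 1, w_1 = 2
w = gradient_descent_lr(X, y, eta=0.05, n_iter=2000)
```

Changing `eta` or `n_iter` changes how (and whether) the optimizer converges, but the quantities being learned are still only the entries of `w`.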
Comparing Distance Measurements with Python and SciPy
At the core of cluster analysis is the concept of measuring distances between a variety of different data point dimensions. For example, when considering k-means clustering, there is a need to measure a) distances between individual data point dimensions and the corresponding cluster centroid dimensions of all clusters, and b) distances between cluster centroid dimensions and all resulting cluster member data point dimensions. While k-means, the simplest and most prominent clustering algorithm, generally uses Euclidean distance as its similarity distance measurement, contriving innovative or variant clustering algorithms which, among other alterations, utilize different distance measurements is not a stretch. Cosine similarity, by contrast, measures the angle between two vectors rather than the distance between their endpoints. It is thus a judgment of orientation and not magnitude: two vectors with the same orientation have a cosine similarity of 1, two vectors at 90° have a similarity of 0, and two vectors diametrically opposed have a similarity of -1, independent of their magnitude.
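The orientation-versus-magnitude contrast can be checked directly with SciPy. Note that `scipy.spatial.distance.cosine` returns the cosine *distance*, i.e. 1 minus the cosine similarity, so parallel vectors give 0 and opposed vectors give 2 (the vectors below are arbitrary examples):

```python
import numpy as np
from scipy.spatial import distance

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])   # same orientation as a, twice the magnitude

d_euc = distance.euclidean(a, b)   # sensitive to magnitude: sqrt(14) ≈ 3.742
d_cos = distance.cosine(a, b)      # 1 - similarity: ≈ 0.0 (same orientation)
d_opp = distance.cosine(a, -a)     # diametrically opposed: 1 - (-1) = 2.0
```

Doubling a vector's length moves its Euclidean distance but leaves its cosine distance untouched, which is exactly the "orientation, not magnitude" property described above.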
I had been testing out Vi, a set of $249 Bluetooth running headphones with its own built-in AI assistant and biometric tracking features. After a convoluted series of events in which I was offered a potentially illegal entry to the Brooklyn Half Marathon a week before the race, I found my adventure: I decided to run my own 13.1 miles in the Prospect Park Loop with nothing but the AI headphones to guide me, using Vi for a crash training course to prep in less than a week. Vi doesn't offer much more than other running apps I've used: It tracks the distance you run, measures your heart rate, and offers some real-time coaching direction to fine-tune your step rate to find your ideal pace, which it calls your "Comfort Zone" -- but it leaves much to be desired as a next-gen personal trainer. It currently has no dedicated feature to set specific goals, so users prepping for races like me have no guide to train for big events or set more defined goals than just fine-tuning their running style.
- Health & Medicine (0.73)
- Leisure & Entertainment > Sports > Running (0.63)
- Information Technology (0.49)
SheffieldML/vargplvm
This repository contains both MATLAB and R code for implementing the Bayesian GP-LVM. The MATLAB code is in the subdirectory vargplvm, the R code in vargplvmR. For a quick description and sample videos / demos check: http://git.io/A3Uv The Bayesian GP-LVM (Titsias and Lawrence, 2010) is an extension of the traditional GP-LVM where the latent space is approximately marginalised out in a variational fashion (hence the prefix 'vargplvm'). Let us denote \mathbf{Y} as a matrix of observations (here called outputs) with dimensions n \times p, where the n rows correspond to datapoints and the p columns to dimensions.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.06)
- Europe > United Kingdom (0.05)