A Meta-Heuristic Load Balancer for Cloud Computing Systems
Sliwko, Leszek, Getov, Vladimir
This is the accepted author's version of the paper. The final published version is available in the 2015 IEEE 39th Annual Computer Software and Applications Conference. Abstract -- This paper presents a strategy to allocate services on a Cloud system without overloading nodes while maintaining system stability at minimum cost. We specify an abstract model of cloud resource utilization, including multiple types of resources as well as consideration of service migration costs. A prototype meta-heuristic load balancer is demonstrated, and experimental results are presented and discussed. We also propose a novel genetic algorithm in which the population is seeded with the outputs of other meta-heuristic algorithms. Modern applications are often designed so that they can simultaneously use resources from different computing environments. System components are no longer properties of individual machines, and in many respects they can be viewed as though they were deployed in a single application environment. Distributed computing differs from traditional computing in many ways.
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
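The seeding strategy described in the abstract above, initializing a genetic algorithm's population with solutions produced by other heuristics, can be sketched roughly as follows. This is a minimal illustration under assumed simplifications (a single scalar resource per service, a greedy best-fit heuristic standing in for the other meta-heuristics, and an imbalance-based fitness), not the authors' implementation:

```python
import random

def greedy_allocation(services, nodes):
    """Greedy best-fit seed: place each service on the currently least-loaded node."""
    load = [0.0] * nodes
    plan = []
    for demand in services:
        target = min(range(nodes), key=lambda n: load[n])
        load[target] += demand
        plan.append(target)
    return plan

def random_allocation(services, nodes):
    """Random individual: assign every service to a uniformly random node."""
    return [random.randrange(nodes) for _ in services]

def seeded_population(services, nodes, size):
    """Seed the GA population with heuristic solutions, then fill with random ones."""
    population = [greedy_allocation(services, nodes)]
    while len(population) < size:
        population.append(random_allocation(services, nodes))
    return population

def fitness(plan, services, nodes):
    """Negative load imbalance: flatter allocations score higher (maximum is 0)."""
    load = [0.0] * nodes
    for demand, node in zip(services, plan):
        load[node] += demand
    return -(max(load) - min(load))
```

In the paper's setting the seeds would come from several different meta-heuristic algorithms, giving the genetic search a head start over a purely random initial population.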
Prior-Aligned Meta-RL: Thompson Sampling with Learned Priors and Guarantees in Finite-Horizon MDPs
Zhou, Runlin, Chen, Chixiang, Chen, Elynn
We study meta-reinforcement learning in finite-horizon MDPs where related tasks share similar structures in their optimal action-value functions. Specifically, we posit a linear representation $Q^*_h(s,a)=\Phi_h(s,a)\,\theta^{(k)}_h$ and place a Gaussian meta-prior $\mathcal{N}(\theta^*_h,\Sigma^*_h)$ over the task-specific parameters $\theta^{(k)}_h$. Building on randomized value functions, we propose two Thompson-style algorithms: (i) MTSRL, which learns only the prior mean and performs posterior sampling with the learned mean and known covariance; and (ii) $\text{MTSRL}^{+}$, which additionally estimates the covariance and employs prior widening to control finite-sample estimation error. Further, we develop a prior-alignment technique that couples the posterior under the learned prior with a meta-oracle that knows the true prior, yielding meta-regret guarantees: we match prior-independent Thompson sampling in the small-task regime and strictly improve with more tasks once the prior is learned. Concretely, for known covariance we obtain $\tilde{O}(H^{4}S^{3/2}\sqrt{ANK})$ meta-regret, and with learned covariance $\tilde{O}(H^{4}S^{3/2}\sqrt{AN^3K})$; both improve on prior-independent behavior once $K \gtrsim \tilde{O}(H^2)$ and $K \gtrsim \tilde{O}(N^2H^2)$, respectively. Simulations on a stateful recommendation environment (with feature and prior misspecification) show that after brief exploration, MTSRL/$\text{MTSRL}^{+}$ track the meta-oracle and substantially outperform prior-independent RL and bandit-only meta-baselines. Our results give the first meta-regret guarantees for Thompson-style RL with learned Q-priors, and provide practical recipes (warm-start via RLSVI, OLS aggregation, covariance widening) for experiment-rich settings.
- Asia > Middle East > Jordan (0.04)
- North America > United States > New York (0.04)
- North America > United States > Maryland > Baltimore (0.04)
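The posterior-sampling core the abstract describes can be sketched in plain Bayesian linear-regression form. All function names here are hypothetical, the covariance is assumed known (the MTSRL setting), and the paper's RLSVI warm-start and prior-widening details are omitted:

```python
import numpy as np

def posterior(Phi, y, prior_mean, prior_cov, noise_var=1.0):
    """Gaussian posterior over theta for y ~ Phi @ theta + noise
    (conjugate Bayesian linear regression)."""
    prec = np.linalg.inv(prior_cov) + Phi.T @ Phi / noise_var
    cov = np.linalg.inv(prec)
    mean = cov @ (np.linalg.inv(prior_cov) @ prior_mean + Phi.T @ y / noise_var)
    return mean, cov

def thompson_action(phi_per_action, mean, cov, rng):
    """Sample theta from the posterior and act greedily w.r.t. the sampled Q-values."""
    theta = rng.multivariate_normal(mean, cov)
    q = phi_per_action @ theta
    return int(np.argmax(q))

def learned_prior_mean(task_estimates):
    """OLS-style aggregation: average per-task estimates of theta,
    i.e. meta-learning the prior mean across tasks."""
    return np.mean(task_estimates, axis=0)
```

Per task $k$ and stage $h$, the features $\Phi_h(s,a)$ play the role of the design matrix, and the learned mean replaces an uninformative prior once enough tasks have been observed.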
Navigation Pixie: Implementation and Empirical Study Toward On-demand Navigation Agents in Commercial Metaverse
Yanagawa, Hikari, Hiroi, Yuichi, Tokida, Satomi, Hatada, Yuji, Hiraki, Takefumi
While commercial metaverse platforms offer diverse user-generated content, they lack effective navigation assistance that can dynamically adapt to users' interests and intentions. Although previous research has investigated on-demand agents in controlled environments, implementation in commercial settings with diverse world configurations and platform constraints remains challenging. We present Navigation Pixie, an on-demand navigation agent employing a loosely coupled architecture that integrates structured spatial metadata with LLM-based natural language processing while minimizing platform dependencies, which enables experiments on the extensive user base of commercial metaverse platforms. Our cross-platform experiments on the commercial metaverse platform Cluster, with 99 PC-client and 94 VR-HMD participants, demonstrated that Navigation Pixie significantly increased dwell time and free exploration compared to fixed-route and no-agent conditions across both platforms. Subjective evaluations revealed consistent on-demand preferences in PC environments versus context-dependent social-perception advantages in VR-HMD. This research contributes to advancing VR interaction design through conversational spatial navigation agents, establishes cross-platform evaluation methodologies revealing environment-dependent effectiveness, and demonstrates an empirical experimentation framework for commercial metaverse platforms.
- North America > United States (0.05)
- Asia > Japan > Honshū > Kantō > Ibaraki Prefecture > Tsukuba (0.04)
- South America > Colombia > Meta Department > Villavicencio (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Human Computer Interaction > Interfaces > Virtual Reality (0.95)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.93)
Review for NeurIPS paper: RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning
Strengths: The paper is one of the first to study continual learning in recurrent settings and shows promising performance on the image captioning task. It proposes RATT, a novel approach to recurrent continual learning based on attentional masking, inspired by the earlier HAT method. In the proposed method, three masks (a_x, a_h, and a_s), applied to the embedding, hidden state, and vocabulary respectively, are introduced, and the ablation study shows that all three components contribute to the final continual learning performance. In addition to the proposed approach, the paper also explores adapting weight-regularization and knowledge-distillation-based approaches to the recurrent continual learning problem. In its experiments, the paper shows strong results, largely outperforming simple baselines (such as fine-tuning) and previous regularization- or distillation-based approaches (EWC and LwF).
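The three-mask mechanism described in the review can be sketched as follows. The sigmoid gating with a sharpening gain s follows the HAT-style recipe the review alludes to, but the exact parameterization here is an assumption for illustration, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def masked_step(x_emb, h, logits, e_x, e_h, e_s, s=50.0):
    """Apply RATT-style task masks: a_x gates the word embedding, a_h the
    recurrent hidden state, and a_s the vocabulary logits. The per-task
    embeddings e_* are learned; the gain s sharpens the sigmoid so the
    masks approach binary gates as training progresses."""
    a_x = sigmoid(s * e_x)   # embedding mask
    a_h = sigmoid(s * e_h)   # hidden-state mask
    a_s = sigmoid(s * e_s)   # vocabulary mask
    return x_emb * a_x, h * a_h, logits * a_s
```

Units whose mask saturates near zero are effectively reserved away from the current task, which is what protects earlier tasks from interference.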
Gerry Adams considers suing Meta over alleged use of his books to train AI
The former Sinn Féin president Gerry Adams is considering legal action against Meta because it may have used his books to train artificial intelligence. "Meta has used many of my books without my permission. I have placed the issue in the hands of my solicitor," he said. Sinn Féin said in a statement on Wednesday that the titles included its former leader's autobiography, Before the Dawn; a prison memoir, Cage Eleven; reflections on Northern Ireland's peace process, Hope and History; and other memoirs, a cookbook and a short story collection. Adams is the latest author to join a backlash against the parent company of Facebook, Instagram and WhatsApp.
- Law (1.00)
- Information Technology > Services (0.73)
'Meta has stolen books': authors to protest in London against AI trained using 'shadow library'
Novelists Kate Mosse and Tracy Chevalier as well as poet and former Royal Society of Literature chair Daljit Nagra will be among those in attendance outside the company's King's Cross office. Protesters will meet at Granary Square at 1.30pm and a letter to Meta from the Society of Authors (SoA) will be hand-delivered at 1.45pm. It will also be sent to Meta headquarters in the US. Earlier this year, a US court filing alleged that Meta CEO Mark Zuckerberg approved the company's use of a notorious "shadow library", LibGen, which contains more than 7.5 million books. Last month, the Atlantic republished a searchable database of the titles contained in LibGen, through which many authors discovered their works may have been used to train Meta's AI models.
- Law (0.79)
- Media > Publishing (0.41)
Optimizing Input Data Collection for Ranking and Selection
We study a ranking and selection (R&S) problem when all solutions share common parametric Bayesian input models updated with the data collected from multiple independent data-generating sources. Our objective is to identify the best system by designing a sequential sampling algorithm that collects input and simulation data given a budget. We adopt the most probable best (MPB) as the estimator of the optimum and show that its posterior probability of optimality converges to one at an exponential rate as the sampling budget increases. Assuming that the input parameters belong to a finite set, we characterize the $\epsilon$-optimal static sampling ratios for input and simulation data that maximize the convergence rate. Using these ratios as guidance, we propose the optimal sampling algorithm for R&S (OSAR) that achieves the $\epsilon$-optimal ratios almost surely in the limit. We further extend OSAR by adopting the kernel ridge regression to improve the simulation output mean prediction. This not only improves OSAR's finite-sample performance, but also lets us tackle the case where the input parameters lie in a continuous space with a strong consistency guarantee for finding the optimum. We numerically demonstrate that OSAR outperforms a state-of-the-art competitor.
- North America > United States > New Jersey > Middlesex County > Piscataway (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China > Hong Kong (0.04)
- Information Technology > Modeling & Simulation (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.45)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.45)
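The most probable best (MPB) estimator the abstract adopts can be illustrated with a small sketch, assuming (as the abstract does for the static analysis) that the input parameters belong to a finite set. The performance table and the representation of posterior samples as parameter indices are simplifications for illustration, not the paper's algorithm:

```python
import numpy as np
from collections import Counter

def most_probable_best(perf, theta_samples):
    """perf[i, t]: (estimated) mean performance of solution i under input
    parameter t, from the finite parameter set. theta_samples are indices
    drawn from the posterior over the input parameter. The MPB is the
    solution that is optimal for the largest share of posterior samples."""
    wins = Counter(int(np.argmax(perf[:, t])) for t in theta_samples)
    return wins.most_common(1)[0][0]
```

As more input data are collected, the posterior concentrates on the true parameter and the MPB's posterior probability of optimality converges to one, which is the exponential-rate result the abstract states.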
Review for NeurIPS paper: Implicit Regularization in Deep Learning May Not Be Explainable by Norms
Summary and Contributions: Reconstruction of a low-rank matrix from its linear measurements is a canonical problem in machine learning and signal processing. There has been an intense effort to establish theoretical guarantees and design efficient algorithms for solving these problems. The two most prominent methods are: (1) the convex optimization approach, i.e., nuclear-norm regularization; and (2) the non-convex factorization approach. In particular, the non-convex factorization approach has received increasing attention due to its reduced arithmetic and storage costs. Recently, Gunasekar et al. (2017) reported a surprising observation: the non-convex factorization approach (when solved with gradient descent) generalizes (i.e., recovers the low-rank matrix of interest) even when the factors U and V are full dimensional (i.e., not tall, so UV' does not impose an explicit low-rank structure).
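The observation of Gunasekar et al. (2017) recalled above can be reproduced in miniature: gradient descent on a full-dimensional factorization UV', started from a small initialization, completes a partially observed matrix with the low-rank (minimum-nuclear-norm) solution even though no rank constraint is imposed. The specific matrix, step size, and iteration count below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0}  # the rank-1 completion puts 4 at (1, 1)

# Full-dimensional factors: UV' imposes no explicit low-rank structure.
U = 1e-3 * rng.standard_normal((2, 2))
V = 1e-3 * rng.standard_normal((2, 2))

for _ in range(20000):
    X = U @ V.T
    G = np.zeros((2, 2))
    for (i, j), m in obs.items():
        G[i, j] = X[i, j] - m  # gradient of the squared loss on observed entries only
    U, V = U - 0.05 * (G @ V), V - 0.05 * (G.T @ U)

X = U @ V.T  # the unobserved entry X[1, 1] lands near 4, the nuclear-norm-minimal value
```

With a larger initialization scale the recovered completion drifts away from the nuclear-norm solution, which is part of why the norm-based explanation of this implicit bias is debated in the paper under review.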
Review for NeurIPS paper: Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
Additional Feedback: It would be interesting to see a discussion of how this work compares to classes of knowledge bases that enable tractable abductive reasoning [1]. For example, is this result a special case of some known class/language? I also wanted to address the authors' request for specific references "that might cast doubt on the novelty of our work". Sorry for not being more concrete, but here are some specific references: the work of David Eppstein. The polynomial-time enumeration algorithm proposed for Eq. 16 is basically subset sum, where we enumerate all subsets that sum to less than some threshold.
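The reviewer's subset-sum remark can be made concrete with a small sketch of threshold-bounded subset enumeration with polynomial delay: each recursive call emits a subset before branching, and sorting the weights lets infeasible branches be cut immediately. This illustrates the general technique, not the paper's Eq. 16:

```python
def subsets_below(weights, threshold):
    """Enumerate all index subsets (into the sorted weights) whose total
    weight stays strictly below `threshold`. Every recursive call yields
    at least one subset before exploring extensions, so the delay between
    consecutive outputs is polynomial."""
    weights = sorted(weights)  # ascending order enables early pruning

    def rec(start, chosen, total):
        yield list(chosen)
        for i in range(start, len(weights)):
            if total + weights[i] >= threshold:
                break  # sorted order: no later item can fit either
            chosen.append(i)
            yield from rec(i + 1, chosen, total + weights[i])
            chosen.pop()

    return list(rec(0, [], 0.0))
```

For weights [1, 2, 3] and threshold 4, this emits the empty set plus {1}, {2}, {3}, and {1, 2}, skipping every subset whose sum reaches 4.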
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.40)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.40)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.40)
Review for NeurIPS paper: Using noise to probe recurrent neural network structure and prune synapses
Additional Feedback: I provided these comments in the discussion to argue for its acceptance: I found that the authors' responses addressed the issues of "symmetric connections" and "biological plausibility" reasonably well. Both reviewers who gave a "5" agreed that the theoretical derivation is correct; they mostly questioned the biological plausibility or applicability. While symmetric connections are not necessarily biologically plausible, many important models and theoretical analyses, for example the works of Hopfield, Sompolinsky, etc., have made such simplifying assumptions and in the end produced work that has been influential in theoretical neuroscience. I liked the paper because the idea is interesting, novel, and innovative: the notion that learning and pruning can be local, using noise as a probe, has not been proposed or explored before.
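The noise-as-probe idea the review praises can be illustrated in a simplified linear-Gaussian setting. For symmetric weights W and dynamics dx = (W - I)x dt + σ dB, the stationary covariance solves a Lyapunov equation and equals Σ = (σ²/2)(I - W)⁻¹, so the weights, and hence which synapses are weak enough to prune, can be read off from the observable covariance. The matrices and pruning threshold below are illustrative, and the paper's actual derivation is more general:

```python
import numpy as np

# Symmetric recurrent weights; (I - W) must be positive definite for stability.
W = np.array([[0.2, 0.1,  0.0 ],
              [0.1, 0.2,  0.05],
              [0.0, 0.05, 0.2 ]])
sigma = 1.0

# Stationary covariance of the noise-driven dynamics (Lyapunov solution).
Sigma = 0.5 * sigma**2 * np.linalg.inv(np.eye(3) - W)

# Probe: invert the observable covariance to recover the weights locally ...
W_hat = np.eye(3) - 0.5 * sigma**2 * np.linalg.inv(Sigma)

# ... and prune synapses whose recovered weight falls below a threshold.
W_pruned = np.where(np.abs(W_hat) > 0.08, W_hat, 0.0)
```

In practice Σ would be estimated from noisy activity rather than computed in closed form; the point is that the probe uses only locally observable statistics.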