
### Working with Sparse Matrix Factorization part 1 (Machine Learning)

Abstract: The problem of approximating a dense matrix by a product of sparse factors is fundamental to many signal processing and machine learning tasks. It can be decomposed into two subproblems: finding the positions of the non-zero coefficients in the sparse factors, and determining their values. While the first step is usually seen as the most challenging one due to its combinatorial nature, this paper focuses on the second step, referred to as sparse matrix approximation with fixed support. First, we show its NP-hardness, while also presenting a nontrivial family of supports that makes the problem practically tractable with a dedicated algorithm. Then, we investigate the landscape of its natural optimization formulation, proving the absence of spurious local valleys and spurious local minima, whose presence could prevent local optimization methods from achieving global optimality.
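As a toy illustration of the fixed-support subproblem (not the paper's dedicated algorithm; function names and defaults below are ours), one can alternate restricted least-squares solves over the prescribed sparsity patterns:

```python
import numpy as np

def fixed_support_factorization(A, supp_X, supp_Y, n_iter=50, seed=0):
    """Alternating least squares for min ||A - X @ Y||_F^2 where the
    sparsity patterns (supports) of X and Y are fixed in advance.

    supp_X: boolean mask of shape (m, r); supp_Y: boolean mask (r, n).
    Illustrative sketch only, not the paper's dedicated algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = rng.standard_normal(supp_X.shape) * supp_X
    Y = rng.standard_normal(supp_Y.shape) * supp_Y
    for _ in range(n_iter):
        # With Y fixed, each row of X is an independent least-squares
        # problem restricted to that row's allowed coefficients.
        for i in range(m):
            s = supp_X[i]
            if s.any():
                X[i, s] = np.linalg.lstsq(Y[s].T, A[i], rcond=None)[0]
        # Symmetrically, with X fixed, solve for each column of Y.
        for j in range(n):
            s = supp_Y[:, j]
            if s.any():
                Y[s, j] = np.linalg.lstsq(X[:, s], A[:, j], rcond=None)[0]
    return X, Y
```

With full supports this reduces to ordinary alternating least squares and recovers a rank-r approximation; the combinatorial difficulty discussed in the abstract comes entirely from how the masks interact.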

### 14 Loss functions you can use for Regression

In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or the values of one or more variables onto a real number, intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. The kind of loss function you should use depends on the kind of problem you are working on, i.e., regression or classification.
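For the regression case, three of the most common choices can be sketched in a few lines of NumPy (a minimal illustration; the function names are ours):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: heavily penalizes large residuals."""
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    """Mean absolute error: robust to outliers, non-smooth at zero."""
    return np.mean(np.abs(y_true - y_pred))

def huber(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large
    ones, trading off MSE's sensitivity and MAE's robustness."""
    r = np.abs(y_true - y_pred)
    return np.mean(np.where(r <= delta,
                            0.5 * r ** 2,
                            delta * (r - 0.5 * delta)))
```

The choice matters in practice: a single outlier can dominate MSE, while MAE and Huber keep its influence bounded.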

### Startups Want to Help Airlines Prevent Tech Meltdowns

The meltdowns at Southwest and the FAA, just weeks apart, stemmed from weaknesses in systems scheduled for upgrades--underscoring the urgent need to prioritize efforts to modernize those systems, as well as the consequences of waiting to do so, the consultants said. While starting over wholesale with new information-technology infrastructure is likely unrealistic, consultants said, the sector should take advantage of cloud-based tools that can integrate the fire hose of real-time data driving airline operations. Newer, cloud-based infrastructure and databases can scale horizontally--meaning they can draw on distributed computing resources across the internet as needed. This design allows information to flow more freely, reducing the likelihood of glitches that cascade into systemwide shutdowns. Older, legacy systems are limited by the amount of computing power available to them.

### Mosaic Data Science Combats Climate Change & Accelerates ESG Efforts With Custom Artificial Intelligence & Machine Learning Solutions

LEESBURG, Va., Jan. 09, 2023 (GLOBE NEWSWIRE) -- Mosaic Data Science contributed machine learning algorithm development & deployment services to help a leading power firm automate the process of quantifying the switch to renewable energy portfolios from traditional energy sources while exploring the costs and tradeoffs of said offerings for their business-to-business customers. The solution is designed for enterprises that require power to a diverse set of business functions, such as industrial warehouses, production plants, and related physical infrastructure. The application relies on a highly scalable, custom mathematical optimization algorithm to select the products to eliminate or offset the emissions required to reach the GHG targets. Mosaic's data scientists collaborated with key stakeholders to lay out requirements for an interactive dashboard and the algorithms driving the portfolio recommendations. In the past, this had been a manual, error-prone, and time-consuming effort as sales personnel had to piece together a portfolio to cover energy usage across tens of thousands of service locations for a customer over a multi-decade window.

### Progress in Image Synthesis methods part 2 (Machine Learning + Computer Vision)

Abstract: Prior work has extensively studied the latent space structure of GANs for unconditional image synthesis, enabling global editing of generated images via the unsupervised discovery of interpretable latent directions. However, the discovery of latent directions for conditional GANs for semantic image synthesis (SIS) has remained unexplored. In this work, we specifically focus on addressing this gap. We propose a novel optimization method for finding spatially disentangled class-specific directions in the latent space of pretrained SIS models. We show that the latent directions found by our method can effectively control the local appearance of semantic classes, e.g., changing their internal structure, texture, or color independently of one another.

### New Developments in Convex Optimization part 2 (Machine Learning)

Abstract: In this paper, we study randomized and cyclic coordinate descent for convex unconstrained optimization problems. We improve the known convergence rates in some cases by using the numerical semidefinite programming performance estimation method.

Abstract: Convex function constrained optimization has received growing research interest lately. For a special convex problem that has strongly convex function constraints, we develop a new accelerated primal-dual first-order method that obtains an $\mathcal{O}(1/\sqrt{\varepsilon})$ complexity bound, improving on the $\mathcal{O}(1/\varepsilon)$ result for the state-of-the-art first-order methods. The key ingredient in our development is a set of novel techniques to progressively estimate the strong convexity of the Lagrangian function, which enables adaptive step-size selection and faster convergence performance.
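The coordinate descent methods studied in the first abstract are easiest to see on a quadratic objective, where the exact minimizer along each coordinate is available in closed form (an illustrative sketch, not the paper's method):

```python
import numpy as np

def randomized_coordinate_descent(A, b, n_iter=2000, seed=0):
    """Minimize f(x) = 0.5 * x^T A x - b^T x (A symmetric positive
    definite) by exact minimization along one randomly chosen
    coordinate at a time.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(n_iter):
        i = rng.integers(len(b))
        # Partial derivative of f along coordinate i: (A @ x - b)[i].
        g_i = A[i] @ x - b[i]
        # Exact line search along e_i gives the step g_i / A[i, i].
        x[i] -= g_i / A[i, i]
    return x
```

Each update touches a single coordinate, which is what makes the method cheap per iteration; the cyclic variant simply visits the coordinates in a fixed order instead of sampling them.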

### Metaheuristic optimization with the Differential Evolution algorithm

Learn the theory of the Differential Evolution algorithm, its Python implementation, and how and why it can help you solve complex real-world optimization problems. This article has been written with Salvatore Guastella. Optimization is a pillar of data science. If you think about it, under the hood of every machine learning algorithm (ranging from basic linear regression to the most complex neural network architectures), an optimization problem is solved. Moreover, in many real-world problems the goal is to find the values of one or more decision variables that minimize (or maximize) a quantity of interest while satisfying certain constraints. A few examples are portfolio optimization in finance, profit maximization of ad campaigns, energy efficiency in power plants, and shipment cost minimization in logistics (refer to this Medium article [1] in our Eni digiTALKS channel for an interesting example).
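A minimal DE/rand/1/bin variant can be sketched as follows (an illustrative implementation; the parameter defaults are conventional choices, not values from the article):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           n_gen=200, seed=0):
    """Minimal DE/rand/1/bin sketch: mutation via scaled difference
    vectors, binomial crossover, and greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fitness = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # Binomial crossover; force at least one gene from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:  # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]
```

Because the mutation step uses differences between population members, the search automatically scales its moves to the current spread of the population, which is a large part of DE's practical appeal.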

### Automated Dynamic Algorithm Configuration

The performance of an algorithm often critically depends on its parameter configuration. While a variety of automated algorithm configuration methods have been proposed to relieve users from the tedious and error-prone task of manually tuning parameters, there is still a lot of untapped potential as the learned configuration is static, i.e., parameter settings remain fixed throughout the run. However, it has been shown that some algorithm parameters are best adjusted dynamically during execution. Thus far, this is most commonly achieved through hand-crafted heuristics. A promising recent alternative is to automatically learn such dynamic parameter adaptation policies from data. In this article, we give the first comprehensive account of this new field of automated dynamic algorithm configuration (DAC), present a series of recent advances, and provide a solid foundation for future research in this field. Specifically, we (i) situate DAC in the broader historical context of AI research; (ii) formalize DAC as a computational problem; (iii) identify the methods used in prior art to tackle this problem; and (iv) conduct empirical case studies for using DAC in evolutionary optimization, AI planning, and machine learning.
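The static-versus-dynamic distinction can be made concrete with a toy interface in which a configuration is a policy mapping the step index to a parameter value, so a static configuration is just a constant policy (our own illustration, not an example from the article):

```python
def grad_descent(grad, x0, stepsize_policy, n_steps=100):
    """Gradient descent where the stepsize is a *policy*: a function
    of the step index t. A static configuration corresponds to a
    constant policy; a dynamic one adapts the parameter during the
    run. Toy illustration of the DAC setting, not a DAC method."""
    x = x0
    for t in range(n_steps):
        x = x - stepsize_policy(t) * grad(x)
    return x

# f(x) = x^2, with gradient 2x; compare a static and a decaying policy.
grad = lambda x: 2 * x
x_static = grad_descent(grad, 10.0, lambda t: 0.4)
x_dynamic = grad_descent(grad, 10.0, lambda t: 0.4 / (1 + 0.01 * t))
```

DAC methods go further by learning such policies from data and by conditioning them on the observed state of the run rather than only on the step index.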

### The Grey Wolf Optimizer - Teaching & Academics

Search Algorithms and Optimization techniques are the engines of most Artificial Intelligence techniques and Data Science. There is no doubt that the Grey Wolf Optimizer is one of the most recent, well-regarded and widely-used AI search techniques. A lot of scientists and practitioners use search and optimization algorithms without understanding their internal structure. However, understanding the internal structure and mechanism of such AI problem-solving techniques will allow them to solve problems more efficiently. This also allows them to tune, tweak, and even design new algorithms for different projects.
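The core mechanism of the Grey Wolf Optimizer, in which the three best wolves (alpha, beta, delta) pull the rest of the pack toward promising regions, can be sketched as follows (a minimal illustrative implementation, not a reference one):

```python
import numpy as np

def grey_wolf_optimizer(f, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimal GWO sketch: each wolf moves to the average of three
    positions suggested by the alpha, beta, and delta wolves, with
    an exploration parameter `a` decaying linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = lo + rng.random((n_wolves, dim)) * (hi - lo)
    for it in range(n_iter):
        scores = np.array([f(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(scores)[:3]]
        a = 2 - 2 * it / n_iter  # exploration decays over the run
        for i in range(n_wolves):
            candidate = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - wolves[i])
                candidate += leader - A * D
            wolves[i] = np.clip(candidate / 3, lo, hi)
    scores = np.array([f(w) for w in wolves])
    best = np.argmin(scores)
    return wolves[best], scores[best]
```

Early on, large values of `a` let wolves overshoot the leaders and explore; as `a` shrinks, the pack contracts around the best solutions found, which is exactly the exploration-exploitation trade-off the course material refers to.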

### How Distributed Optimization operates part 1

Abstract: Privacy protection has become an increasingly pressing requirement in distributed optimization. However, equipping distributed optimization with differential privacy, the state-of-the-art privacy protection mechanism, will unavoidably compromise optimization accuracy. In this paper, we propose an algorithm that achieves rigorous ε-differential privacy in gradient-tracking based distributed optimization with enhanced optimization accuracy. More specifically, to suppress the influence of differential-privacy noise, we propose a new robust gradient-tracking based distributed optimization algorithm that allows both the stepsize and the variance of the injected noise to vary with time. Then, we establish a new analysis approach that can characterize the convergence of the gradient-tracking based algorithm under both constant and time-varying stepsizes.
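The underlying gradient-tracking scheme, stripped of the paper's privacy mechanism, can be sketched for a small network (our own illustration with scalar local objectives; the differential-privacy variant would additionally perturb the shared messages):

```python
import numpy as np

def gradient_tracking(grads, W, x0, alpha=0.1, n_iter=500):
    """Decentralized gradient tracking: each agent i mixes its iterate
    with its neighbors' (mixing matrix W, doubly stochastic) and
    descends along a tracker y_i of the network-average gradient.
    Sketch without the paper's privacy noise."""
    x = np.array(x0, dtype=float)
    g = np.array([grads[i](x[i]) for i in range(len(x))])
    y = g.copy()               # trackers start at the local gradients
    for _ in range(n_iter):
        x = W @ x - alpha * y  # consensus step + descent along tracker
        g_new = np.array([grads[i](x[i]) for i in range(len(x))])
        y = W @ y + g_new - g  # track the average gradient
        g = g_new
    return x
```

The tracker update preserves the invariant that the y's sum to the sum of current local gradients, which is what lets every agent converge to the minimizer of the *global* objective using only local communication.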