Local Linear Convergence of Forward--Backward under Partial Smoothness

Liang, Jingwei, Fadili, Jalal, Peyré, Gabriel

Neural Information Processing Systems

In this paper, we consider the Forward--Backward proximal splitting algorithm to minimize the sum of two proper closed convex functions, one of which has a Lipschitz continuous gradient and the other of which is partly smooth relative to an active manifold $\mathcal{M}$. We propose a generic framework in which we show that the Forward--Backward (i) correctly identifies the active manifold $\mathcal{M}$ in a finite number of iterations, and then (ii) enters a local linear convergence regime that we characterize precisely. This gives a grounded and unified explanation of the typical behaviour that has been observed numerically for many problems encompassed by our framework, including the Lasso, the group Lasso, the fused Lasso and nuclear norm regularization, to name a few. These results may have numerous applications, including in signal/image processing, sparse recovery and machine learning.
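
As a concrete illustration, here is a minimal numerical sketch, not taken from the paper, of the Forward--Backward iteration $x_{k+1} = \mathrm{prox}_{\gamma g}(x_k - \gamma \nabla f(x_k))$ applied to the Lasso; the data, regularization weight and step size below are illustrative choices:

```python
import numpy as np

# A minimal sketch of the Forward--Backward iteration
#   x_{k+1} = prox_{gamma*g}(x_k - gamma*grad_f(x_k))
# applied to the Lasso: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
# The data (A, b), weight lam and step size gamma are illustrative.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (the 'backward' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
lam = 0.1
gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step <= 1/L, L = Lipschitz const of grad f

x = np.zeros(100)
for k in range(500):
    grad = A.T @ (A @ x - b)                           # forward (gradient) step
    x = soft_threshold(x - gamma * grad, gamma * lam)  # backward (proximal) step
    # For the Lasso, the active manifold M is the set of vectors sharing the
    # support of the solution; once the support of x stops changing, the
    # iterates have identified M and the convergence becomes locally linear.
```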


Mac made intelligent

AI Magazine

I should like to lodge a complaint about your editorial standards in the article "An Assessment of Tools for Building Large KB Systems," by William Mettrey, in the winter 1987 issue [volume 9, number ...]. As a primary architect of CRL-Ops and a former KnowledgeCraft class instructor, I had to deal with the general public's misconceptions about forward- versus backward-chaining systems. Mr. Mettrey's article, in my opinion, is the type that generates the misconception that forward-chaining rule systems cannot "backward chain." This nonsensical view was held by the vast majority of our customers in the KC class. The section on rule-based inference, with its statement "by contrast, Knowledge-Craft implements backward chaining by supporting a version of Prolog," implies that backward chaining in KC is done only through Prolog. In fact, any forward-chaining rule system can efficiently implement constrained backward chaining simply by using a goal structure to search for the required knowledge.
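
The letter's closing claim can be made concrete. Here is a minimal sketch, with illustrative rules and facts rather than anything from CRL-Ops or KnowledgeCraft, of backward chaining emulated by a forward-chaining engine that matches on goal facts:

```python
# A minimal sketch of constrained backward chaining inside a forward-chaining
# match/fire loop: goals are posted as ordinary working-memory facts, and
# rules fire on goal facts to post subgoals until known facts satisfy them.
# The rules and facts below are illustrative, not from CRL-Ops or KnowledgeCraft.

rules = {
    # head: body -- to establish the head, establish every atom in the body
    "mortal(socrates)": ["man(socrates)"],
    "man(socrates)": ["human(socrates)"],
}
facts = {"human(socrates)"}

def prove(goal):
    agenda = {goal}            # goal structure held in working memory
    proven = set(facts)
    changed = True
    while changed:             # forward-chaining recognize-act cycle
        changed = False
        for g in list(agenda):
            if g in proven or g not in rules:
                continue
            missing = [a for a in rules[g] if a not in proven]
            if not missing:
                proven.add(g)  # rule fires: goal established
                changed = True
            else:
                for a in missing:          # post subgoals as new goal facts
                    if a not in agenda:
                        agenda.add(a)
                        changed = True
    return goal in proven

print(prove("mortal(socrates)"))  # True
```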


Heuristic Search Planner 2.0

AI Magazine

HSP2.0 is a new version of the heuristic search planner HSP. The syntax for the instance and domain files is given by the PDDL standard. In HSP2.0, the search can be done either forward or backward. HSP2.0 ran three options concurrently for three minutes; the choice of the three-minute threshold was empirical, following the observation that the regression search most often solved problems quickly or didn't solve them at all. The underlying state model is as follows: the states $s \in S$ are collections of atoms from $A$; the actions $a \in A(s)$ are the operators $op \in O$ such that $Prec(op) \subseteq s$; and the transition function $f$ maps a state $s$ into the state $s' = (s \setminus Del(a)) \cup Add(a)$ for $a \in A(s)$.
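
To make that state model concrete, here is a minimal sketch, with toy operators that are illustrative rather than taken from HSP2.0 or its PDDL input, of computing the applicable set $A(s)$ and applying the transition function $f$:

```python
from dataclasses import dataclass

# A minimal sketch of the STRIPS state model in the abstract: the set A(s)
# of applicable actions is found by a precondition subset test, and the
# transition function computes s' = (s - Del(a)) | Add(a). The toy atoms
# and operators are illustrative, not part of HSP2.0.

@dataclass(frozen=True)
class Operator:
    name: str
    prec: frozenset      # Prec(op)
    add: frozenset       # Add(op)
    delete: frozenset    # Del(op)

def applicable(s, ops):
    """A(s): the operators op in O with Prec(op) a subset of s."""
    return [op for op in ops if op.prec <= s]

def apply_op(s, op):
    """Transition function f: maps s to s' = (s - Del(a)) | Add(a)."""
    return (s - op.delete) | op.add

ops = [
    Operator("move-a-b", frozenset({"at-a"}), frozenset({"at-b"}), frozenset({"at-a"})),
    Operator("move-b-c", frozenset({"at-b"}), frozenset({"at-c"}), frozenset({"at-b"})),
]

s = frozenset({"at-a"})           # initial state: a collection of atoms
for op in applicable(s, ops):     # only move-a-b is applicable here
    s = apply_op(s, op)
print(sorted(s))                  # ['at-b']
```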


Thinking Backward for Knowledge Acquisition

Shachter, Ross D., Heckerman, David

AI Magazine

This article examines the direction in which knowledge bases are constructed for diagnosis and decision making. When building an expert system, it is traditional to elicit knowledge from an expert in the direction in which the knowledge is to be applied, namely, from observable evidence toward unobservable hypotheses. Experts, however, find it more natural to reason in the opposite direction, from hypotheses to the evidence they produce. We therefore argue that a knowledge base should be constructed following the expert's natural reasoning direction and then reversed for use. This choice of representation direction facilitates knowledge acquisition in deterministic domains and is essential when a problem involves uncertainty.
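
Under uncertainty, reversing the elicited direction amounts to applying Bayes' rule. A minimal sketch, with illustrative numbers that are not from the article:

```python
# A minimal sketch, with illustrative numbers (not from the article), of the
# reversal: the expert states the natural direction P(evidence | hypothesis)
# plus a prior P(hypothesis); Bayes' rule inverts this into the diagnostic
# direction P(hypothesis | evidence) that the running system applies.

p_h = 0.01              # prior: P(disease)
p_e_given_h = 0.95      # elicited: P(symptom | disease)
p_e_given_not_h = 0.10  # elicited: P(symptom | no disease)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
p_h_given_e = p_e_given_h * p_h / p_e                  # reversed direction
print(f"P(disease | symptom) = {p_h_given_e:.3f}")     # ~0.088
```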