
Better Call SAUL: Fluent and Consistent Language Model Editing with Generation Regularization

Wang, Mingyang, Lange, Lukas, Adel, Heike, Strötgen, Jannik, Schütze, Hinrich

arXiv.org Artificial Intelligence

To ensure large language models contain up-to-date knowledge, they need to be updated regularly. However, model editing is challenging, as it might also affect knowledge that is unrelated to the new data. State-of-the-art methods identify parameters associated with specific knowledge and then modify them via direct weight updates. However, these locate-and-edit methods suffer from heavy computational overhead and lack theoretical validation. In contrast, directly fine-tuning the model on requested edits affects the model's behavior on unrelated knowledge and significantly damages the model's generation fluency and consistency. To address these challenges, we propose SAUL, a streamlined model editing method that uses sentence concatenation with augmented random facts for generation regularization. Evaluations on three model editing benchmarks show that SAUL is a practical and reliable solution for model editing, outperforming state-of-the-art methods while maintaining generation quality and reducing computational overhead.
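The core idea of the regularization can be sketched in a few lines: each requested edit is concatenated with a randomly drawn, unrelated fact before fine-tuning, so the model sees ordinary text alongside the new knowledge. The function name and sample format below are illustrative assumptions, not the paper's implementation:

```python
import random

def build_regularized_samples(edits, random_facts, seed=0):
    """Hypothetical sketch of SAUL-style sample construction: pair each
    requested edit with a random unrelated fact, so that fine-tuning on
    the concatenation regularizes generation fluency and consistency
    instead of overfitting to the edit alone."""
    rng = random.Random(seed)
    return [f"{edit} {rng.choice(random_facts)}" for edit in edits]
```

In practice the augmented facts would be sampled from a general corpus; the sketch only shows the concatenation step.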


Accelerated Algorithms for Nonlinear Matrix Decomposition with the ReLU function

Seraghiti, Giovanni, Awari, Atharva, Vandaele, Arnaud, Porcelli, Margherita, Gillis, Nicolas

arXiv.org Artificial Intelligence

In this paper, we study the following nonlinear matrix decomposition (NMD) problem: given a sparse nonnegative matrix $X$, find a low-rank matrix $\Theta$ such that $X \approx f(\Theta)$, where $f$ is an element-wise nonlinear function. We focus on the case where $f(\cdot) = \max(0, \cdot)$, the rectified linear unit (ReLU) nonlinear activation. We refer to the corresponding problem as ReLU-NMD. We first provide a brief overview of the existing approaches that were developed to tackle ReLU-NMD. Then we introduce two new algorithms: (1) aggressive accelerated NMD (A-NMD) which uses an adaptive Nesterov extrapolation to accelerate an existing algorithm, and (2) three-block NMD (3B-NMD) which parametrizes $\Theta = WH$ and leads to a significant reduction in the computational cost. We also propose an effective initialization strategy based on the nuclear norm as a proxy for the rank function. We illustrate the effectiveness of the proposed algorithms (available on gitlab) on synthetic and real-world data sets.
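A minimal sketch of the ReLU-NMD setup, using a naive alternating scheme (a latent-variable update followed by a truncated-SVD rank-$r$ projection) rather than the accelerated A-NMD or 3B-NMD algorithms from the paper:

```python
import numpy as np

def relu_nmd(X, r, iters=200, seed=0):
    """Naive ReLU-NMD baseline: find a rank-r Theta with X ~ max(0, Theta).

    Alternates (1) a latent-matrix update Z that matches X on its support
    and is clamped to be nonpositive on the zeros of X, and (2) a rank-r
    projection of Z via truncated SVD. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal(X.shape)
    zero_mask = (X == 0)
    for _ in range(iters):
        Z = X.astype(float).copy()
        # Where X is zero, Theta is free but must be nonpositive under ReLU
        Z[zero_mask] = np.minimum(0.0, Theta[zero_mask])
        # Rank-r projection via truncated SVD
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Theta = (U[:, :r] * s[:r]) @ Vt[:r]
    return Theta
```

The 3B-NMD parametrization $\Theta = WH$ avoids the full SVD at each step, which is where the computational savings come from.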


Saul: Towards Declarative Learning Based Programming

Kordjamshidi, Parisa (University of Illinois at Urbana-Champaign) | Roth, Dan (University of Illinois at Urbana-Champaign) | Wu, Hao (University of Illinois at Urbana-Champaign)

AAAI Conferences

We present Saul, a new probabilistic programming language designed to address some of the shortcomings of programming languages that aim at advancing and simplifying the development of AI systems. Such languages need to interact with messy, naturally occurring data, to allow a programmer to specify what needs to be done at an appropriate level of abstraction rather than at the data level, to be developed on a solid theory that supports moving to and reasoning at this level of abstraction and, finally, to support flexible integration of these learning and inference models within an application program. Saul is an object-functional programming language written in Scala that facilitates these by (1) allowing a programmer to learn, name and manipulate named abstractions over relational data; (2) supporting seamless incorporation of trainable (probabilistic or discriminative) components into the program, and (3) providing a level of inference over trainable models to support composition and make decisions that respect domain and application constraints. Saul is developed over a declaratively defined relational data model, can use piecewise learned factor graphs with declaratively specified learning and inference objectives, and it supports inference over probabilistic models augmented with declarative knowledge-based constraints. We describe the key constructs of Saul and exemplify its use in developing applications that require relational feature engineering and structured output prediction.


Unified Framework for Spectral Dimensionality Reduction, Maximum Variance Unfolding, and Kernel Learning By Semidefinite Programming: Tutorial and Survey

Ghojogh, Benyamin, Ghodsi, Ali, Karray, Fakhri, Crowley, Mark

arXiv.org Machine Learning

This is a tutorial and survey paper on the unification of spectral dimensionality reduction methods, kernel learning by Semidefinite Programming (SDP), Maximum Variance Unfolding (MVU) or Semidefinite Embedding (SDE), and its variants. We first explain how the spectral dimensionality reduction methods can be unified as kernel Principal Component Analysis (PCA) with different kernels. This unification can be interpreted as eigenfunction learning or representation of the kernel in terms of the distance matrix. Then, since the spectral methods are unified as kernel PCA, the natural next step is to learn the best kernel for unfolding the manifold of data to its maximum variance. We first briefly introduce kernel learning by SDP for the transduction task. Then, we explain MVU in detail. Various versions of supervised MVU using nearest neighbors graph, by class-wise unfolding, by Fisher criterion, and by colored MVU are explained. We also explain out-of-sample extension of MVU using eigenfunctions and kernel mapping. Finally, we introduce other variants of MVU including action respecting embedding, relaxed MVU, and landmark MVU for big data.
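The kernel-PCA view underlying this unification is compact: double-center the kernel matrix and embed each point with the top eigenvectors scaled by the square roots of their eigenvalues. A minimal sketch (illustrative, not the survey's notation):

```python
import numpy as np

def kernel_pca(K, k):
    """Kernel PCA sketch: double-center the n-by-n kernel matrix K and
    return the n-by-k embedding given by the top-k eigenvectors scaled
    by sqrt(eigenvalue)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # pick the top-k
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

With a linear kernel $K = XX^\top$ this recovers classical PCA scores (up to sign), which is the simplest instance of "spectral methods as kernel PCA with different kernels"; MVU instead learns the kernel matrix itself by SDP before this eigendecomposition step.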


How Virgin Holidays is using AI to improve email marketing ROI

#artificialintelligence

Tone of voice is key for a brand like Virgin Holidays, especially when it comes to grabbing the attention of email subscribers. Previously, the company relied on copywriters to encapsulate its quirky and adventurous attitude. However, with email campaigns taking up extensive amounts of time and resources with little pay-off, the strategy was failing. I recently heard from Saul Lopez, Customer Lifecycle Lead at Virgin Holidays, about the brand's decision to bring AI into the mix, specifically to optimise subject lines using AI marketing technology from Phrasee. Here's more on why this approach has turned around the travel brand's email strategy, plus a few general benefits of using AI.


Cocktails Inspired By Emmy Nominated Dramas: 'Mr. Robot,' 'Game of Thrones,' 'House of Cards'

International Business Times

As the stars get ready to rock the red carpet for the Emmys this Sunday, those of us who lost our invites in the mail are planning our viewing parties, and obviously no event is complete without themed drinks. Our friends at Drizly.com took a look at this year's nominated dramas and paired cocktails with some of our favorite shows. Check out these delicious drama cocktail recipes to try out on Emmys Night below. Inspired By Netflix's series "House Of Cards" Photo: Netflix Peach & Blackberry Infused Bourbon: Inspired By "House Of Cards" Frank Underwood's drink of choice is bourbon, neat (Blanton's, to be exact). We're big fans of homemade infusions, so we got a little creative and added peach and blackberry to pay homage to Frank's Southern roots.


AI Can Recognize Your Face Even If You're Pixelated

WIRED

Pixelation has long been a familiar fig leaf to cover our visual media's most private parts. Blurred chunks of text or obscured faces and license plates show up on the news, in redacted documents, and online. The technique is nothing fancy, but it has worked well enough, because people can't see or read through the distortion. The problem, however, is that humans aren't the only image recognition masters around anymore. As computer vision becomes increasingly robust, it's starting to see things we can't.

