Collaborating Authors

 domingo


Incremental Learning of Affordances using Markov Logic Networks

Potter, George, Burghouts, Gertjan, Sijs, Joris

arXiv.org Artificial Intelligence

Abstract--Affordances enable robots to have a semantic understanding of their surroundings. Affordances, first introduced by Gibson [Gibson, 1979], are the potential actions that an object affords to an agent, depending on object properties and state, action effects, situational context and agent capabilities: they characterize the robot, an object, and the possible interactions between the two [Andries et al., 2018]. These affordances allow the robot to reason about its beliefs of the world in relation to the tasks and actions it may execute within the environment. In partially known environments, these affordances, in combination with reasoning about them, may result in more options for the robot to choose from. As a result, affordances increase the set of actions the robot can consider. Challenges include contradicting formulas; Markov Logic Networks can solve these problems [Richardson and Domingos, 2006], [Domingos and Lowd, 2019]. A Markov Logic Network (MLN) is a knowledge base of first-order logic formulas with a weight attached to each formula. MLNs can compactly represent regularities in the world and allow reasoning over these regularities. The weight of a formula in the knowledge base is a measure of how likely that formula is to hold. Table I provides an example MLN that consists of three formulas. The formulas do not conflict logically, but seem semantically incorrect when taking into account that each formula is universally quantified over x, y.
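The weighted-knowledge-base semantics can be made concrete with a toy example. In an MLN, a world's probability is proportional to exp of the weighted count of true formula groundings [Richardson and Domingos, 2006]. The sketch below uses a single hypothetical smoking/cancer formula and a made-up weight, not the paper's Table I:

```python
import itertools
import math

# A world assigns truth values to ground atoms over a small domain.
people = ["anna", "bob"]

# Hypothetical formula (not the paper's Table I): returns the number
# of true groundings of "forall x: Smokes(x) => Cancer(x)" in a world.
def n_smokes_implies_cancer(world):
    return sum(1 for x in people
               if (not world[("Smokes", x)]) or world[("Cancer", x)])

formulas = [(1.5, n_smokes_implies_cancer)]  # (weight, grounding counter)

def unnormalized(world):
    # P(world) is proportional to exp(sum_i w_i * n_i(world))
    return math.exp(sum(w * n(world) for w, n in formulas))

# Enumerate all worlds over the atoms Smokes(x), Cancer(x).
atoms = [(p, x) for p in ("Smokes", "Cancer") for x in people]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(unnormalized(w) for w in worlds)

# Probability that Anna smokes but has no cancer (violates one grounding):
p = sum(unnormalized(w) for w in worlds
        if w[("Smokes", "anna")] and not w[("Cancer", "anna")]) / Z
```

Because the formula is soft rather than hard, the violating situation keeps nonzero probability; raising the weight toward infinity drives that probability toward zero, recovering a hard first-order constraint.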


Reviews: Online Structure Learning for Feed-Forward and Recurrent Sum-Product Networks

Neural Information Processing Systems

This paper proposes an online learning algorithm for static and dynamic sum-product networks (SPNs), a type of probabilistic model with tractable inference. The authors essentially combine local structure search in SPNs with a hard variant of expectation-maximization [1]. The algorithm maintains empirical covariance estimates at product nodes and uses statistical dependence tests to decide when to replace a product (a factorized distribution) with either a new leaf or a mixture (a sum node). It further includes a pruning mechanism to trim over-grown structures. The proposed method is called online Structure Learning with Running Average Update (oSLRAU).
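The covariance-tracking idea can be sketched as follows; the class name, threshold, and update rule here are illustrative choices, not the authors' code. A product node keeps running statistics over its scope, and a strong empirical correlation between two variables signals that the factorized product is a poor fit and should be restructured:

```python
import numpy as np

class RunningCov:
    """Running mean and covariance for the scope of a product node
    (Welford-style online update; illustrative, not the paper's code)."""
    def __init__(self, d):
        self.n = 0
        self.mean = np.zeros(d)
        self.m2 = np.zeros((d, d))  # sum of outer products of deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, x - self.mean)

    def corr(self):
        cov = self.m2 / max(self.n - 1, 1)
        std = np.sqrt(np.diag(cov))
        return cov / np.outer(std, std)

def should_split(stats, i, j, threshold=0.3):
    # If variables i and j look dependent, the factorized product node is a
    # poor fit, so the structure learner would replace it (e.g. with a mixture).
    return abs(stats.corr()[i, j]) > threshold

rng = np.random.default_rng(0)
stats = RunningCov(2)
for _ in range(500):
    z = rng.normal()
    stats.update(np.array([z, 0.8 * z + 0.2 * rng.normal()]))  # correlated pair
```

On this strongly correlated stream, `should_split(stats, 0, 1)` fires, which is exactly the trigger for growing the structure online.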


Unique therapy helps some young people with autism interact better with others

FOX News

Fox News contributor Dr. Marc Siegel unpacks a report suggesting a drug used primarily for seizures and behavioral issues could help treat autism. A New York speech pathologist is using improvisational theater, better known as "improv," to help young adults with autism spectrum disorder (ASD) develop their social skills. Bob Domingo, PhD, a speech-language pathologist and assistant professor at Long Island University Post in Brookville, New York, is combining his skills and love of improv to help those with ASD. "Through improv, I am able to combine my knowledge of speech, language and communication with improv games and activities, to open up new, fun ways to communicate with others in developing spontaneous, unscripted 'scenes' or conversations," Domingo told Fox News Digital in an interview. For individuals with ASD, symptoms can vary in severity.


Discriminative Learning of Sum-Product Networks

Neural Information Processing Systems

Sum-product networks are a new deep architecture that can perform fast, exact inference on high-treewidth models. Only generative methods for training SPNs have been proposed to date. In this paper, we present the first discriminative training algorithms for SPNs, combining the high accuracy of the former with the representational power and tractability of the latter. We show that the class of tractable discriminative SPNs is broader than the class of tractable generative ones, and propose an efficient backpropagation-style algorithm for computing the gradient of the conditional log likelihood. Standard gradient descent suffers from the diffusion problem, but networks with many layers can be learned reliably using "hard" gradient descent, where marginal inference is replaced by MPE inference (i.e., inferring the most probable state of the non-evidence variables). The resulting updates have a simple and intuitive form. We test discriminative SPNs on standard image classification tasks. We obtain the best results to date on the CIFAR-10 dataset, using fewer features than prior methods with an SPN architecture that learns local image structure discriminatively. We also report the highest published test accuracy on STL-10 even though we only use the labeled portion of the dataset.
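The "hard" semantics can be illustrated on a toy SPN: for MPE inference, each sum node takes the maximum of its weighted children instead of their sum. The network below (a two-component mixture over two binary variables, with made-up weights) is a minimal sketch, not the paper's architecture:

```python
import math

# Leaves are indicators over binary variables; product nodes factorize,
# sum nodes mix (soft) or select (hard/MPE).
def leaf(var, val):
    return lambda x: 1.0 if x[var] == val else 0.0

def product(*children):
    return lambda x: math.prod(c(x) for c in children)

def weighted_sum(weights, children, hard=False):
    def f(x):
        terms = [w * c(x) for w, c in zip(weights, children)]
        return max(terms) if hard else sum(terms)
    return f

def make_root(hard):
    # Mixture of two factorized components (hypothetical weights).
    comp1 = product(weighted_sum([0.9, 0.1], [leaf(0, 1), leaf(0, 0)], hard),
                    weighted_sum([0.8, 0.2], [leaf(1, 1), leaf(1, 0)], hard))
    comp2 = product(weighted_sum([0.2, 0.8], [leaf(0, 1), leaf(0, 0)], hard),
                    weighted_sum([0.3, 0.7], [leaf(1, 1), leaf(1, 0)], hard))
    return weighted_sum([0.6, 0.4], [comp1, comp2], hard)

soft_root = make_root(hard=False)   # likelihood semantics: sums to 1
hard_root = make_root(hard=True)    # MPE semantics: max replaces sum

# MPE state: score all complete states under the hard semantics, keep the best.
states = [{0: a, 1: b} for a in (0, 1) for b in (0, 1)]
mpe = max(states, key=hard_root)
```

In a real SPN the MPE state is recovered in one downward pass by following the maximizing child at each sum node; brute-force enumeration is used here only because the toy state space has four elements.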


Building Expressive and Tractable Probabilistic Generative Models: A Review

Sidheekh, Sahil, Natarajan, Sriraam

arXiv.org Artificial Intelligence

However, they still struggle to capture dependencies as data complexity and dimensionality increase. In contrast, advancements in deep learning have given rise to expressive Deep Generative Models (DGMs) that exploit the power of neural networks to learn flexible representations of complex data distributions. Notable examples include Generative Adversarial Networks, Variational Autoencoders, and Normalizing Flows. These models prioritize expressiveness and have demonstrated impressive proficiency in capturing dependencies and generating high-fidelity samples. We present a comprehensive survey of the advancements and techniques in the field of tractable probabilistic generative modeling, primarily focusing on Probabilistic Circuits (PCs). We provide a unified perspective on the inherent trade-offs between expressivity and tractability, highlighting the design principles and algorithmic extensions that have enabled building expressive and efficient PCs.
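The tractability side of this trade-off can be seen in miniature: in a smooth and decomposable circuit, marginalizing a variable is a single feed-forward pass in which that variable's leaf indicators are replaced by 1. The toy two-component circuit below, with made-up parameters, illustrates the idea:

```python
# Illustrative toy probabilistic circuit: a two-component mixture of
# factorized Bernoullis over binary X1, X2 (parameters invented for the demo).
def circuit(x1, x2):
    # x1, x2 in {0, 1, None}; None means "marginalized out" (indicator = 1).
    def ind(v, val):
        return 1.0 if v is None or v == val else 0.0
    comp1 = (0.9 * ind(x1, 1) + 0.1 * ind(x1, 0)) * \
            (0.8 * ind(x2, 1) + 0.2 * ind(x2, 0))
    comp2 = (0.2 * ind(x1, 1) + 0.8 * ind(x1, 0)) * \
            (0.3 * ind(x2, 1) + 0.7 * ind(x2, 0))
    return 0.6 * comp1 + 0.4 * comp2

p_x1 = circuit(1, None)                 # exact marginal P(X1=1), single pass
p_enum = circuit(1, 0) + circuit(1, 1)  # brute-force check by enumeration
```

The single pass and the enumeration agree exactly; in a DGM such as a GAN or VAE, the analogous marginal has no closed form and must be approximated.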


LogicMP: A Neuro-symbolic Approach for Encoding First-order Logic Constraints

Xu, Weidi, Wang, Jingwei, Xie, Lele, He, Jianshan, Zhou, Hongting, Wang, Taifeng, Wan, Xiaopei, Chen, Jingdong, Qu, Chao, Chu, Wei

arXiv.org Artificial Intelligence

Integrating first-order logic constraints (FOLCs) with neural networks is a crucial but challenging problem, since it involves modeling intricate correlations to satisfy the constraints. This paper proposes a novel neural layer, LogicMP, which performs mean-field variational inference over an MLN. It can be plugged into any off-the-shelf neural network to encode FOLCs while retaining modularity and efficiency. By exploiting the structure and symmetries in MLNs, we theoretically demonstrate that our well-designed, efficient mean-field iterations effectively mitigate the difficulty of MLN inference, reducing the inference from sequential calculation to a series of parallel tensor operations. Empirical results in three kinds of tasks over graphs, images, and text show that LogicMP outperforms advanced competitors in both performance and efficiency.
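The flavor of such mean-field updates can be sketched for a single grounded implication constraint A_i => B_i of weight w. The update equations below follow the standard mean-field derivation for that soft potential, applied to all groundings at once as vectorized operations; the logits and weight are invented, and this illustrates the idea rather than LogicMP's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Soft MLN-style constraint A_i => B_i with weight w, over three groundings.
w = 3.0                                  # formula weight (made up)
logits_a = np.array([2.0, -1.0, 0.5])    # "neural network" evidence (made up)
logits_b = np.array([-2.0, 0.0, 0.3])

qa, qb = sigmoid(logits_a), sigmoid(logits_b)
for _ in range(20):  # parallel tensor updates, no per-grounding loop
    # Mean-field update for A_i: setting A_i=1 forfeits w whenever B_i=0,
    # so the expected penalty is w * (1 - qb).
    qa = sigmoid(logits_a - w * (1.0 - qb))
    # Mean-field update for B_i: setting B_i=1 satisfies the clause for sure,
    # gaining w relative to B_i=0 exactly when A_i=1, in expectation w * qa.
    qb = sigmoid(logits_b + w * qa)
```

After the iterations, the belief in B_0 is pulled well above its raw evidence sigmoid(-2.0), and the belief in A_0 is pulled below sigmoid(2.0): the constraint propagates information in both directions through purely parallel updates.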


Elon Musk creates AI company to rival OpenAI

Daily Mail - Science & tech

After describing ChatGPT's left-wing bias as 'concerning', Elon Musk is now working on a new AI chatbot of his own. The Twitter, Tesla and SpaceX boss has registered a company with the name of 'X.AI', a subsidiary under his new conglomerate X Holdings Corp. According to the Financial Times, the new subsidiary will be the home of efforts to build a tool just like the hugely successful ChatGPT, owned by OpenAI. Musk is assembling a team of AI researchers and engineers and is in discussions with some investors in SpaceX and Tesla about putting money into his new venture. Due to Musk's belief in free speech, the new bot product could have less of a left-wing bias than ChatGPT, which has already been criticised for 'woke' responses. Twitter, Tesla and SpaceX boss Elon Musk (pictured) has registered an artificial intelligence (AI) company with the name of 'X.AI'. Mr Musk has been critical of AI-powered chatbot ChatGPT in the past.


Bayesian Structure Scores for Probabilistic Circuits

Yang, Yang, Gala, Gennaro, Peharz, Robert

arXiv.org Artificial Intelligence

Probabilistic circuits (PCs) are a prominent representation of probability distributions with tractable inference. While parameter learning in PCs is rigorously studied, structure learning is often more based on heuristics than on principled objectives. In this paper, we develop Bayesian structure scores for deterministic PCs, i.e., the structure likelihood with parameters marginalized out, which are well known as rigorous objectives for structure learning in probabilistic graphical models. When used within a greedy cutset algorithm, our scores effectively protect against overfitting and yield a fast and almost hyper-parameter-free structure learner, distinguishing it from previous approaches. In experiments, we achieve good trade-offs between training time and model fit in terms of log-likelihood. Moreover, the principled nature of Bayesian scores unlocks PCs for accommodating frameworks such as structural expectation-maximization.
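The building block of such Bayesian scores can be sketched: with a symmetric Dirichlet prior over the parameters of a categorical distribution, the marginal likelihood of observed counts (parameters integrated out) has a closed form in Gamma functions. The sketch below, with the hyperparameter alpha=1 chosen purely for illustration, shows the overfitting protection at work: at equal sample size, data concentrated on one category scores higher than near-uniform data, so a structure is only rewarded where the data genuinely supports it:

```python
from math import lgamma

def log_marginal_likelihood(counts, alpha=1.0):
    """Log marginal likelihood of categorical counts under a symmetric
    Dirichlet(alpha) prior, with parameters marginalized out
    (illustrative building block of a Bayesian structure score)."""
    n = sum(counts)
    k = len(counts)
    score = lgamma(k * alpha) - lgamma(k * alpha + n)
    for c in counts:
        score += lgamma(alpha + c) - lgamma(alpha)
    return score

# Same sample size N=20, different concentration:
peaked = log_marginal_likelihood([18, 1, 1])
uniform = log_marginal_likelihood([7, 7, 6])
```

A greedy structure learner can compare such scores for alternative local structures (e.g. with or without a conditioning split) and accept a refinement only when the marginal likelihood improves, with no held-out set or tuned penalty term.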


AI cyber attacks are a 'critical threat'. This is how NATO is countering them

#artificialintelligence

Artificial intelligence (AI) is playing a massive role in cyber attacks and is proving both a "double-edged sword" and a "huge challenge," according to NATO. "Artificial intelligence allows defenders to scan networks more automatically, and fend off attacks rather than doing it manually. But the other way around, of course, it's the same game," David van Weel, NATO's Assistant Secretary-General for Emerging Security Challenges, told reporters earlier this month. Cyber attacks, both on national infrastructures and private companies, have ramped up exponentially and become a focal point since the war in Ukraine. NATO said this year that a cyber attack on any of its member states could trigger Article 5, meaning an attack on one member is considered an attack on all of them and could trigger a collective response.


Treatment-RSPN: Recurrent Sum-Product Networks for Sequential Treatment Regimes

Dejl, Adam, Deep, Harsh, Fei, Jonathan, Saeedi, Ardavan, Lehman, Li-wei H.

arXiv.org Artificial Intelligence

Sum-product networks (SPNs) have recently emerged as a novel deep learning architecture enabling highly efficient probabilistic inference. Since their introduction, SPNs have been applied to a wide range of data modalities and extended to time-sequence data. In this paper, we propose a general framework for modelling sequential treatment decision-making behaviour and treatment response using recurrent sum-product networks (RSPNs). Models developed using our framework benefit from the full range of RSPN capabilities, including the abilities to model the full distribution of the data, to seamlessly handle latent variables, missing values and categorical data, and to efficiently perform marginal and conditional inference. Our methodology is complemented by a novel variant of the expectation-maximization algorithm for RSPNs, enabling efficient training of our models. We evaluate our approach on a synthetic dataset as well as real-world data from the MIMIC-IV intensive care unit medical database. Our evaluation demonstrates that our approach can closely match the ground-truth data generation process on synthetic data and achieve results close to neural and probabilistic baselines while using a tractable and interpretable model.