One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL

Neural Information Processing Systems

While reinforcement learning algorithms can learn effective policies for complex tasks, these policies are often brittle to even minor task variations, especially when variations are not explicitly provided during training. One natural approach to this problem is to train agents with manually specified variation in the training task or environment. However, this may be infeasible in practical situations, either because making perturbations is not possible, or because it is unclear how to choose suitable perturbation strategies without sacrificing performance. The key insight of this work is that learning diverse behaviors for accomplishing a task can directly lead to behavior that generalizes to varying environments, without needing to perform explicit perturbations during training. By identifying multiple solutions for the task in a single environment during training, our approach can generalize to new situations by abandoning solutions that are no longer effective and adopting those that are. We theoretically characterize a robustness set of environments that arises from our algorithm and empirically find that our diversity-driven approach can extrapolate to various changes in the environment and task.
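As a rough sketch of the idea (a schematic under assumed notation, not the authors' exact algorithm: the latent-conditioned policy, the discriminator q, and the coefficients below are all assumptions), one can reward a latent-conditioned policy for being distinguishable per latent, but only while it stays near-optimal on the task:

```python
def diversity_gated_reward(task_reward, episode_return, optimal_return,
                           log_q_z_given_s, log_p_z, alpha=0.1, eps=0.1):
    """Task reward plus a diversity bonus, gated by near-optimality.

    log_q_z_given_s : log-probability of the latent z under a learned
                      discriminator, given the current state
    log_p_z         : log-prior over latents

    The mutual-information-style bonus (log q(z|s) - log p(z)) pushes each
    latent toward a distinguishable behavior, i.e., a distinct solution.
    Gating it on near-optimal return means diversity is never bought at
    the cost of solving the task.
    """
    near_optimal = episode_return >= (1.0 - eps) * optimal_return
    bonus = alpha * (log_q_z_given_s - log_p_z) if near_optimal else 0.0
    return task_reward + bonus
```

At test time, one would then search over latents z and keep whichever behavior still works in the perturbed environment.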


On the Representation of Solutions to Elliptic PDEs in Barron Spaces

Neural Information Processing Systems

Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments. This paper derives complexity estimates of the solutions of $d$-dimensional second-order elliptic PDEs in the Barron space, that is, the set of functions admitting an integral representation of certain parametric ridge functions against a probability measure on the parameters. We prove, under appropriate assumptions, that if the coefficients and the source term of the elliptic PDE lie in Barron spaces, then the solution of the PDE is $\epsilon$-close with respect to the $H^1$ norm to a Barron function. Moreover, we prove dimension-explicit bounds for the Barron norm of this approximate solution, depending at most polynomially on the dimension $d$ of the PDE. As a direct consequence of the complexity estimates, the solution of the PDE can be approximated on any bounded domain by a two-layer neural network with respect to the $H^1$ norm with a dimension-explicit convergence rate.
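For context, here is a minimal sketch of the Barron-space representation alluded to above, using the common two-layer convention (an assumption; the paper's exact definition may differ). A function $f$ lies in a Barron space if it admits the representation $f(x) = \int a \, \sigma(w^{\top} x + b) \, d\mu(a, w, b)$ for some probability measure $\mu$ on the parameters $(a, w, b)$, where $\sigma$ is a ridge activation such as the ReLU; the associated Barron norm $\|f\|_{\mathcal{B}} = \inf_{\mu} \mathbb{E}_{\mu}\left[ |a| \left( \|w\|_{1} + |b| + 1 \right) \right]$ takes the infimum over all representing measures and controls how well $f$ can be approximated by two-layer networks.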


Digital Addiction Among Arab Families: Status, Contributing Factors, Responsibilities, and Solutions

Communications of the ACM

Studies conducted with families in the Arab GCC region found that digital addiction is highly prevalent among both parents and children. Digital addiction (DA) refers to a problematic relationship with technology characterized by symptoms of behavioral addiction, including mood modification, salience, tolerance, conflict, withdrawal symptoms, and relapse. While addictive use of technology is not yet officially recognized as a clinical diagnosis, certain forms, such as Internet gaming disorder (IGD), have been classified as clinical conditions. Notably, IGD was included in the ICD-11 (International Classification of Diseases) by the World Health Organization in 2018.


Improved Uncertainty Quantification in Physics-Informed Neural Networks Using Error Bounds and Solution Bundles

Flores, Pablo, Graf, Olga, Protopapas, Pavlos, Pichara, Karim

arXiv.org Machine Learning

Physics-Informed Neural Networks (PINNs) have been widely used to obtain solutions to various physical phenomena modeled as Differential Equations. As PINNs are not naturally equipped with mechanisms for Uncertainty Quantification, some work has been done to quantify the different uncertainties that arise when dealing with PINNs. In this paper, we use a two-step procedure to train Bayesian Neural Networks that provide uncertainties over the solutions to differential equation systems provided by PINNs. We use available error bounds over PINNs to formulate a heteroscedastic variance that improves the uncertainty estimation. Furthermore, we solve forward problems and utilize the obtained uncertainties when doing parameter estimation in inverse problems in cosmology.
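As a rough illustration of the heteroscedastic idea (a sketch under assumed names, not the authors' implementation: the Bayesian network outputs, the `error_bound` values, and the shapes below are assumptions), a PINN error bound can be folded into a Gaussian negative log-likelihood so that the predictive variance never falls below the known error:

```python
import torch

def heteroscedastic_nll(mean, log_var, target, error_bound):
    """Gaussian NLL whose variance is floored by a known PINN error bound.

    mean, log_var : Bayesian network outputs (predictive mean, log-variance)
    target        : PINN solution values being fit
    error_bound   : per-point bound on the PINN solution error (same shape)
    """
    # Floor the learned variance with the squared error bound, so the
    # predictive uncertainty can never shrink below the PINN's known error.
    var = torch.exp(log_var) + error_bound ** 2
    return 0.5 * (torch.log(var) + (target - mean) ** 2 / var).mean()
```

The resulting predictive distributions can then be reused as likelihoods when estimating physical parameters in the inverse problem.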


Reviews: Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods

Neural Information Processing Systems

This is a nice paper, though a bit of an odd match for NIPS: there are no numerical experiments, and despite the claims of genericity and applicability to general exponential families, I remain unconvinced on that point. The methods are elegant, though I found the presentation a bit lacking. I would have loved a high-level outline of the proof steps and the intuition behind them, with pointers to the precise sub-proposition statements and their proofs. As it stands, it is easy to get lost in the details, and what appear to me to be the key moments of the proof are skimmed over quickly. For instance, Lemma 3.1 deserves to be expanded upon (even the long version is a bit quick on details here), especially since the GW proof technique is so elegant that it is always worth including, even when it is similar to the original proof.


BBA: Bi-Modal Behavioral Alignment for Reasoning with Large Vision-Language Models

Zhao, Xueliang, Huang, Xinting, Fu, Tingchen, Li, Qintong, Gong, Shansan, Liu, Lemao, Bi, Wei, Kong, Lingpeng

arXiv.org Artificial Intelligence

Multimodal reasoning stands as a pivotal capability for large vision-language models (LVLMs). Integration with Domain-Specific Languages (DSLs), which offer precise visual representations, equips these models to carry out more accurate reasoning in complex and professional domains. However, the vanilla Chain-of-Thought (CoT) prompting method faces challenges in effectively leveraging the unique strengths of visual and DSL representations, primarily due to their differing reasoning mechanisms. Additionally, it often falls short in addressing critical steps in multi-step reasoning tasks. To mitigate these challenges, we introduce the \underline{B}i-Modal \underline{B}ehavioral \underline{A}lignment (BBA) prompting method, designed to maximize the potential of DSL in augmenting complex multi-modal reasoning tasks. The method begins by guiding LVLMs to create separate reasoning chains for visual and DSL representations. It then aligns these chains by addressing any inconsistencies, thus achieving a cohesive integration of behaviors from different modalities. Our experiments demonstrate that BBA substantially improves the performance of GPT-4V(ision) on geometry problem solving ($28.34\% \to 34.22\%$), chess positional advantage prediction ($42.08\% \to 46.99\%$) and molecular property prediction ($77.47\% \to 83.52\%$).
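A schematic sketch of the two-stage protocol described above (the `call_lvlm` client and all prompt wording here are hypothetical placeholders, not the authors' code):

```python
def call_lvlm(prompt: str, image=None) -> str:
    """Placeholder for an LVLM API call (e.g., a GPT-4V-style endpoint)."""
    raise NotImplementedError

def bba_answer(image, dsl_text: str, question: str) -> str:
    # Stage 1: elicit one reasoning chain per modality.
    visual_chain = call_lvlm(
        f"Reason step by step from the image to answer: {question}",
        image=image)
    dsl_chain = call_lvlm(
        f"Reason step by step from this formal description to answer: "
        f"{question}\n{dsl_text}")

    # Stage 2: align the chains by surfacing and resolving inconsistencies,
    # then produce a single integrated answer.
    return call_lvlm(
        "Two reasoning chains for the same problem follow. Identify any "
        "inconsistencies between them, resolve each one, and give a final "
        "answer.\n"
        f"Visual chain:\n{visual_chain}\n\nDSL chain:\n{dsl_chain}",
        image=image)
```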


PrecisePK Collaborates with Wolters Kluwer to Enhance Dose Optimization

#artificialintelligence

PrecisePK announced that it will collaborate with Wolters Kluwer, a global provider of trusted clinical technology and evidence-based solutions, to offer an integrated Bayesian dosing solution through Sentri7 Pharmacy in early 2023. With PrecisePK's model-informed precision dosing (MIPD) software, Sentri7 Pharmacy will deliver a comprehensive drug package that supports vancomycin and 20 other medications. "Our PrecisePK relationship will enable our users to leverage data and information to make better medication dosing decisions, improve patient safety, and drive better clinical outcomes," said Karen Kobelski, Vice President & General Manager, Clinical Surveillance, Compliance & Data Solutions, Wolters Kluwer Health. "Hospitals are short-staffed and clinicians are busier than ever, so we're always looking for ways to simplify clinician workloads and facilitate patient management. This relationship allows us to deliver a solution to help achieve these goals."


La veille de la cybersécurité

#artificialintelligence

Explainable AI refers to strategies and procedures used in the application of artificial intelligence (AI) that allow human specialists to understand the solution's findings. To ensure that explanation methods are correct, they must be systematically reviewed and compared. In this article, we discuss Quantus, a Python library for quantitatively evaluating the explanations produced for a convolutional neural network's predictions.
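To make "quantitatively evaluating an explanation" concrete, here is a minimal sketch of one such metric, max-sensitivity (how much an attribution changes under small input perturbations), written directly in NumPy as an illustration of the idea rather than Quantus's own API:

```python
import numpy as np

def max_sensitivity(explain, x, n_samples=10, radius=0.1, seed=0):
    """Worst-case relative change in an explanation under small perturbations.

    explain : function mapping an input ndarray to an attribution ndarray
    x       : a single input (e.g., an image)
    radius  : maximum magnitude of the uniform input noise
    """
    rng = np.random.default_rng(seed)
    base = explain(x)
    worst = 0.0
    for _ in range(n_samples):
        noise = rng.uniform(-radius, radius, size=x.shape)
        diff = explain(x + noise) - base
        # Smaller scores mean a more robust (less sensitive) explanation.
        worst = max(worst, np.linalg.norm(diff) / (np.linalg.norm(base) + 1e-12))
    return worst
```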


AI, AI on the wall -- Who's the Fairest of them all?

#artificialintelligence

"A world perfectly fair in some dimensions would be horribly unfair in others." "Fairness" in Artificial Intelligence (AI) applications -- both as a concept and a practice -- is the focus of many organisations as they deploy new technologies for greater effectiveness and efficiencies. That machines are faster at processing large amounts of information and the notion that they are'more objective' than humans, appear to make them an obvious choice for progressivity and seemingly impartial actors in'fairer' decision-making. Yet, algorithmic based decisions have not come without their share of controversies -- Australia's recent'robo-debt' government intervention which wrongly pursued thousands of welfare recipients; the UK's'A-Levels fiasco' of downgrading graduating grades based on historical data, its controversial visa application streaming tool; and concerns about Clearview AI's facial recognition software for policing are raising new questions on the role of these technologies in society. Risk assessments are part of the fabric of modern society, but what we are dealing with here is not just'scaling up' human capacity for decision-making without the unwanted human biases and errors -- we are also extolling the'virtues of objectivity' under the guise of'fairness' (which is inherently subjective!) and failing to recognise the many inter-relationships that are being unraveled through the use of these algorithms in our daily lives.


What about some human intelligence first?

#artificialintelligence

Artificial intelligence (AI) is all the rage these days. A recent article noted that 'robots' -- shorthand for AI in the tabloids -- will be able to write a fiction bestseller within 50 years. I suppose that would be shocking to me as a novelist if most fiction bestsellers were not already being written by 'robots'. Or so one feels, keeping publishing and other vogues in mind: a bit of this, a bit of that, a dash of something else, and voila, you have a bestseller! In that sense, perhaps the rise of AI will make us reconsider what we mean by human intelligence.