
 Suryanarayanan, Parthasarathy


MAMMAL -- Molecular Aligned Multi-Modal Architecture and Language

arXiv.org Artificial Intelligence

Drug discovery typically consists of multiple steps, including identifying a target protein key to a disease's etiology, validating that interacting with this target could prevent symptoms or cure the disease, discovering a small molecule or biologic therapeutic to interact with it, and optimizing the candidate molecule through a complex landscape of required properties. Drug discovery related tasks often involve prediction and generation while considering multiple entities that potentially interact, which poses a challenge for typical AI models. For this purpose, we present MAMMAL - Molecular Aligned Multi-Modal Architecture and Language - a method we applied to create a versatile multi-task multi-align foundation model that learns from large-scale biological datasets (2 billion samples) across diverse modalities, including proteins, small molecules, and genes. We introduce a prompt syntax that supports a wide range of classification, regression, and generation tasks and allows combining different modalities and entity types as inputs and/or outputs. Our model handles combinations of tokens and scalars and enables the generation of small molecules and proteins, property prediction, and transcriptomic lab test predictions. We evaluated the model on 11 diverse downstream tasks spanning different steps within a typical drug discovery pipeline, where it reaches a new SOTA in 9 tasks and is comparable to SOTA in 2 tasks. This performance is achieved with a single unified architecture serving all tasks, in contrast to the original SOTA results, which were achieved with architectures tailored to each task. The model code and pretrained weights are publicly available at https://github.com/BiomedSciAI/biomed-multi-alignment and https://huggingface.co/ibm/biomed.omics.bl.sm.ma-ted-458m.
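
As a rough illustration of the prompt-driven, multi-entity interface the abstract describes, the sketch below loads an encoder-decoder checkpoint and issues a task-style prompt that combines a protein sequence and a small-molecule SMILES string. This is a minimal sketch under loud assumptions: the released checkpoint ships with the project's own `mammal` package and modular tokenizer, so loading via the generic `transformers` seq2seq API and the prompt/task tokens shown here are illustrative, not the paper's actual syntax.

```python
# Minimal sketch of a prompt-driven, multi-task encoder-decoder query.
# Assumptions (not from the paper): the checkpoint loads through the generic
# transformers seq2seq API, and the prompt string below only illustrates the
# kind of multi-entity prompt syntax described in the abstract.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "ibm/biomed.omics.bl.sm.ma-ted-458m"  # released checkpoint (see links above)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

# Hypothetical prompt combining two entity types (protein + small molecule)
# for a binding-affinity style query; the real task tokens come from the
# project's modular tokenizer, not from this string.
prompt = (
    "<TASK=drug_target_affinity>"
    "<PROTEIN>MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ</PROTEIN>"
    "<SMILES>CC(=O)Oc1ccccc1C(=O)O</SMILES>"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern would cover classification, regression, and generation tasks by swapping the task token and the decoded output type; consult the linked repository for the supported task definitions.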


Multi-view biomedical foundation models for molecule-target and property prediction

arXiv.org Artificial Intelligence

Drug discovery is a complex, multi-stage process. Lead identification and lead optimization remain costly with low success rates, and computational methods play an important role in accelerating these tasks [1-3]. The prediction of a broad range of chemical and biological properties of candidate molecules is an essential component of screening and assessing molecules, and data-driven machine learning approaches have long aided in this process [4-6]. Molecular representations form the basis of machine learning models [2, 7], facilitating algorithmic and scientific advances in the field. However, learning useful and generalized latent representations is a hard problem due to limited amounts of labeled data, the wide range of downstream tasks, the vast chemical space, and the large heterogeneity in molecular structures. Learning latent representations using unsupervised techniques is vital for such models to scale. Large language models (LLMs) have revolutionized other fields [8], and similar sequence-based foundation models have shown promise in learning molecular representations and being trainable on many downstream property prediction tasks [9-11]. A key advantage is that the transformer-based architecture can learn in a self-supervised fashion to create a "pre-trained" molecular representation. The most direct application of LLM-like transformers is facilitated by a sequence, text-based representation (e.g.
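
To make the pattern concrete - a transformer pre-trained self-supervised on text-based molecular strings, then reused as a frozen representation for downstream property prediction - the sketch below encodes SMILES strings with a public chemical language model and fits a small regression head on pooled embeddings. The checkpoint name, mean pooling, and the toy labels are assumptions for illustration, not details from the paper.

```python
# Sketch: reuse a self-supervised chemical language model as a frozen
# molecular representation and fit a lightweight property-prediction head.
# The checkpoint, pooling choice, and toy data are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge

CHECKPOINT = "seyonec/ChemBERTa-zinc-base-v1"  # example public SMILES language model

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
encoder = AutoModel.from_pretrained(CHECKPOINT).eval()

def embed(smiles_batch):
    """Mean-pool the last hidden states into one vector per molecule."""
    tokens = tokenizer(smiles_batch, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**tokens).last_hidden_state      # (batch, tokens, dim)
    mask = tokens["attention_mask"].unsqueeze(-1)          # (batch, tokens, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # (batch, dim)

# Toy labeled data: SMILES strings paired with a made-up scalar property.
train_smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
train_y = [0.2, 1.1, 0.7]

head = Ridge().fit(embed(train_smiles), train_y)
print(head.predict(embed(["CCN(CC)CC"])))
```

Freezing the encoder and training only a small head is one way such pre-trained representations are applied across many property-prediction tasks with limited labeled data; full fine-tuning is the heavier alternative.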


A Novel Methodology For Crowdsourcing AI Models in an Enterprise

arXiv.org Artificial Intelligence

The evolution of AI is advancing rapidly, creating both challenges and opportunities for industry-community collaboration. In this work, we present a novel methodology aiming to facilitate this collaboration through crowdsourcing of AI models. Concretely, we have implemented a system and a process that any organization can easily adopt to host AI competitions. The system allows them to automatically harvest and evaluate the submitted models against in-house proprietary data and also to incorporate them as reusable services in a product.
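
As a rough sketch of what "automatically harvest and evaluate the submitted models against in-house proprietary data" could look like in practice, the snippet below loads each submission as a Python module exposing a `predict` function and scores it on a held-out dataset. The directory layout, the `model.py`/`predict` convention, and the accuracy metric are assumptions for illustration, not the system described in the paper.

```python
# Sketch of an automated evaluation harness for crowdsourced model submissions.
# Assumed convention (not from the paper): each submission is a directory
# containing model.py with a predict(inputs) -> predictions function.
import importlib.util
from pathlib import Path
from sklearn.metrics import accuracy_score

def load_submission(path: Path):
    """Import a submission's model.py as a standalone module."""
    spec = importlib.util.spec_from_file_location(path.name, path / "model.py")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

def evaluate_all(submissions_dir: str, X_holdout, y_holdout):
    """Score every submission against in-house held-out data and rank them."""
    leaderboard = {}
    for sub in sorted(Path(submissions_dir).iterdir()):
        if not (sub / "model.py").exists():
            continue
        model = load_submission(sub)
        preds = model.predict(X_holdout)
        leaderboard[sub.name] = accuracy_score(y_holdout, preds)
    return dict(sorted(leaderboard.items(), key=lambda kv: -kv[1]))
```

In a production setting each submission would typically run in an isolated, sandboxed environment rather than being imported directly, and the winning models would be wrapped as reusable services as the abstract describes.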


A Canonical Architecture For Predictive Analytics on Longitudinal Patient Records

arXiv.org Artificial Intelligence

Many institutions within the healthcare ecosystem are making significant investments in AI technologies to optimize their business operations at lower cost with improved patient outcomes. Despite the hype with AI, the full realization of this potential is seriously hindered by several systemic problems, including data privacy, security, bias, fairness, and explainability. In this paper, we propose a novel canonical architecture for the development of AI models. The architecture is designed to accommodate trust and reproducibility as an inherent part of the AI life cycle and to support the needs for a deployed AI system in healthcare. In what follows, we start with a crisp articulation of the challenges that we have identified to derive the requirements for this architecture. We then follow with a description of this architecture before providing qualitative evidence of its capabilities in real world settings.