DATASHEET: MOTIVE

Neural Information Processing Systems

Please see the most updated version here.

Was there a specific task in mind? Was there a specific gap that needed to be filled?
The MOTIVE dataset was created to promote the development of new drug-target interaction (DTI) prediction models based on both existing relationships between compounds and their protein targets and the similarity of JUMP Cell Painting morphological features of perturbed cells [2]. The MOTIVE dataset was created with the DTI task in mind, and it addresses a lack of graph-based biological datasets with empirical node features.

Who created this dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?
This dataset was created by the Carpenter-Singh Lab in the Imaging Platform at the Broad Institute of MIT and Harvard, Cambridge, Massachusetts.

What support was needed to make this dataset? (If there is an associated grant, provide the name of the grantor and the grant name and number, or if it was supported by a company or government agency, give those details.)
The authors gratefully acknowledge an internship from the Massachusetts Life Sciences Center (to ES).




Logical Characterizations of Recurrent Graph Neural Networks with Reals and Floats

Neural Information Processing Systems

In pioneering work from 2019, Barceló and coauthors identified logics that precisely match the expressive power of constant iteration-depth graph neural networks (GNNs) relative to properties definable in first-order logic. In this article, we give exact logical characterizations of recurrent GNNs in two scenarios: (1) in the setting with floating-point numbers and (2) with reals. For floats, the formalism matching recurrent GNNs is a rule-based modal logic with counting, while for reals we use a suitable infinitary modal logic, also with counting. These results give exact matches between logics and GNNs in the recurrent setting without relativising to a background logic in either case, but using some natural assumptions about floating-point arithmetic. Applying our characterizations, we also prove that, relative to graph properties definable in monadic second-order logic (MSO), our infinitary and rule-based logics are equally expressive. This implies that recurrent GNNs with reals and floats have the same expressive power over MSO-definable properties and shows that, for such properties, also recurrent GNNs with reals are characterized by a (finitary!)
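To make the recurrent setting concrete: unlike a constant iteration-depth GNN, a recurrent GNN applies one shared layer repeatedly until node states stabilize. The sketch below is an illustration of that idea only, not the authors' construction; the names `recurrent_gnn` and `step` are hypothetical.

```python
import numpy as np

def recurrent_gnn(adj, x0, step, max_iters=100, tol=1e-6):
    """Iterate a single shared GNN layer until node states stop changing.

    adj:  (n, n) 0/1 adjacency matrix
    x0:   (n, d) initial node features
    step: update function (x, neighbor_sum) -> new x, reused every iteration
    """
    x = x0
    for _ in range(max_iters):
        x_new = step(x, adj @ x)  # aggregate neighbor states, then update
        if np.max(np.abs(x_new - x)) < tol:  # fixpoint reached
            return x_new
        x = x_new
    return x

# Example: reachability from a seed node on a 3-node path graph.
# The update saturates at 1, so the state converges to the reachable set.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x0 = np.array([[1.0], [0.0], [0.0]])
out = recurrent_gnn(adj, x0, lambda x, ns: np.clip(x + ns, 0.0, 1.0))
```

Reachability is a natural example because its fixpoint computation needs unbounded iteration depth on large graphs, which is exactly what separates recurrent GNNs from the constant-depth case.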





Volume Feature Rendering for Fast Neural Radiance Field Reconstruction

Neural Information Processing Systems

Neural radiance fields (NeRFs) are able to synthesize realistic novel views from multi-view images captured from distinct positions and perspectives. In NeRF's rendering pipeline, neural networks are used either to represent a scene independently or to transform the queried learnable feature vector of a point into the expected color or density. With the aid of geometry guides, either in the form of occupancy grids or proposal networks, the number of color neural network evaluations can be reduced from hundreds to dozens in the standard volume rendering framework.
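The standard volume rendering framework mentioned above composites per-sample densities and colors along each ray via alpha compositing. A minimal NumPy sketch of that compositing step follows; the function name is illustrative, not from the paper.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Standard NeRF volume rendering along a single ray.

    sigmas: (N,)   densities predicted at the sampled points
    colors: (N, 3) RGB values predicted at the sampled points
    deltas: (N,)   distances between adjacent samples
    """
    # Per-sample opacity from density and sample spacing.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Expected color is the transmittance-weighted sum of sample colors.
    return (weights[:, None] * colors).sum(axis=0)
```

Because every sample's color requires a network evaluation, reducing the number of samples per ray (via occupancy grids or proposal networks, as the abstract notes) directly cuts the dominant cost of this loop.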