Boolformer: Symbolic Regression of Logic Functions with Transformers

Stéphane d'Ascoli, Samy Bengio, Josh Susskind, Emmanuel Abbe

arXiv.org Artificial Intelligence 

Deep neural networks, in particular those based on the Transformer architecture [1], have led to breakthroughs in computer vision [2] and language modelling [3], and have fuelled hopes of accelerating scientific discovery [4]. However, their ability to perform simple logic tasks remains limited [5]. These tasks differ from traditional vision or language tasks in the combinatorial nature of their input space, which makes representative data sampling challenging. Reasoning tasks have thus gained major attention in the deep learning community, either with explicit reasoning in the logical domain, e.g., tasks in the realm of arithmetic and algebra [6, 7], algorithmic CLRS tasks [8] or LEGO [9], or implicit reasoning in other modalities, e.g., benchmarks such as Pointer Value Retrieval [10] and CLEVR [11] for vision models, or LogiQA [12] and GSM8K [13] for language models. Reasoning also plays a key role in tasks which can be tackled via Boolean modelling, particularly in the fields of biology [14] and medicine [15].
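To make the task in the title concrete, the sketch below illustrates what symbolic regression of a logic function means: recovering a compact Boolean formula that reproduces a set of observed input/output pairs. This is not Boolformer's method, and the helper name fit_formula is hypothetical; it is a naive brute-force enumerator over AND, OR and NOT, whereas Boolformer replaces this combinatorial search with a Transformer that predicts the formula directly.

```python
# Illustrative only: brute-force symbolic regression of a logic function.
# Boolformer predicts the formula with a Transformer instead of searching.
from itertools import product

def fit_formula(table, n_vars, max_rounds=3):  # hypothetical helper name
    """Return a Boolean formula (as a string) matching a truth table.

    table maps input bit-tuples to 0/1 outputs; candidates are grown
    breadth-first, so smaller formulas are found before larger ones.
    """
    inputs = sorted(table)
    target = tuple(table[x] for x in inputs)
    # Map each candidate's output signature to the first (smallest) formula
    # producing it; equivalent but larger formulas are discarded.
    pool = {tuple(x[i] for x in inputs): f"x{i}" for i in range(n_vars)}
    for _ in range(max_rounds):
        if target in pool:
            return pool[target]
        grown = dict(pool)
        for sig, s in pool.items():
            grown.setdefault(tuple(1 - v for v in sig), f"not {s}")
        for (g1, s1), (g2, s2) in product(pool.items(), repeat=2):
            grown.setdefault(tuple(a & b for a, b in zip(g1, g2)), f"({s1} and {s2})")
            grown.setdefault(tuple(a | b for a, b in zip(g1, g2)), f"({s1} or {s2})")
        pool = grown
    return pool.get(target)

# Recover XOR from its truth table on two variables.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(fit_formula(xor, n_vars=2))
# -> ((x0 or x1) and not (x0 and x1)), or an equivalent formula
```

The candidate pool grows combinatorially with each round, which is exactly the sampling difficulty described above; a learned model can amortize this search across many functions.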
