Bad things will happen when the AI sentience debate goes mainstream

#artificialintelligence

A Google AI engineer recently stunned the world by announcing that one of the company's chatbots had become sentient. He was subsequently placed on paid administrative leave for his outburst. His name is Blake Lemoine and he sure seems like the right person to talk about machines with souls. Not only is he a professional AI developer at Google, but he's also a Christian priest. The only problem is that the whole concept is ridiculous and dangerous.


A Glossary of Knowledge Graph Terms - DataScienceCentral.com

#artificialintelligence

As with many fields, knowledge graphs boast a wide array of specialized terms. This guide provides a handy reference to these concepts. The Resource Description Framework (or RDF) is a conceptual framework established in the early 2000s by the World Wide Web Consortium for describing sets of interrelated assertions. RDF breaks down such assertions into underlying graph structures in which a subject node is connected to an object node via a predicate edge. The graph is then constructed by connecting the object nodes of one assertion to the subject nodes of another assertion, in a manner analogous to Tinker Toys (or molecular diagrams).
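To make that subject-predicate-object structure concrete, here is a minimal sketch (not from the glossary itself) that builds two such assertions with the rdflib Python library; the example.org namespace and the resource names are invented for illustration:

```python
# Two chained RDF assertions: the object of the first becomes the
# subject of the second, which is how the graph snaps together.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # hypothetical namespace for illustration

g = Graph()
# subject node --predicate edge--> object node
g.add((EX.AdaLovelace, EX.wrote, EX.NotesOnTheAnalyticalEngine))
# a second assertion hanging off the first assertion's object node
g.add((EX.NotesOnTheAnalyticalEngine, EX.publishedIn, Literal(1843)))

for s, p, o in g:
    print(s, p, o)
```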


Adding RDF Lists and Sequences To Sparql - DataScienceCentral.com

#artificialintelligence

This particular article discusses a proposed recommendation for extending an existing standard, SPARQL 1.1. None of this has been implemented yet, and as such it represents the musings of a writer rather than established functionality. Lately, I've been spending some time on the GitHub archives of the SPARQL 1.2 Community site, a group of people who are looking at the next generation of the SPARQL language. One challenge that has come up frequently is the lack of good mechanisms in SPARQL for handling ordered lists, something that has proven to be a limiting factor in a lot of ways, especially given that most other languages have been able to handle lists and dictionaries for decades. As I was going through the archives, an answer occurred to me that comes down to the fact that RDF and SPARQL, while very closely related, are not in fact the same thing.
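As a rough illustration of the pain point (my own sketch, not part of the proposal discussed above): an RDF list is stored as a chain of rdf:first/rdf:rest nodes, and while a SPARQL property path can walk the chain, the standard gives no guarantee that the members come back in order. The rdflib code below assumes a hypothetical ex:playlist resource.

```python
# Build an rdf:List and query its members with a property path.
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.collection import Collection

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
head = BNode()
Collection(g, head, [Literal("first"), Literal("second"), Literal("third")])
g.add((EX.playlist, EX.tracks, head))

# rdf:rest*/rdf:first walks the linked list, but SPARQL makes no promise
# about the order in which the solutions are returned.
q = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ex:  <http://example.org/>
SELECT ?member WHERE {
  ex:playlist ex:tracks/rdf:rest*/rdf:first ?member .
}
"""
for row in g.query(q):
    print(row.member)
```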


Verizon long range drone project set to launch in Oregon - Channel969

#artificialintelligence

Expect big things to come out of this latest operation from telecom giant Verizon. A Verizon long range drone project is set to advance ongoing testing -- this time at the Pendleton Unmanned Aerial Systems Range. The drone range in Pendleton, Oregon, is one of only a small handful of Federal Aviation Administration-designated test ranges, and is located in the northeast corner of the state. Verizon's drone arm, Skyward, is based in Portland, Oregon, roughly a three-hour drive from Pendleton. At the Pendleton drone range, Verizon Robotics (a division of the company best known for providing cell service) will test various proof-of-concept capabilities, primarily around long range robotics.


Wu

AAAI Conferences

We develop a novel absorption technique for large collections of factual assertions about individual objects. These assertions are commonly accompanied by implicit background knowledge and form a knowledge base. Both the assertions and the background knowledge are expressed in a suitable language of Description Logic and queries over such knowledge bases can be expressed as assertion retrieval queries. The proposed absorption technique significantly improves the performance of such queries, in particular in cases where a large number of object features are known for the objects represented in such a knowledge base. In addition to the absorption technique we present the results of a preliminary experimental evaluation that validates the efficacy of the proposed optimization.


Scaling Up Knowledge Graph Creation to Large and Heterogeneous Data Sources

arXiv.org Artificial Intelligence

RDF knowledge graphs (KG) are powerful data structures to represent factual statements created from heterogeneous data sources. KG creation is laborious and demands data management techniques to be executed efficiently. This paper tackles the problem of automatically generating declaratively specified KG creation processes; it proposes techniques for planning and transforming heterogeneous data into RDF triples following mapping assertions specified in the RDF Mapping Language (RML). Given a set of mapping assertions, the planner provides an optimized execution plan by partitioning and scheduling the execution of the assertions. First, the planner assesses an optimized number of partitions considering the number of data sources, the types of mapping assertions, and the associations between different assertions. After producing the list of partitions and the assertions that belong to each partition, the planner determines their execution order. A greedy algorithm is implemented to generate the partitions' bushy tree execution plan. Bushy tree plans are translated into operating system commands that guide the execution of the partitions of the mapping assertions in the order indicated by the bushy tree. The proposed optimization approach is evaluated over state-of-the-art RML-compliant engines and existing benchmarks of data sources and RML triples maps. Our experimental results suggest that the performance of the studied engines can be considerably improved, particularly in complex settings with numerous triples maps and data sources. As a result, engines that usually time out in complex cases can still produce a portion of the KG, even if they cannot execute all the assertions.
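The abstract describes the planner only at a high level; the following is a rough, hypothetical Python sketch of the general idea (grouping mapping assertions into partitions, here naively by data source, and greedily ordering the partitions), not the paper's actual algorithm or its bushy tree plans:

```python
# Hypothetical sketch: partition mapping assertions by source, then schedule
# the cheapest partitions first so partial KG output appears early.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class MappingAssertion:
    name: str         # e.g. an RML TriplesMap identifier (illustrative)
    source: str       # the logical source the assertion reads from
    est_triples: int  # rough size estimate used for ordering

def plan(assertions):
    # Partition: one group per data source, so each source is scanned once.
    partitions = defaultdict(list)
    for a in assertions:
        partitions[a.source].append(a)
    # Greedy schedule by estimated cost (a stand-in for the paper's planner).
    return sorted(partitions.values(),
                  key=lambda p: sum(a.est_triples for a in p))

plan_example = plan([
    MappingAssertion("person_map", "people.csv", 10_000),
    MappingAssertion("address_map", "people.csv", 8_000),
    MappingAssertion("sensor_map", "sensors.json", 2_000_000),
])
for i, part in enumerate(plan_example, 1):
    print(f"partition {i}: {[a.name for a in part]}")
```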


A Unified and Constructive Framework for the Universality of Neural Networks

arXiv.org Machine Learning

One of the reasons why many neural networks are capable of replicating complicated tasks or functions is their universality property. Though the past few decades have seen tremendous advances in theories of neural networks, a single constructive framework for neural network universality remains unavailable. This paper is an effort to provide a unified and constructive framework for the universality of a large class of activations, including most existing ones. At the heart of the framework is the concept of neural network approximate identity (nAI). The main result is: any nAI activation function is universal. It turns out that most existing activations are nAI, and thus universal in the space of continuous functions on compacta. The framework has the following main properties. First, it is constructive, using elementary means from functional analysis, probability theory, and numerical analysis. Second, it is the first unified attempt that is valid for most existing activations. Third, as a by-product, the framework provides the first universality proof for some existing activation functions, including Mish, SiLU, ELU, and GELU. Fourth, it provides new proofs for most activation functions. Fifth, it discovers new activations with a guaranteed universality property. Sixth, for a given activation and error tolerance, the framework provides precisely the architecture of the corresponding one-hidden-layer neural network with a predetermined number of neurons, and the values of the weights and biases. Seventh, the framework allows us to abstractly present the first universal approximation with a favorable non-asymptotic rate.
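The paper's nAI construction is not reproduced here, but the flavor of one-hidden-layer approximation can be illustrated with a toy sketch: fix random hidden weights and biases, apply a GELU activation, and solve a least-squares problem for the output weights (a random-features shortcut, not the paper's constructive choice of weights):

```python
# Toy one-hidden-layer approximation of a 1D function with a GELU activation.
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
n_hidden = 200
x = np.linspace(-3, 3, 400)
target = np.sin(2 * x) + 0.3 * x**2              # function to approximate

W = rng.normal(scale=2.0, size=n_hidden)          # random hidden weights
b = rng.uniform(-3, 3, size=n_hidden)             # random hidden biases
H = gelu(np.outer(x, W) + b)                      # hidden features, shape (400, 200)

coef, *_ = np.linalg.lstsq(H, target, rcond=None)  # fit output weights
err = np.max(np.abs(H @ coef - target))
print(f"max abs error with {n_hidden} hidden units: {err:.4f}")
```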


Where Semantics and Machine Learning Converge

#artificialintelligence

Artificial Intelligence has a long history of oscillating between two somewhat contradictory poles. On one side, exemplified by Noam Chomsky, Marvin Minsky, Seymour Papert, and many others, is the idea that cognitive intelligence was algorithmic in nature - that there was a set of fundamental precepts that formed the foundation of language, and by extension, intelligence. On the other side were people like Donald Hebb, Frank Rosenblatt, Wesley Clarke, Henry Kelly, Arthur Bryson, Jr., and others, most not nearly as well known, who developed over time gradient descent, genetic algorithms, backpropagation, and other pieces of what would become known as neural networks. The rivalry between the two camps was fierce, and for a while, after Minsky and Papert's fairly damning analysis of Rosenblatt's Perceptron, one of the first neural models, it looked like the debate had been largely settled in favor of the algorithmic approach. In hindsight, the central obstacle both sides faced (and one that would put artificial intelligence research into a deep winter for more than a decade) was that they underestimated how much computing power would be needed for either model to actually bear fruit; it would take another fifty years (and an increase in computing power of some twenty-one orders of magnitude) before computers and networks reached a point where either of these technologies was feasible. As it turns out, both sides were actually right in some areas and wrong in others.


Refined Commonsense Knowledge from Large-Scale Web Contents

arXiv.org Artificial Intelligence

Commonsense knowledge (CSK) about concepts and their properties is useful for AI applications. Prior works like ConceptNet, COMET and others compiled large CSK collections, but are restricted in their expressiveness to subject-predicate-object (SPO) triples with simple concepts for S and strings for P and O. This paper presents a method, called ASCENT++, to automatically build a large-scale knowledge base (KB) of CSK assertions, with refined expressiveness and both better precision and recall than prior works. ASCENT++ goes beyond SPO triples by capturing composite concepts with subgroups and aspects, and by refining assertions with semantic facets. The latter is important to express the temporal and spatial validity of assertions and further qualifiers. ASCENT++ combines open information extraction with judicious cleaning and ranking by typicality and saliency scores. For high coverage, our method taps into the large-scale crawl C4 with broad web contents. The evaluation with human judgements shows the superior quality of the ASCENT++ KB, and an extrinsic evaluation for QA-support tasks underlines the benefits of ASCENT++. A web interface, data and code can be accessed at https://www.mpi-inf.mpg.de/ascentpp.
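As a purely illustrative sketch of the refined assertion shape described above (the field names are hypothetical, not ASCENT++'s actual schema), a plain SPO triple extended with a subject subgroup or aspect, semantic facets, and typicality/saliency scores might look like this in Python:

```python
# Hypothetical data structure for a refined commonsense assertion.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RefinedAssertion:
    subject: str                      # primary concept, e.g. "elephant"
    subgroup: Optional[str] = None    # composite concept, e.g. "African elephant"
    aspect: Optional[str] = None      # aspect of the subject, e.g. "trunk"
    predicate: str = ""
    obj: str = ""
    facets: dict = field(default_factory=dict)  # e.g. temporal/spatial validity
    typicality: float = 0.0
    saliency: float = 0.0

a = RefinedAssertion(
    subject="elephant", subgroup="African elephant",
    predicate="live in", obj="herds",
    facets={"location": "savannas"}, typicality=0.9, saliency=0.7,
)
print(a)
```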


Answering Fuzzy Queries over Fuzzy DL-Lite Ontologies

arXiv.org Artificial Intelligence

A prominent problem in knowledge representation is how to answer queries while also taking into account the implicit consequences of an ontology representing domain knowledge. While this problem has been widely studied within the realm of description logic ontologies, it has been surprisingly neglected within the context of vague or imprecise knowledge, particularly from the point of view of mathematical fuzzy logic. In this paper we study the problem of answering conjunctive queries and threshold queries w.r.t. ontologies in fuzzy DL-Lite. Specifically, we show through a rewriting approach that threshold query answering w.r.t. consistent ontologies remains in $AC^0$ in data complexity, but that conjunctive query answering is highly dependent on the selected triangular norm, which has an impact on the underlying semantics. For the idempotent Gödel t-norm, we provide an effective method based on a reduction to the classical case. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
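As a small illustration of why the choice of t-norm matters (my own sketch, not taken from the paper), the conjunction of two fuzzy assertion degrees under the Gödel t-norm (minimum) can satisfy a threshold query that the same degrees fail under the product t-norm:

```python
# Compare the Godel (min) and product t-norms on a threshold query "degree >= 0.7".
def godel_and(*degrees):
    return min(degrees)

def product_and(*degrees):
    result = 1.0
    for d in degrees:
        result *= d
    return result

# Membership degrees of two fuzzy assertions about the same individual.
tall, fast = 0.8, 0.85

for name, tnorm in [("Godel", godel_and), ("product", product_and)]:
    degree = tnorm(tall, fast)
    print(f"{name}: conjunction degree = {degree:.2f}, "
          f"threshold >= 0.7 satisfied: {degree >= 0.7}")
```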