Leone

AAAI Conferences

Datalog∃ is the extension of Datalog that allows existentially quantified variables in rule heads. This language is highly expressive and enables easy and powerful knowledge modeling, but the presence of existentially quantified variables makes reasoning over Datalog∃ undecidable in the general case. The results in this paper enable powerful, yet decidable and efficient, reasoning (query answering) on top of Datalog∃ programs. On the theoretical side, we define the class of parsimonious Datalog∃ programs and show that it allows for decidable and efficiently computable reasoning. Unfortunately, we demonstrate that recognizing parsimony is undecidable. However, we single out Shy, an easily recognizable fragment of parsimonious programs that significantly extends both Datalog and Linear Datalog∃, while preserving the same (data and combined) complexity of query answering as Datalog, despite the addition of existential quantifiers. On the practical side, we implement a bottom-up evaluation strategy for Shy programs inside the DLV system, enhancing the computation with a number of optimization techniques, to obtain DLV∃ -- a powerful system for answering conjunctive queries over Shy programs, which is profitably applicable to ontology-based query answering. Moreover, we carry out an experimental analysis, comparing DLV∃ against a number of state-of-the-art systems for ontology-based query answering. The results confirm the effectiveness of DLV∃, which outperforms all other systems in the benchmark domain.
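The key feature the abstract describes -- existentially quantified variables in rule heads -- can be illustrated with a toy chase step. The sketch below is a hedged illustration only (the rule, predicate names, and null-naming scheme are invented for the example; this is not DLV∃'s evaluation strategy): the rule ∃Y father(Y, X) :- person(X) is satisfied by inventing a fresh labeled null for Y whenever no witness already exists.

```python
import itertools

# Toy chase step for one existential rule, in the spirit of Datalog∃
# (an illustration, NOT the DLV∃ implementation):
#     ∃Y father(Y, X) :- person(X).
# Facts are (predicate, argument-tuple) pairs.

_null_counter = itertools.count(1)

def apply_existential_rule(facts):
    """Apply the rule once: for each person X lacking a father fact,
    invent a fresh labeled null as the existential witness."""
    new_facts = set(facts)
    for pred, args in facts:
        if pred == "person":
            x = args[0]
            # Skip if some father(_, x) witness is already present.
            if any(p == "father" and a[1] == x for p, a in new_facts):
                continue
            null = f"_n{next(_null_counter)}"  # fresh labeled null for Y
            new_facts.add(("father", (null, x)))
    return new_facts

facts = {("person", ("alice",))}
print(sorted(apply_existential_rule(facts)))
```

Because each application can invent new nulls that may in turn trigger further rules, iterating such steps need not terminate -- which is the intuition behind the undecidability the paper works around.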


New AI, data management features highlight ThoughtSpot 6.2

#artificialintelligence

New augmented intelligence and no-code data management capabilities in ThoughtSpot 6.2 aim to make the BI tool easier and faster for users to explore data. ThoughtSpot, a BI vendor founded in 2012 and based in Sunnyvale, Calif., unveiled its latest platform update on Wednesday with 10 new features now generally available. ThoughtSpot 6.2 includes Answer Explorer 2, a search tool that utilizes AI and machine learning to not only help customers run queries but guide users to questions they didn't think to ask on their own. The feature is able to recommend additional searches based on users' previous activity, and over time continuously improves as it learns more about users' needs. In addition, DataFlow improves the data management capabilities of ThoughtSpot's platform by enabling customers to simply point and click to load their data into Falcon, the vendor's in-memory database.


Is it best to buy or build a network automation system?

#artificialintelligence

Mike Leone, an analyst at Enterprise Strategy Group in Milford, Mass., sees incredible traction for AI and machine learning in enterprise IT. While AI and machine learning are top priorities for companies working toward digital transformation, he said, investment remains modest as a result of the infrastructure costs associated with these new technologies. Both fields rely heavily on different elements of the technology stack, from physical hardware supporting storage, compute and networking to software that handles compliance and other requirements. Yet enterprises still struggle to have all their networking infrastructure in sync, citing security, compliance, and to a lesser extent, big data, as the "weak links" in the chain, according to Leone. A majority of organizations rely on three different tools to develop, test, deploy and manage machine learning models, ESG said.


Intel Habana: What Does It Mean For AI (Artificial Intelligence)?

#artificialintelligence

Intel said Monday, Dec. 16, that it has bought Israeli artificial intelligence startup Habana Labs for $2 billion. This week Intel agreed to pay roughly $2 billion for Habana Labs. Based in Israel and founded in 2015, the company is a startup focused on AI (Artificial Intelligence) chips. Keep in mind that Habana has raised a total of $75 million, which is a fairly modest amount for a hardware company (Intel Capital was one of the investors). According to Intel's executive vice president and general manager of the Data Platforms Group, Navin Shenoy: "This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need–from the intelligent edge to the data center."


Qlik Sense Business improves Qlik's cloud, AI capabilities

#artificialintelligence

With the release of Qlik Sense Business on Tuesday, Qlik extended the reach of its cloud-first capabilities. The offering replaces Qlik Sense Cloud Business, which the analytics and business intelligence vendor, based in King of Prussia, Pa., debuted in 2015. In addition, Qlik rolled out Qlik Sense September 2019, the latest update of its central BI product. Qlik Sense Business is a SaaS offering built on third-generation BI capabilities -- augmented intelligence and machine learning. It differs from Qlik Sense Cloud Business by removing limits on the number of users, connecting more seamlessly to Qlik Sense Enterprise and providing expanded AI and machine learning capabilities.


Externally Supported Models for Efficient Computation of Paracoherent Answer Sets

Amendola, Giovanni (University of Calabria) | Dodaro, Carmine (University of Genova) | Faber, Wolfgang (University of Huddersfield) | Ricca, Francesco (University of Calabria)

AAAI Conferences

Answer Set Programming (ASP) is a well-established formalism for nonmonotonic reasoning. While incoherence, the non-existence of answer sets for some programs, is an important feature of ASP, it has frequently been criticised and indeed has some disadvantages, especially for query answering. Paracoherent semantics have been suggested as a remedy, which extend the classical notion of answer sets to draw meaningful conclusions also from incoherent programs. In this paper we present an alternative characterization of the two major paracoherent semantics in terms of (extended) externally supported models. This definition uses a transformation of ASP programs that is more parsimonious than the classic epistemic transformation used in recent implementations. A performance comparison carried out on benchmarks from ASP competitions shows that the usage of the new transformation brings about performance improvements that are independent of the underlying algorithms.


On the Computation of Paracoherent Answer Sets

Amendola, Giovanni (University of Calabria) | Dodaro, Carmine (University of Calabria) | Faber, Wolfgang (University of Huddersfield) | Leone, Nicola (University of Calabria) | Ricca, Francesco (University of Calabria)

AAAI Conferences

Answer Set Programming (ASP) is a well-established formalism for nonmonotonic reasoning. An ASP program can have no answer set due to cyclic default negation. In this case, it is not possible to draw any conclusion, even if this is not intended. Recently, several paracoherent semantics have been proposed that address this issue, and several potential applications for these semantics have been identified. However, paracoherent semantics have essentially been inapplicable in practice, due to the lack of efficient algorithms and implementations. In this paper, this lack is addressed, and several different algorithms to compute semi-stable and semi-equilibrium models are proposed and implemented into an answer set solving framework. An empirical performance comparison among the new algorithms on benchmarks from ASP competitions is given as well.


Answer Sets and the Language of Answer Set Programming

Lifschitz, Vladimir (University of Texas at Austin)

AI Magazine

Its main ideas are described in the article by Janhunen and Niemelä (2016) and in other contributions to this special issue. In this introductory article my goal is to discuss the concept of an answer set, or stable model, which defines the semantics of ASP languages. The answer sets of a logic program are sets of atomic formulas without variables ("ground atoms"), and they were introduced in the course of research on the semantics of negation in Prolog. For this reason, I will start with examples illustrating the relationship between answer sets and Prolog and the relationship between answer set solvers and Prolog systems. Then I will review the mathematical definition of an answer set and discuss some extensions of the basic language of ASP.
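The mathematical definition of an answer set that the article reviews -- a set of ground atoms that is the minimal model of the program's Gelfond-Lifschitz reduct -- can be made concrete with a small sketch. The code below is a teaching illustration for ground normal programs, not a solver; the rule representation is an assumption made for the example.

```python
# Minimal, illustrative check of the answer-set (stable-model) definition
# for ground normal programs; a teaching sketch, not an ASP solver.
# A rule is (head, positive_body, negative_body), all ground atoms.

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body
    intersects the candidate; delete the negative bodies otherwise."""
    return [(h, pos) for (h, pos, neg) in program
            if not (set(neg) & candidate)]

def minimal_model(positive_program):
    """Least model of a positive (negation-free) program, by fixpoint."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_program:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(program, candidate):
    """candidate is an answer set iff it is exactly the least model
    of the program's reduct with respect to candidate."""
    return minimal_model(reduct(program, candidate)) == set(candidate)

# p :- not q.   q :- not p.   (two answer sets: {p} and {q})
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_answer_set(prog, {"p"}))        # True
print(is_answer_set(prog, {"q"}))        # True
print(is_answer_set(prog, {"p", "q"}))   # False: not minimal
```

The last check shows the characteristic nonmonotonic behavior: {p, q} satisfies both rules classically, yet is not an answer set because it is not the least model of its own reduct.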


Grounding and Solving in Answer Set Programming

Kaufmann, Benjamin (University of Potsdam) | Leone, Nicola (University of Calabria) | Perri, Simona (University of Calabria) | Schaub, Torsten (University of Potsdam)

AI Magazine

At first, a problem is expressed as a logic program. ASP's success is largely due to the availability of a rich modeling language (Gebser and Schaub 2016) along with effective systems. Early ASP solvers SModels (Simons, Niemelä, and Soininen 2002) and DLV (Leone et al. 2006) were followed by SAT-based solvers. Grounders such as DLV (Faber, Leone, and Perri 2012) or GrinGo (Gebser et al. 2011) are based on seminaive database evaluation techniques (Ullman 1988) for avoiding duplicate work during grounding; grounding is seen as an iterative process. It may produce an exponential number of ground rules, corresponding to the number of n-tuples over a set of two elements. For more details about the complexity of ASP the reader may refer to Dantsin et al. (2001).
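The seminaive evaluation idea mentioned in the abstract can be sketched on the classic transitive-closure program. This is a hedged, minimal illustration of the technique (not the actual DLV or GrinGo grounder): each round joins only the *newly derived* facts (the delta) against the base relation, so known tuples are never rederived.

```python
# Seminaive bottom-up evaluation (in the style of Ullman 1988) for
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
# An illustration only -- real grounders handle arbitrary rules.

def transitive_closure(edges):
    path = set(edges)    # base rule: every edge is a path
    delta = set(edges)   # facts derived for the first time last round
    while delta:
        # Join only the delta with edge, not the whole path relation.
        new = {(x, z) for (x, y) in delta
                      for (y2, z) in edges if y == y2}
        delta = new - path   # keep only genuinely new facts
        path |= delta
    return path

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
# → [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Restricting the join to the delta is exactly what "avoiding duplicate work during grounding" refers to: a naive loop would re-join the entire path relation with edge on every iteration.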


Unfounded Sets and Well-Founded Semantics of Answer Set Programs with Aggregates

Alviano, M., Calimeri, F., Faber, W., Leone, N., Perri, S.

Journal of Artificial Intelligence Research

Logic programs with aggregates (LPA) are one of the major linguistic extensions to Logic Programming (LP). In this work, we propose a generalization of the notions of unfounded set and well-founded semantics for programs with monotone and antimonotone aggregates (LPAma programs). In particular, we present a new notion of unfounded set for LPAma programs, which is a sound generalization of the original definition for standard (aggregate-free) LP. On this basis, we define a well-founded operator for LPAma programs, the fixpoint of which is called well-founded model (or well-founded semantics) for LPAma programs. The most important properties of unfounded sets and the well-founded semantics for standard LP are retained by this generalization, notably existence and uniqueness of the well-founded model, together with a strong relationship to the answer set semantics for LPAma programs. We show that one of the D-well-founded semantics, defined by Pelov, Denecker, and Bruynooghe for a broader class of aggregates using approximating operators, coincides with the well-founded model as defined in this work on LPAma programs. We also discuss some complexity issues, most importantly we give a formal proof of tractable computation of the well-founded model for LPA programs. Moreover, we prove that for general LPA programs, which may contain aggregates that are neither monotone nor antimonotone, deciding satisfaction of aggregate expressions with respect to partial interpretations is coNP-complete. As a consequence, a well-founded semantics for general LPA programs that allows for tractable computation is unlikely to exist, which justifies the restriction on LPAma programs. Finally, we present a prototype system extending DLV, which supports the well-founded semantics for LPAma programs, at the time of writing the only implemented system that does so. 
Experiments with this prototype show significant computational advantages of aggregate constructs over equivalent aggregate-free encodings.
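The tractability contrast the abstract draws -- easy satisfaction checking for monotone and antimonotone aggregates under partial interpretations, coNP-completeness in general -- rests on a simple observation that can be sketched as follows. The function names and the three-valued encoding are assumptions made for this illustration (not the prototype system's API): for a monotone or antimonotone aggregate it suffices to inspect a single extreme total extension, rather than all (exponentially many) extensions.

```python
# Hedged sketch: checking whether an aggregate *certainly* holds under a
# partial interpretation, for the easy monotone/antimonotone cases.
# A partial interpretation maps atoms to True, False, or None (undefined).

def certainly_count_at_least(atoms, k, partial):
    """COUNT{atoms} >= k is monotone: it certainly holds in every total
    extension iff it holds when all undefined atoms become false."""
    true_now = sum(1 for a in atoms if partial.get(a) is True)
    return true_now >= k

def certainly_count_at_most(atoms, k, partial):
    """COUNT{atoms} <= k is antimonotone: it certainly holds iff it
    holds when all undefined atoms become true."""
    possibly_true = sum(1 for a in atoms if partial.get(a) is not False)
    return possibly_true <= k

atoms = ["a", "b", "c"]
partial = {"a": True, "b": False}                   # "c" is undefined
print(certainly_count_at_least(atoms, 1, partial))  # True: a is already true
print(certainly_count_at_least(atoms, 2, partial))  # False: c might be false
print(certainly_count_at_most(atoms, 2, partial))   # True: at most a and c
print(certainly_count_at_most(atoms, 1, partial))   # False: c might be true
```

For an aggregate that is neither monotone nor antimonotone (e.g. an equality bound), no single extreme extension is decisive, which is where the coNP-hardness the paper proves comes from.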