If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
I have spent more than 25 years researching and implementing AI technologies -- from the days of IBM's "discovery server," which led to "OmniFind" and then became part of IBM's flagship AI brand Watson, to various iterations of text analytics and machine learning applications, to prototypes and deployed projects using conversational technologies. Across these cycles of new technology -- all of which were marked by hype, missed expectations, and eventual adoption -- I have seen patterns of success and failure as the AI landscape has evolved. In The AI-Powered Enterprise, I seek to help businesses deliver on AI's promise of revolutionary change. Achieving this goal means avoiding the most common mistakes I've seen companies make. One of them: executives tend to think AI technology is beyond their ability to understand. The complex programming may be out of reach for many, but the basic functioning needs to be explainable and understandable in business terms.
When I started learning about the semantic web, it was quite foreign territory, and the practitioners all seemed to be talking over my head. So when I began to figure it out, I thought it would be valuable to write an introduction for those who are interested but a little put off. Well, the semantic web is a whole bunch of things stitched together with many tools and different technologies and standards. Let's start with the problem the semantic web is trying to solve. Microsoft explained it very well with its Bing commercials on search overload. Not that Bing solves it, but at least Microsoft is good at explaining the problem.
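To make the idea concrete, here is a minimal sketch (with made-up facts and function names, not any real semantic web API) of the semantic web's core move: storing facts as subject-predicate-object triples that a machine can query directly, instead of guessing meaning from keyword matches.

```python
# Illustrative only: a tiny triple store with pattern-based querying,
# the basic shape behind RDF and SPARQL on the semantic web.

triples = [
    ("Rome", "is_a", "City"),
    ("Rome", "capital_of", "Italy"),
    ("Italy", "is_a", "Country"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "What is Rome the capital of?" -- an unambiguous, machine-answerable question,
# rather than a pile of keyword-matched pages to wade through.
print(query(subject="Rome", predicate="capital_of"))
```

Real semantic web systems express the same pattern with RDF triples and SPARQL queries over web-scale data; the toy above only shows the shape of the idea.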
The infrastructure and tools necessary for large-scale data analytics, formerly the exclusive purview of experts, are increasingly available. Whereas a knowledgeable data miner or domain expert can rightly be expected to exercise caution when required (for example, around fallacious conclusions supposedly supported by the data), the nonexpert may benefit from some judicious assistance. This article describes an end-to-end learning framework that allows a novice to create models from data easily by helping structure the model-building process and capturing extended aspects of domain knowledge. By treating the whole modeling process interactively and exploiting high-level knowledge in the form of an ontology, the framework is able to aid the user in a number of ways, including helping to avoid pitfalls such as data dredging. Prudence must be exercised to avoid these hazards, as certain conclusions may only be supported if, for example, there is extra knowledge that gives reason to trust a narrower set of hypotheses.
One of the most significant developments in the current resurgence of statistical Artificial Intelligence is the emphasis it places on knowledge graphs. These repositories have paralleled the contemporary pervasiveness of machine learning for numerous reasons, from their aptitude for preparing training datasets to their role as a knowledge base that complements statistical learning. Consequently, graph technologies are becoming fairly ubiquitous in a broadening array of solutions, from Business Intelligence mechanisms to Digital Asset Management platforms. With tools like GraphQL gaining credence across the data landscape as well, it's not surprising that many consider knowledge graphs one of the core technologies shaping modern AI deployments. As such, it's imperative to understand that not all graphs are equal; there are different types and functions ascribed to the various graphs vying with one another for the knowledge graph title.
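One of the pairings mentioned above, using a knowledge graph to prepare training data, can be sketched in a few lines. This is an illustrative toy (the graph contents and function names are invented, not drawn from any particular system): each edge of the graph becomes a labeled example a link-prediction model could train on.

```python
# Illustrative only: turning knowledge-graph edges into supervised
# (input, label) pairs, one common way graphs feed machine learning.

knowledge_graph = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("ibuprofen", "treats", "headache"),
]

def training_examples(graph):
    """Each (head, relation, tail) edge yields ((head, tail), relation)."""
    return [((head, tail), relation) for head, relation, tail in graph]

examples = training_examples(knowledge_graph)
```

In practice the same idea underlies knowledge-graph embedding methods, where a model learns to score candidate (head, tail) pairs against known relations.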
In this article, we develop a framework for comparing ontologies and place a number of the more prominent ontologies into it. We have selected 10 specific projects for this study, including general ontologies, domain-specific ones, and one knowledge representation system. The comparison framework includes general characteristics, such as the purpose of an ontology, its coverage (general or domain specific), its size, and the formalism used. It also includes the design process used in creating an ontology and the methods used to evaluate it. Characteristics that describe the content of an ontology include taxonomic organization, types of concept covered, top-level divisions, internal structure of concepts, representation of part-whole relations, and the presence and nature of additional axioms.
This article presents the methodology that has been successfully used over the past seven years by an interdisciplinary team to create the International Committee for Documentation of the International Council of Museums (CIDOC) Conceptual Reference Model (CRM), a high-level ontology to enable information integration for cultural heritage data and their correlation with library and archive information. The CIDOC CRM is now in the process of becoming an International Organization for Standardization (ISO) standard. This article justifies the methodology and design in detail through functional requirements and gives examples of its contents. The CIDOC CRM analyzes the common conceptualizations behind data and metadata structures to support data transformation, mediation, and merging. It is argued that such ontologies are property-centric, in contrast to terminological systems, and should be built with different methodologies.
While the amount of data stored in current information systems continuously grows, and the processes making use of such data become more and more complex, extracting knowledge and getting insights from these data, as well as governing both data and the associated processes, are still challenging tasks. The problem is complicated by the proliferation of data sources and services both within a single organization and in cooperating environments. Effectively accessing, integrating, and managing data in complex organizations is still one of the main issues faced by the information technology industry today. Indeed, it is not surprising that data scientists spend a comparatively large amount of time in the data preparation phase of a project, compared with the data mining and knowledge discovery phase. Whether you call it data wrangling, data munging, or data integration, it is estimated that 50%-80% of a data scientist's time is spent on collecting and organizing data for analysis.
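A small sketch makes the data-preparation burden concrete. The records and field names below are invented for illustration: two sources describe customers with different field names and inconsistent formatting, and must be reconciled into one schema before any analysis can begin.

```python
# Illustrative only: the kind of reconciliation ("wrangling") work that
# precedes analysis when the same entities arrive from different sources.

source_a = [{"cust_id": "001", "name": "Ada Lovelace "}]      # padded names, zero-padded ids
source_b = [{"customer": "2", "full_name": "grace hopper"}]   # different keys, lowercase names

def normalize_a(record):
    """Map source A's schema onto the common {id, name} schema."""
    return {"id": int(record["cust_id"]), "name": record["name"].strip().title()}

def normalize_b(record):
    """Map source B's schema onto the same common schema."""
    return {"id": int(record["customer"]), "name": record["full_name"].strip().title()}

cleaned = [normalize_a(r) for r in source_a] + [normalize_b(r) for r in source_b]
```

Even this toy version shows why the work dominates project time: every source needs its own mapping, and the mappings must agree on one target schema.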
Marketing scientist Kevin Gray asks Dr. Anna Farzindar of the University of Southern California about chatbots and the ways they are used. Is there a formal definition you prefer? Conversational or dialog agents are designed to communicate with us in human language. These software agents are deployed everywhere around us: when talking to your car, communicating with robots, or using your personal assistant on any device or smartphone, such as Alexa, Cortana, Siri, or Google Assistant. The term "chatbot" is often used in industry for conversational agents that can be integrated through any online messaging application.