If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In a classic experiment on human social intelligence by psychologists Felix Warneken and Michael Tomasello, an 18-month-old toddler watches a man carry a stack of books towards an unopened cabinet. When the man reaches the cabinet, he clumsily bangs the books against the door of the cabinet several times, then makes a puzzled noise. Something remarkable happens next: the toddler offers to help. Having inferred the man's goal, the toddler walks up to the cabinet and opens its doors, allowing the man to place his books inside. But how is the toddler, with such limited life experience, able to make this inference?
Computer vision enables computers to understand the content of images and videos. The goal in computer vision is to automate tasks that the human visual system can do. Computer vision tasks include image acquisition, image processing, and image analysis. The image data can come in different forms, such as video sequences, views from multiple cameras at different angles, or multi-dimensional data from a medical scanner. One well-known resource is LabelMe, a large dataset created by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) containing 187,240 images, 62,197 annotated images, and 658,992 labeled objects.
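As a minimal illustration of one image-processing step, the sketch below converts a toy RGB image to grayscale using the standard ITU-R BT.601 luma weights; the array and function name are ours, not taken from any particular library:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB image (H x W x 3, values 0-255) to a single
    grayscale channel using the ITU-R BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

# Toy 2x2 image: one white pixel, the rest black.
img = np.zeros((2, 2, 3))
img[0, 0] = [255, 255, 255]
gray = to_grayscale(img)
```

Real pipelines would add steps such as resizing, filtering, and feature extraction on top of simple conversions like this.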
In this paper, we study how graph transformations based on sesqui-pushout rewriting can be reversed and how the composition of rewrites can be constructed. We illustrate how such reversibility and composition can be used to design an audit trail system for individual graphs and graph hierarchies. This provides us with a compact way to maintain the history of updates of an object, including its multiple versions. The main application of the designed framework is an audit trail of updates to knowledge represented by hierarchies of graphs. Therefore, we introduce the notion of rule hierarchy that represents a transformation of the entire hierarchy, study how rule hierarchies can be applied to hierarchies and analyse the conditions under which this application is reversible. We then present a theory for constructing the composition of consecutive hierarchy rewrites. The prototype audit trail system for transformations in hierarchies of simple graphs with attributes is implemented as part of the ReGraph Python library.
One of the most important causes of frequent confusion is the difference between Artificial Intelligence and Artificial General Intelligence. Yes, AI is pretty much just information processing, but that's because intelligence has been redefined in this field. It's like how physicists use the term information in the context of entropy: a crucial, but somewhat slippery, distinction that contradicts everyday usage of the word. AI was originally intended to mean what AGI means now, but as people worked on it and we began to understand how sophisticated the things our brains do that we take for granted really are, the goalposts moved. AI has come to mean any small step on the journey towards more sophisticated computers.
Modern machine learning research has demonstrated remarkable achievements. Today, we can train machines to detect objects in images, extract meaning from text, stop spam emails, drive cars, discover new drug candidates, and beat top players in chess, Go, and countless other games. A lot of these advancements are powered by deep learning, in particular deep neural networks. Yet, the theory behind deep neural networks remains poorly understood. Sure, we understand the math of what individual neurons are doing, but we lack a mathematical theory of the emergent behavior of entire networks.
Activity recognition using built-in sensors in smart and wearable devices provides great opportunities to understand and detect human behavior in the wild and gives a more holistic view of individuals' health and well-being. Numerous computational methods have been applied to sensor streams to recognize different daily activities. However, most methods are unable to capture the different layers of activities concealed in human behavior. Also, the performance of the models starts to decrease as the number of activities increases. This research aims to build a hierarchical classification with neural networks to recognize human activities at different levels of abstraction. We evaluate our model on the ExtraSensory dataset, a dataset collected in the wild containing data from smartphones and smartwatches. We use a two-level hierarchy with a total of six mutually exclusive labels, namely "lying down", "sitting", "standing in place", "walking", "running", and "bicycling", divided into "stationary" and "non-stationary". The results show that our model can recognize low-level activities (stationary/non-stationary) with 95.8% accuracy, and achieves an overall accuracy of 92.8% over the six labels. This is 3% above our best-performing baseline.
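The two-level routing described above can be sketched as follows. This is a toy illustration of the hierarchy, not the paper's model: the feature names and thresholds are invented placeholders standing in for the trained neural networks at each level.

```python
# Toy sketch of two-level hierarchical activity classification.
# Level 1 predicts the coarse group; level 2 routes to a specific label.
# Feature names and thresholds are hypothetical placeholders.

def classify_level1(features):
    """Coarse classifier: stationary vs non-stationary."""
    return "stationary" if features["motion_energy"] < 0.5 else "non-stationary"

def classify_level2(group, features):
    """Fine classifier, conditioned on the level-1 group."""
    if group == "stationary":
        if features["posture_angle"] < 20:
            return "lying down"
        return "sitting" if features["posture_angle"] < 70 else "standing in place"
    # non-stationary branch
    if features["speed"] > 4.0:
        return "bicycling"
    return "running" if features["speed"] > 2.0 else "walking"

def classify(features):
    group = classify_level1(features)
    return group, classify_level2(group, features)
```

In the paper's setting each stage would be a trained network, but the control flow is the same: the coarse decision narrows which fine-grained labels are even considered, which is why the two-level accuracy (95.8%) can exceed the six-label accuracy (92.8%).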
Google has some sweet new artificial intelligence technology that can take elements of a website and convert them into a really slick video. In this multi-channel world we live in, brands spend an awful lot of time and money reformatting content for different platforms. URL2Video, a new project from Google Research recently described on the Google AI Blog, automatically converts a web page into a short video, and the great thing is, it's capable of formatting that video in different aspect ratios, suiting both vertical and horizontal orientations. The tool interrogates the website code and walks the DOM looking for multimedia elements (headings, images, videos, and so on) that it can leverage to create the content.
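That DOM walk can be sketched with Python's standard-library HTML parser. To be clear, this is our own illustration of the general idea, not Google's implementation:

```python
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collect headings, images, and videos from a page, in document order."""

    def __init__(self):
        super().__init__()
        self.assets = []
        self._in_heading = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("h1", "h2", "h3"):
            self._in_heading = tag
        elif tag == "img" and "src" in attrs:
            self.assets.append(("image", attrs["src"]))
        elif tag == "video" and "src" in attrs:
            self.assets.append(("video", attrs["src"]))

    def handle_endtag(self, tag):
        if tag == self._in_heading:
            self._in_heading = None

    def handle_data(self, data):
        if self._in_heading and data.strip():
            self.assets.append(("heading", data.strip()))

collector = AssetCollector()
collector.feed('<h1>Sale!</h1><img src="hero.jpg"><h2>Details</h2>')
```

A real system would then score and sequence these assets into video scenes per aspect ratio; the collection pass above is just the first step.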
Two of my favourite pyramids are the Data Science Hierarchy of Needs and the Minimum Viable Product. Combining them helps us build effective artificial intelligence (AI) proofs of concept in businesses. It also supports building AI competency while demonstrating Return on Investment (ROI). Monica Rogati introduced the Data Science Hierarchy of Needs in the 2017 Hacker Noon article, The AI Hierarchy of Needs. Rogati uses the pyramid to explain that, as in Maslow's Hierarchy of Needs, the essentials must be in place before you can move towards the ultimate goal.
Geoffrey Hinton is a pioneer in the field of artificial neural networks and co-published the first paper on the backpropagation algorithm for training multilayer perceptron networks. He may also have introduced the phrasing "deep" to describe the development of large artificial neural networks. He co-authored a 2006 paper titled "A Fast Learning Algorithm for Deep Belief Nets", which describes an approach to training a "deep" (as in many-layered) network of restricted Boltzmann machines. As the abstract puts it: "Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory." This paper, and the related paper on an undirected deep network that Geoff co-authored, "Deep Boltzmann Machines", were well received by the community (now cited many hundreds of times) because they were successful examples of greedy layer-wise training, allowing many more layers in feedforward networks.
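The core idea, training one layer at a time and feeding its hidden activations to the next, can be sketched in NumPy. This is a simplified CD-1 sketch of the greedy layer-wise scheme on random toy data, not the exact procedure from the 2006 paper (which also involves an associative-memory top layer and fine-tuning):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=20, lr=0.1):
    """Train one binary RBM with one-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        # positive phase: hidden probabilities given the data
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # negative phase: one Gibbs step back down and up again
        v_prob = sigmoid(h_sample @ W.T + b_v)
        h_prob_neg = sigmoid(v_prob @ W + b_h)
        # approximate log-likelihood gradient step
        W += lr * (data.T @ h_prob - v_prob.T @ h_prob_neg) / len(data)
        b_v += lr * (data - v_prob).mean(axis=0)
        b_h += lr * (h_prob - h_prob_neg).mean(axis=0)
    return W, b_h

# Greedy layer-wise stacking: each trained layer's hidden activations
# become the "data" for the next layer.
X = rng.integers(0, 2, size=(100, 8)).astype(float)
activations = X
stack = []
for n_hidden in (6, 4):
    W, b_h = train_rbm(activations, n_hidden)
    stack.append((W, b_h))
    activations = sigmoid(activations @ W + b_h)
```

The key point the papers demonstrated is that this one-layer-at-a-time scheme gives each layer a sensible starting point, sidestepping the optimization difficulties that had limited the depth of networks trained end to end from random weights.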
The theory is that if the model stores information which can be transposed in consistent ways, then that will result in knowledge and some level of intelligence. The main constraints are time and the conservation of energy, but the information transpositions are also very limited. As part of the design, patterns have to become distinct, and that is realised by unique paths through the neural structures. The design may now also define uniqueness through the pattern result and not just its links. The earlier designs remain consistent: the between-level boundaries have been moved slightly, but the functionality is the same, with aggregations and increasing complexity through the layers. The two main models differ in their upper level only. One provides a propositional logic for mutually inclusive or exclusive pattern groups and sequences, while the other provides a behaviour script constructed from node types. These two views are complementary and would allow some control over the behaviour that might get selected.