If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Machine learning seems to be getting all the interest and hype these days, and some are even saying that it's going mainstream. There are now dedicated conferences and summits for ML, such as the 2021 AWS Machine Learning Summit. In my view, for ML to go mainstream, there are still real-world lessons we need to learn about translating ML into production for businesses, and I was hoping to get some takeaways from this summit. I've listed here the parts that made the most impact on me; hopefully, you'll find them useful when you are planning to apply ML. Since the summit was organized by AWS, it leans towards the use of their ML services, but the takeaways here also apply to other cloud computing platforms, such as GCP, that offer their own ML services.
ML and AI can be very intimidating for beginners. As prerequisites, you should be able to write a little code in either Python or R, have some mathematical background, and be able to understand basic ML jargon. But what's most important is to be guided by the right machine learning book. I absolutely love this book. This is the book you need to grok and master machine learning concepts.
The diverse and successful implementations of Artificial Intelligence (AI) across domains have enabled the delivery of augmented and personalised experiences to users at work as well as in their everyday lives. However, in recent times, edge-based AI is upping the ante and offering an enhanced experience by bringing the data and the compute closer to the point of interaction. Most of us are already consuming it in our daily lives. The autocorrect suggestions on our smartphone keyboards or several capabilities of the voice assistant that we use countless times throughout the day are examples of consumer-facing edge-based AI solutions. Edge-based AI uses machine learning algorithms to process data generated by a hardware device (Internet of Things endpoints, gateways, and other devices at the point of use) at the local level instead of sending data to remote servers or the cloud.
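The keyboard autocorrect mentioned above is a good mental model for edge-based AI: every step runs on the device itself, with no round trip to a remote server. Here is a minimal, purely illustrative pure-Python sketch of that idea; the word list and the edit-distance ranking are assumptions for the example, not any vendor's actual implementation.

```python
# Hypothetical sketch of on-device (edge) autocorrect: all data stays local;
# nothing is sent to a remote server or the cloud.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two words (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Illustrative local vocabulary -- on a real device this would be a
# much larger, personalized language model stored on the phone.
LOCAL_VOCAB = ["machine", "learning", "intelligence", "keyboard"]

def suggest(word: str) -> str:
    """Return the closest known word, computed entirely on the device."""
    return min(LOCAL_VOCAB, key=lambda w: edit_distance(word, w))
```

The point of the sketch is architectural: the lookup happens at the point of interaction, which is what distinguishes edge-based AI from cloud-based inference.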
This book highlights the latest technologies and applications of Artificial Intelligence (AI) in the domain of construction engineering and management. The construction industry worldwide has been a late adopter of digital technology; construction projects are still predominantly managed with a heavy reliance on the knowledge and experience of construction professionals. AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms (e.g., neural networks, process mining, and deep learning), allowing the computer to learn automatically from patterns or features in the data. It provides a wide range of solutions to many challenging construction problems, such as knowledge discovery, risk estimation, root cause analysis, damage assessment and prediction, and defect detection. A tremendous transformation has taken place in recent years with the emerging applications of AI.
This article is part of "Deconstructing artificial intelligence," a series of posts that explore the details of how AI applications work (In partnership with Paperspace). Deep neural networks have gained fame for their capability to process visual information. And in the past few years, they have become a key component of many computer vision applications. Among the key problems neural networks can solve is detecting and localizing objects in images. Object detection is used in many different domains, including autonomous driving, video surveillance, and healthcare.
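One concrete building block of the localization problem: detectors are scored by how well a predicted bounding box overlaps the ground-truth box, using intersection-over-union (IoU). A minimal pure-Python sketch follows; the `(x1, y1, x2, y2)` box format is an assumption for the example, and real pipelines compute this on framework tensors.

```python
# Minimal sketch of intersection-over-union (IoU), the standard measure of
# how well a predicted bounding box localizes an object.
# Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1.

def iou(box_a, box_b):
    # Corners of the overlapping region (if any).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

An IoU of 1.0 means a perfect localization, 0.0 means no overlap; detection benchmarks typically count a prediction as correct above a threshold such as 0.5.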
If we need to learn one thing about the numerous AI applications around us today, it is that they are examples of "artificial specific intelligence." In other words, they rely on algorithms that are great at very particular tasks, such as selecting a movie based on our watching history or keeping our car in the proper lane on the highway. Because it is so highly specialized, AI greatly outperforms human intelligence in those narrowly defined tasks. Take it from a person who recently spent 50 minutes picking a movie that itself lasted 77 minutes. However, AI's effectiveness at specialized jobs comes at the price of severe context blindness and a general inability to develop meaningful feedback loops: The typical algorithm does not and cannot consider the wider implications of the decisions it makes and hardly affords us users any control over its inner workings.
Before we get into the role of artificial intelligence (AI) and where it is set to take digital commerce, I think we should begin by unpacking a definition of digital commerce: what does it mean? It can be defined as the process of selling and buying products and/or services using digital channels. It includes the people, processes and technologies necessary to execute the offering of product, promotions, pricing, analytics, customer acquisition and retention, and customer experience at all touchpoints throughout the buying journey. This definition applies to all sectors of business, regardless of area of operations, so it includes banking, retail, automotive, etc. Sector is irrelevant in a world of digital transformation at the speed of COVID: all businesses seek digital channels to market their goods. Today, digital channels have expanded to encompass digital transformation, inclusive of the people, processes and technologies used throughout the customer buying journey.
With massive amounts of data being stored every second, there is an opportunity to create meaningful and revolutionary models. This data comes in several forms, including text, images and videos, all of which allow advanced models to be created using techniques such as deep learning. Further, drawing on this extensive data, applications built on technologies such as computer vision power products such as self-driving cars and facial recognition in phones. When creating a deep learning application, one of the first decisions to make is where the model will be trained: either locally on a machine or through a third-party cloud provider. This is an important decision, as it can significantly impact the training time of a model.
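A back-of-the-envelope comparison can make the local-versus-cloud decision concrete. The throughput figures below are purely illustrative assumptions, not benchmarks of any real hardware or provider.

```python
# Hypothetical comparison of local vs. cloud training time.
# All samples/second numbers are illustrative assumptions.

def training_hours(num_samples: int, epochs: int, samples_per_second: float) -> float:
    """Rough wall-clock training time, assuming constant throughput."""
    return num_samples * epochs / samples_per_second / 3600

local_gpu = 200        # samples/s on a single local workstation GPU (assumed)
cloud_gpus = 8 * 250   # samples/s on an 8-GPU cloud instance (assumed)

local = training_hours(1_000_000, 10, local_gpu)    # ~13.9 hours
cloud = training_hours(1_000_000, 10, cloud_gpus)   # ~1.4 hours
```

Under these assumptions the cloud run finishes roughly ten times faster, but a real decision would also weigh cost per hour, data-transfer time, and data governance constraints.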
The terms "artificial intelligence" and "machine learning" are often used interchangeably, but there's an important difference between the two. AI is an umbrella term for a range of techniques that allow computers to learn and act like humans. Put another way, AI is the computer being smart. Machine learning, however, accounts for how the computer becomes smart. But there's a reason the two are often conflated: The vast majority of AI today is based on machine learning.
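A toy illustration of that distinction, using nothing beyond plain Python: the program below is never told the rule y = 2x + 1; it "becomes smart" about it by learning the slope and intercept from data via ordinary least squares.

```python
# Toy machine-learning example: the rule y = 2x + 1 is never hard-coded;
# the program learns the slope and intercept from example data.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # generated by y = 2x + 1
slope, intercept = fit_line(xs, ys)
```

Hard-coding `2 * x + 1` would be ordinary programming; recovering those numbers from the data is the "learning" in machine learning, in miniature.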
TL;DR: Machine learning models are only as good as the data (features) they are trained on. In enterprises, data scientists can often train very effective models in the lab when given a free hand on which data to use. However, many of those data sources are not available in production environments due to disconnected systems and data silos. An AI-powered product that is limited to the data available within its application silo cannot recall historical data about its users or relevant contextual data from external sources. It is like a jellyfish: its autonomic system makes it functional and useful, but it lacks a brain.