This article is a response to a piece arguing that an AI winter may be inevitable. I believe, however, that there are fundamental differences between what happened in the 1970s (the first AI winter) and the late 1980s (the second AI winter, with the fall of expert systems) and what is happening now. The arrival and growth of the internet, smartphones, and social media mean that the volume and velocity of the data we generate are constantly increasing, and Machine Learning and Deep Learning are needed to make sense of that Big Data. For those wishing to see the details of what AI is, I suggest reading an introduction to AI; for the purposes of this article I will treat Machine Learning and Deep Learning as subsets of Artificial Intelligence (AI). AI deals with developing computing systems capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and making decisions in a constrained environment. The rapid growth of Big Data has driven much of the growth in AI, alongside the reduced cost of data storage (cloud servers) and Graphical Processing Units (GPUs) making Deep Learning more scalable.
Whether it's diagnosing patients or driving cars, we want to know whether we can trust a person before assigning them a sensitive task. In the human world, we have different ways to establish and measure trustworthiness. In artificial intelligence, the establishment of trust is still a work in progress. In recent years, deep learning has proven to be remarkably good at difficult tasks in computer vision, natural language processing, and other fields that were previously off-limits for computers. But we also have ample proof that placing blind trust in AI algorithms is a recipe for disaster: self-driving cars that miss lane dividers, melanoma detectors that look for ruler marks instead of malignant skin patterns, and hiring algorithms that discriminate against women are just a few of the many incidents that have been reported in recent years.
STMicroelectronics and Schneider Electric are demonstrating a prototype IoT sensor that enables new building-management services and efficiency gains by understanding building-occupancy levels and usage. The two companies have collaborated to integrate Artificial Intelligence (AI) into a high-performance people-counting sensor, which overcomes the challenge of monitoring attendance in large spaces with multiple entrance points. Schneider Electric will demonstrate this IoT sensor as a guest at ST Live Days, during the IoT&5G session on November 19, 2020. With the digitization of building occupancy, Schneider is pursuing its mission to be its customers' digital partner for sustainability and efficiency, delivering new and highly valuable insights such as queue monitoring to assist smart building management while respecting individuals' privacy by design. The advanced IoT sensor was developed by combining the expertise of ST's AI group with Schneider Electric's deep sensor-application expertise to identify and embed a high-performing object-detection neural network in a small microcontroller (MCU).
In a world filled with technology and artificial intelligence, it is becoming increasingly hard to distinguish between what is real and what is fake. Look at the two pictures below. Can you tell which one is a real-life photograph and which one was created by artificial intelligence? The crazy thing is that both of these images are actually fake, created by NVIDIA's new hyperrealistic face generator, which uses an algorithmic architecture called a generative adversarial network (GAN). Researching GANs and their applications further, I found that they can be used everywhere, from text and image generation to predicting the next frame in a video!
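The adversarial setup behind GANs, a generator learning to fool a discriminator while the discriminator learns to tell real samples from generated ones, can be sketched on toy one-dimensional data. Everything below (the linear generator, the logistic discriminator, the learning rate, the target distribution) is a made-up minimal illustration; real GANs pit deep networks against each other on images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN: the generator g(z) = a*z + b tries to turn noise z ~ N(0,1)
# into samples resembling real data x ~ N(3,1); the discriminator is a
# logistic classifier d(x) = sigmoid(w*x + c). Parameters and data are
# illustrative, not from any specific GAN paper.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.05

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    z = rng.standard_normal(64)
    real = 3.0 + rng.standard_normal(64)
    fake = a * z + b

    # Discriminator step: ascend E[log d(real)] + E[log(1 - d(fake))]
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator step: ascend E[log d(fake)] (the "non-saturating" loss)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# The generator's mean output (b, since E[z] = 0) should drift toward
# the real mean of 3 as training progresses.
print(f"generator mean output: {b:.2f}")
```

At equilibrium the discriminator can no longer separate real from fake, which is exactly what makes GAN-generated faces so hard to spot.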
For the second part of this article series, see here. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless. If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today.
When most people hear the term Artificial Intelligence, the first thing they usually think of is robots, or some famous science fiction movie like The Terminator depicting the rise of AI against humanity. Artificial intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning, analyzing, comprehending, and problem-solving. There are perhaps more real-world applications of artificial intelligence than many people realize. The ideal characteristic of artificial intelligence is its ability to rationalize and take the actions that have the best chance of achieving a specific goal or defined operation. With continued deep research into the field, AI is no longer just a few machines doing basic calculations.
In recent years, the media have been paying increasing attention to adversarial examples: input data such as images and audio that have been modified to manipulate the behavior of machine learning algorithms. Stickers pasted on stop signs that cause computer vision systems to mistake them for speed-limit signs; glasses that fool facial recognition systems; turtles that get classified as rifles -- these are just some of the many adversarial examples that have made headlines in the past few years. There's increasing concern about the cybersecurity implications of adversarial examples, especially as machine learning systems continue to become an important component of many applications we use. AI researchers and security experts are engaging in various efforts to educate the public about adversarial attacks and create more robust machine learning systems. Among these efforts is adversarial.js.
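To make the idea concrete, here is a sketch of the Fast Gradient Sign Method (FGSM), one of the standard recipes for crafting adversarial examples, applied to a toy logistic classifier. The weights and input below are invented for illustration; against a real image classifier the same nudge is applied per pixel and is typically small enough to be invisible:

```python
import numpy as np

# FGSM perturbs the input in the direction of the sign of the loss
# gradient with respect to the input: x_adv = x + eps * sign(dL/dx).
# The classifier here is a hand-made logistic model, purely illustrative.
w = np.array([2.0, -3.0, 1.0])   # classifier weights (made up)
b = 0.5

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(class = 1)

x = np.array([0.9, -0.2, 0.4])   # a "clean" input the model classifies correctly
y = 1                            # true label

# Gradient of the cross-entropy loss w.r.t. the input:
# for y = 1, d/dx[-log p] = (p - y) * w
grad_x = (predict(x) - y) * w

eps = 0.7
x_adv = x + eps * np.sign(grad_x)  # a small, targeted nudge per feature

print(predict(x) > 0.5, predict(x_adv) > 0.5)  # → True False
```

The attack needs only the gradient, which is why models that are differentiable end to end are so easy to fool once an attacker has access to them (or to a similar surrogate model).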
Welcome to my first blog on topics in artificial intelligence! Here I will introduce the topic of edge computing, with context in deep learning applications. This blog is largely adapted from a survey paper written by Xiaofei Wang et al.: Convergence of Edge Computing and Deep Learning: A Comprehensive Survey. If you're interested in learning more about any topic covered here, there are plenty of examples, figures, and explanations in the full 35-page survey: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8976180 Now, before we begin, I'd like to take a moment and motivate why edge computing and deep learning can be very powerful when combined: Deep learning is an increasingly capable branch of machine learning that allows computers to detect objects, recognize speech, translate languages, and make decisions. More machine learning problems are being solved every day with the advanced techniques that researchers continue to discover.
Every passing year brings the digital world a whole new crop of buzzwords, phrases, and technologies. Machine learning made a significant mark in 2020, with more people getting familiar with the technology and adapting it for better solutions. Machine learning is a form of artificial intelligence that automates data analysis, allowing computers to learn through experience and perform tasks without human intervention or explicit programming. Machine learning is an astonishing technology. Mastering machine learning tools will let people play with data, train models, discover new methods, and create their own algorithms.
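As a tiny illustration of "learning through experience rather than explicit programming", here is a one-nearest-neighbour classifier: no decision rule is written by hand, and the prediction comes entirely from labelled examples. The data points are made up for the sketch:

```python
import numpy as np

# Two clusters of labelled training examples (invented toy data).
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [4.8, 5.2]])
y_train = np.array([0, 0, 1, 1])

def predict(x):
    # Return the label of the closest training point: the "experience"
    # (the examples) is the entire model.
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

print(predict(np.array([1.1, 0.9])), predict(np.array([5.1, 4.9])))  # → 0 1
```

Swapping in more data, a different distance, or a learned model changes the predictions without changing a single hand-written rule, which is the core idea behind "learning from data".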
An increasing number of Twitter and LinkedIn influencers preach why you should start learning Machine Learning and how easy it is once you get started. While it's always great to hear some encouraging words, I like to look at things from another perspective. I don't want to sound pessimistic or discourage anyone; I'll just give my opinion. When I look at what these Machine Learning experts (or should I call them influencers?) post, I suspect the main reason comes from not knowing what Machine Learning engineers actually do.