If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The pandemic has accelerated the adoption of edge computing, or computation and data storage that's located close to where it's needed. According to the Linux Foundation's State of the Edge report, digital health care, manufacturing, and retail businesses are particularly likely to expand their use of edge computing by 2028. This is largely because of the technology's ability to improve response times and save bandwidth while enabling less constrained data analysis. While only about 10% of enterprise-generated data is currently created and processed outside a traditional data center or cloud, that's expected to increase to 75% by 2025, Gartner says. Internet of Things (IoT) devices are expected to contribute to a global datasphere of more than 175 zettabytes by 2025.
What is Edge AI? What are some applications of this technology? Edge computing runs processes locally on the device itself, instead of running them in the cloud. Processing data where it is generated allows it to be acted on much faster, removes the security risk of transferring the data to a cloud-based server, and reduces the cost of data transfer, as well as the risk of bandwidth outages disrupting performance. Computer vision and AI at the edge are becoming instrumental in powering everything from factory assembly lines and retail inventory management to hospital urgent-care medical imaging equipment like X-ray and CT scanners. Drones, security cameras, robots, facial recognition on cell phones, self-driving vehicles, and more all utilize this technology as well.
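To make that tradeoff concrete, here is a minimal Python sketch of why keeping inference on the device cuts response time. The latency figures and the stub `classify` model are purely illustrative assumptions; real numbers depend on the network, the model, and the hardware.

```python
import time

# Hypothetical latency figures, for illustration only.
NETWORK_ROUND_TRIP_S = 0.120   # assumed WAN round trip to a cloud endpoint
LOCAL_INFERENCE_S = 0.015      # assumed on-device inference time

def classify(frame):
    """Stand-in for a real model; returns a label for a camera frame."""
    return "defect" if sum(frame) % 2 else "ok"

def cloud_inference(frame):
    # Data leaves the device: every frame pays the network round trip,
    # and the raw data itself crosses the network.
    time.sleep(NETWORK_ROUND_TRIP_S)
    return classify(frame)

def edge_inference(frame):
    # Data stays on the device: only the local compute cost applies.
    time.sleep(LOCAL_INFERENCE_S)
    return classify(frame)

frame = [3, 1, 4, 1, 5]
start = time.perf_counter()
label = edge_inference(frame)
edge_latency = time.perf_counter() - start
print(label, round(edge_latency, 3))
```

Both paths produce the same answer; the difference is that the edge path never ships the frame off the device, which is exactly the latency, cost, and privacy argument made above.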
Data-powered enterprises use data, analytics, and AI to fulfill their corporate purpose, achieve their business objectives, and drive innovation. And in today's shaky world, they need to be like water to get there: adaptive by design, hyper-agile, super-responsive. Increasingly, they find themselves building and using these water-like capabilities in one place: at the very outskirts of the business, far from central governance and IT, close to clients, partners, employees, the shop, the factory, the truck. It's out there in the field that up-to-date data needs to be collected in real time, analyzed on the spot, and turned into split-second actions, without depending on a corporate backbone that is simply too slow and too far away. Edge AI – the use of AI in physical devices and all sorts of other 'things' – is a perfect enabler for these new dynamics: equipped with sensors and microprocessors, even the tiniest of things can collect data, analyze it with powerful AI algorithms (such as deep learning), and turn it into immediate action. And all of this is done with specialized technology that is typically inexpensive and has a small footprint in terms of power consumption.
These days, companies use cloud services to receive and process the data they gather from sensors, cameras, and services. However, the volume of data is becoming so massive that sending and managing it is increasingly expensive. This is where Edge AI comes in: a combination of edge computing and artificial intelligence. Edge AI is a system of AI-equipped chips embedded in many devices, which can be installed and set up much closer to the sources of data. Although these chips have less processing power and may act more slowly, they can provide invaluable services in receiving and processing the data.
Artificial intelligence (AI) technologies have dramatically advanced in recent years, resulting in revolutionary changes in people's lives. Empowered by edge computing, AI workloads are migrating from centralized cloud architectures to distributed edge systems, introducing a new paradigm called edge AI. While edge AI has the promise of bringing significant increases in autonomy and intelligence into everyday lives through common edge devices, it also raises new challenges, especially for the development of its algorithms and the deployment of its services, which call for novel design methodologies catered to these unique challenges. In this paper, we provide a comprehensive survey of the latest enabling design methodologies that span the entire edge AI development stack. We suggest that the key methodologies for effective edge AI development are single-layer specialization and cross-layer co-design. We discuss representative methodologies in each category in detail, including on-device training methods, specialized software design, dedicated hardware design, benchmarking and design automation, software/hardware co-design, software/compiler co-design, and compiler/hardware co-design. Moreover, we attempt to reveal hidden cross-layer design opportunities that can further boost the solution quality of future edge AI and provide insights into future directions and emerging areas that require increased research focus.
Microsoft is rolling out its AI-powered live transcription service in Microsoft Teams, answering Zoom's recent expansion of its own live transcription feature. Competition remains fierce in the video-meeting market for live transcription functionality. In December, Cisco announced its closed captioning service for Webex; Zoom last month brought live transcription to free accounts, albeit in a limited manner; Google just brought live captions to Chrome, and now Microsoft is offering its own take on live transcriptions in Teams. Microsoft Teams live transcription can identify each speaker, and captures audio in "near real time" to provide a record of what's said during and after the meeting. "Delivering live transcription with high accuracy, minimal latency, and cost efficiency at enterprise scale has been one of the toughest challenges in the industry," says Microsoft's Shalendra Chhabra, the lead on conversational AI for Microsoft Teams meetings.
"Snitch Software" Will Go Wider. "Snitch software" is a newer trend that has just started sneaking into auditing practices. In a recent example from a court case between a vendor and a consumer, the vendor placed Piracy Detection and Reporting Security Software (PDRSS) into their product to track when unlicensed software is used from an IP address. Eventually, the vendor audited the consumer and explained that their software was being used incorrectly, but the consumer argued that had not been proven. This essentially led to the vendor explaining their implemented PDRSS, which led to a privacy and permissions dispute.
GPUs are great for tasks that can be broken up into multiple parts and processed in parallel. If you think of the central processing unit (CPU) of your laptop as its 'brain', the GPU is like a swarm of tiny, specialized 'brains'. Chipmakers are cranking up their GPUs to keep up with the exploding demand for AI in everything from chatbots to the computer vision of guided missiles. Industry leader Nvidia reported $5 billion in revenue in the last quarter. Amid the heady commercial success of GPU makers, it is hard to make a business case for a new approach.
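The "broken up into multiple parts" pattern can be sketched in a few lines of Python. This toy uses a thread pool rather than GPU cores, so it illustrates only the map/reduce shape of the work, not the actual speedup; a GPU applies the same split-and-combine idea across thousands of cores at once.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_dot(chunk):
    """Dot product of one chunk; each chunk is independent of the others."""
    xs, ys = chunk
    return sum(a * b for a, b in zip(xs, ys))

def parallel_dot(x, y, workers=4):
    # Split the vectors into independent chunks, map a worker over each,
    # then reduce the partial sums. This is the data-parallel shape that
    # GPUs exploit, just with thousands of cores instead of a few threads.
    size = max(1, len(x) // workers)
    chunks = [(x[i:i + size], y[i:i + size]) for i in range(0, len(x), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_dot, chunks))

x = list(range(1000))
y = list(range(1000))
print(parallel_dot(x, y) == sum(i * i for i in range(1000)))  # True
```

Because each chunk needs no information from the others, the work scales with the number of workers, which is precisely why tasks like matrix multiplication in neural networks map so well onto GPUs.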
IDC estimates that global budgets for Artificial Intelligence will double over the next four years, to $110 billion in 2024, per its recent Worldwide Artificial Intelligence Spending Guide. "Companies will adopt AI -- not just because they can, but because they must," IDC's AI program vice president Ritu Jyoti noted. "AI is the technology that will help businesses to be agile, innovate, and scale." The arrival of AI capabilities in the enterprise is no longer theoretical. "The last year has demonstrated a rapid acceleration that has changed the question from 'Where do artificial intelligence technologies fit within our organization?'
Dr. Eli David is a leading AI expert specializing in deep learning and evolutionary computation. He is the Co-Founder of DeepCube. Today, there are two possible deployments of deep learning technology: in the cloud and at the edge, directly on a device. A majority of these deployments rely on the cloud, due to the extensive requirements of processing power and memory consumption, as well as the size of AI models. While cloud deployments allow AI to benefit from the power of high-performance computing systems, challenges remain. Privacy concerns arise with the need for data to be sent back and forth from device to cloud for processing, and there are limitations due to latency, bandwidth, and connectivity.
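One common way to work around those model-size and memory constraints is post-training quantization: storing weights at lower precision so the model fits on a small device. The sketch below is a toy symmetric int8 scheme in plain Python, with made-up weight values; real toolchains add per-channel scales, calibration data, and int8 compute kernels.

```python
def quantize(weights):
    """Map float weights to int8 values in [-127, 127] plus one float scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.82, -1.54, 0.03, 1.27, -0.66]   # illustrative values
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Storage drops from 32 bits to 8 bits per weight (4x smaller), at the
# cost of a rounding error bounded by half the scale on each weight.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(max_err, 4))
```

Shrinking the weights this way reduces both the memory footprint and the bandwidth needed to move the model onto the device, which is a large part of how deployments migrate from the cloud to the edge.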