If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
When I ask people what they think the Internet of Things (IoT) is all about, the vast majority will say "smart homes," probably based on personal experience. If I say that it is also about industries making use of data from sensors, then most people's immediate reaction is to think of manufacturing. Sensors have been used for a long time in manufacturing, and the concept of using data generated at the edge to monitor and run automated processes is well understood. This perception, however, is underselling the IoT. In practice, it can be applied anywhere.
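The idea of using edge-generated data to monitor and drive automated processes can be sketched with a toy threshold monitor. This is a minimal illustration, not any particular IoT platform's API; the sensor values and thresholds are hypothetical:

```python
# Toy sketch of edge-side monitoring: an automated process reacts
# when a sensor reading drifts outside its expected operating band.
# All values here are hypothetical examples.

def check_reading(temperature_c, low=18.0, high=75.0):
    """Classify a single sensor reading against an operating band."""
    if temperature_c < low:
        return "alert:low"
    if temperature_c > high:
        return "alert:high"
    return "ok"

def monitor(readings):
    """Return the action an edge device would take for each reading."""
    return [check_reading(r) for r in readings]

actions = monitor([21.5, 80.2, 17.1, 45.0])
# actions == ["ok", "alert:high", "alert:low", "ok"]
```

In a real deployment, the "alert" branches would trigger an actuator or push a message upstream rather than just returning a label.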
Nvidia is no stranger to data-crunching applications of its GPU architecture. It has dominated the AI deep learning development space for years and sits comfortably in the scientific computing sphere too. But now it's looking to take on the field of machine learning, a market that accounts for over half of all data projects being undertaken in the world right now. To do so, it has launched dedicated machine learning hardware in the form of the DGX-2 and a new open-source platform, Rapids. Designed to work together as a complete end-to-end solution, Nvidia's GPU-powered machine learning platform is set to change how institutions and businesses crunch and understand their data.
The Microsoft Azure platform offers many cloud computing services, available through paid subscription plans, to fulfill our target cloud infrastructure requirements. Whether we want to set up virtual machines, mail servers, or storage servers, or provision machine learning servers and workspaces, Microsoft Azure is a complete toolkit. Implementing machine learning algorithms is a difficult task. In a typical machine learning solution, roughly 90% of our effort and focus goes into improving the solution's accuracy, while the remaining 10% goes into integrating the solution into an application. Machine learning work is therefore both resource- and time-consuming, yet very satisfying, since machine learning and artificial intelligence can solve problems that conventional software cannot reach.
On the heels of its dual announcement at the Open Compute Project Summit in Amsterdam this week (see related story), Xilinx yesterday disclosed that AMD and Xilinx have teamed to set an AI inference processing record of 30,000 images per second. The joint work of the two companies, announced at the Xilinx Developer Forum in San Jose by Xilinx CEO Victor Peng and AMD CTO Mark Papermaster, connects AMD's EPYC CPUs and the new Xilinx Alveo FPGA accelerator card, announced yesterday at the OCP Summit. The record, achieved at a batch size of 1 with Int8 precision, was set on a system that leverages two AMD EPYC 7551 server CPUs with PCIe connectivity, along with eight Alveo U250 accelerator cards. In a blog post, Xilinx said the inference performance is powered by Xilinx ML Suite, which allows developers to optimize and deploy accelerated inference and supports various machine learning frameworks, such as TensorFlow. The benchmark was performed on the GoogLeNet convolutional neural network.
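Int8 inference works by mapping floating-point weights and activations onto 8-bit integers, trading a small amount of precision for much higher throughput on hardware like the Alveo cards. A minimal sketch of symmetric Int8 quantization — the general idea only, not Xilinx ML Suite's actual implementation:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor Int8 quantization: map floats onto [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.27, 0.031, 1.0], dtype=np.float32)
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# each recovered value is within one quantization step (scale) of the original
```

The accelerator then runs the heavy matrix arithmetic in 8-bit integer form, which is why Int8 precision figures prominently in benchmarks like this one.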
You've probably heard of fingerprint scans, iris scans, and perhaps even eye gaze scans, but what about footstep-based biometrics? In new research published on the preprint server Arxiv.org, researchers at the Indian Institute of Technology in Delhi describe such a system in a paper titled "Person Identification using Seismic Signals generated from Footfalls." It's based on a fog computing architecture, which employs edge devices to carry out much of the computing, storage, and communication involved in data collection. "[With our approach], individuals are only required to walk through the active region of the sensor," they wrote.
SAN JOSE, Calif., September 10, 2018 -- Artificial intelligence (AI) and machine learning (ML) are opening up new ways for enterprises to solve complex problems. But they will also have a profound effect on the underlying infrastructure and processes of IT. According to Gartner, "only 4% of CIOs worldwide report that they have AI projects in production." As those projects move into production, IT will struggle to manage new workloads, new traffic patterns, and new relationships within the business. To help enterprises address these emerging challenges, Cisco is unveiling its first server built from the ground up for AI and ML workloads.
The common narrative of artificial intelligence is that it has finally taken off in recent years because there was enough data -- from mega repositories like Google -- and enough computing power through racks of servers equipped with fast processors and GPUs. That's not incorrect, but it's too simplistic to describe the future of machine learning and other forms of AI. That was the message from Intel's CTO of AI products, Amir Khosrowshahi, at VentureBeat's Transform 2018 conference outside San Francisco today. The challenge now is optimizing the whole process. Better algorithms require less computing and can draw accurate inferences from less data, said Khosrowshahi, cofounder of AI company Nervana Systems, which Intel acquired in August 2016.
This is an eclectic collection of interesting blog posts, software announcements and data applications I've noted over the past month or so. The ONNX Model Zoo is now available, providing a library of pre-trained state-of-the-art deep learning models in the ONNX format. In the 2018 IEEE Spectrum Top Programming Language rankings, Python takes the top spot and R ranks #7. Julia 1.0 has been released, marking the stabilization of the scientific computing language and promising forward compatibility. Google has announced Cloud AutoML, a beta service to train vision, text categorization, or language translation models from provided data.
Machine learning can become a robust analytical tool for vast volumes of data. The combination of machine learning and edge computing can filter out most of the noise collected by IoT devices, leaving only the relevant data to be analyzed by edge and cloud analytics engines. Advances in artificial intelligence have given us self-driving cars, speech recognition, effective web search, and facial and image recognition. Machine learning is the foundation of those systems. It is so pervasive today that we probably use it dozens of times a day without knowing it.
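The edge-side filtering described above can be sketched as a smoothing-plus-threshold pass that drops noise before anything is forwarded to a cloud analytics engine. A real deployment would typically use a trained model rather than fixed rules; this rule-based stand-in (with made-up readings, baseline, and tolerance) only illustrates the pipeline shape:

```python
def moving_average(readings, window=3):
    """Smooth raw sensor readings with a simple moving average."""
    smoothed = []
    for i in range(len(readings) - window + 1):
        smoothed.append(sum(readings[i:i + window]) / window)
    return smoothed

def filter_relevant(smoothed, baseline, tolerance):
    """Keep only readings that deviate meaningfully from the baseline;
    everything else is treated as noise and dropped at the edge."""
    return [r for r in smoothed if abs(r - baseline) > tolerance]

raw = [20.1, 19.9, 20.0, 20.2, 35.0, 34.8, 20.1, 19.8]
smoothed = moving_average(raw)
relevant = filter_relevant(smoothed, baseline=20.0, tolerance=2.0)
# only the readings around the 35-degree spike survive to be sent upstream
```

The payoff is bandwidth and compute: the steady-state readings never leave the device, and the cloud engine only sees the anomaly worth analyzing.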