If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
On Wednesday at its annual developers conference, the tech giant announced the second generation of its custom chip, the Tensor Processing Unit, optimized to run its deep learning algorithms. For comparison, Nvidia's latest-generation data center GPU, the Tesla V100, delivers 120 teraflops of performance, according to the company. Through Google Cloud, anybody can rent Cloud TPUs, just as they can already rent GPUs on the platform. "Google's use of TPUs for training is probably fine for a few workloads for the here and now, but given the rapid change in machine learning frameworks, sophistication, and depth, I believe Google is still doing much of their machine learning production and research training on GPUs," said tech analyst Patrick Moorhead.
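To put a figure like 120 teraflops in perspective, a quick back-of-the-envelope calculation helps; the workload size below is an illustrative assumption, and peak rates are rarely sustained in practice:

```python
# Back-of-the-envelope: time for one large matrix multiplication at a
# claimed peak rate. Illustrative only; sustained throughput is lower.
n = 16_384                      # matrix dimension (assumed workload)
flops_per_matmul = 2 * n ** 3   # a dense (n x n) matmul costs ~2n^3 ops

peak_flops = 120e12             # the quoted 120 teraflops (peak)
seconds_at_peak = flops_per_matmul / peak_flops

print(f"{flops_per_matmul / 1e12:.1f} TFLOPs per matmul, "
      f"~{seconds_at_peak * 1e3:.1f} ms at peak")
```

Numbers like this are why training throughput comparisons hinge so much on how close each chip gets to its peak on real models.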
The computer systems we use today make it easy for programmers to mitigate event latencies at the nanosecond and millisecond time scales (such as DRAM accesses at tens or hundreds of nanoseconds and disk I/Os at a few milliseconds), but they offer little support for microsecond (μs)-scale events. For instance, when a read() system call to a disk is made, the operating system kicks off the low-level I/O operation but also performs a software context switch to a different thread to make use of the processor during the disk operation. Likewise, various mechanisms (such as interprocessor interrupts, data copies, context switches, and core hops) all add overheads, again in the microsecond range. Finally, queueing overheads (in the host, application, and network fabric) can incur additional latencies, often on the order of tens to hundreds of microseconds.
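The gap between these time scales is easy to observe directly. A minimal sketch (Unix-only, and timings will vary by machine) comparing a DRAM-resident access against a system call that crosses into the kernel:

```python
import os
import tempfile
import time

def avg_ns(fn, reps=1000):
    """Average wall-clock time of fn() in nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(reps):
        fn()
    return (time.perf_counter_ns() - start) / reps

# Nanosecond scale: a DRAM-resident list access.
data = list(range(1000))
mem_ns = avg_ns(lambda: data[500])

# Sub-microsecond to microsecond scale: a read that enters the kernel
# (served from the page cache here, so this measures syscall overhead,
# not disk latency).
fd, path = tempfile.mkstemp()
os.write(fd, b"x")
read_ns = avg_ns(lambda: os.pread(fd, 1, 0))
os.close(fd)
os.remove(path)

print(f"memory access ~{mem_ns:.0f} ns, pread syscall ~{read_ns:.0f} ns")
```

Even with the data cached, the kernel crossing costs noticeably more than the memory access; an actual disk or network event adds the queueing and device latencies discussed above on top of that.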
The 2017 New Rules for the Digital Age report from Deloitte found that only 5 percent of the companies surveyed said they have strong digital leadership development programs, while a clear majority (65 percent) said they have no significant program to drive digital leadership skills. The report also points to the growing use of organizational network analysis (ONA) as a tool to monitor and quickly identify performance issues in the company. "The concept of a 'career' is being shaken to its core, driving companies toward 'always-on' learning experiences that allow employees to build skills quickly, easily and on their own terms," the report states. In preparing this year's report, Deloitte surveyed over 10,000 HR and business leaders across 140 countries.
It introduces Walter, the latest synthetic android, with intelligence powered by AMD's Ryzen and Radeon processors and manufactured by the film's fictional corporation, Weyland-Yutani. We are developing high-performance compute engines and enabling CPU and GPU processors to support current and evolving AI algorithm models. The high computational capacity of AMD GPUs makes them a great match for machine learning, which requires processing large amounts of data to train neural networks. We've given developers more access to our GPU hardware than ever before with our GPUOpen initiative, and we have the Radeon Open Compute software platform to accelerate machine learning and deep learning frameworks and applications.
Powered by NVIDIA Tesla P100 GPUs and NVIDIA's NVLink high-speed multi-GPU interconnect technology, the HGX-1 comes as AI workloads, from autonomous driving and personalized healthcare to superhuman voice recognition, are taking off in the cloud. With eight Tesla P100 GPUs in each chassis, it features an innovative switching design, based on NVLink interconnect technology and the PCIe standard, that enables a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on a single HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations to meet virtually any workload. The HGX-1 Hyperscale GPU Accelerator reference design is highly modular, allowing it to be configured in a variety of ways to optimize performance for different workloads.
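The modularity argument can be sketched in miniature: one physical eight-GPU chassis can be carved into several logical machine shapes. The function and instance shapes below are illustrative, not NVIDIA's or any cloud provider's actual API:

```python
# Sketch of the modular idea: one physical chassis, many logical machine
# shapes. Names and shapes are illustrative assumptions.
CHASSIS_GPUS = 8  # one HGX-1-style chassis holds eight Tesla P100s

def carve_instances(gpus_per_instance):
    """How many equal instances of a given GPU count fit in one chassis."""
    if CHASSIS_GPUS % gpus_per_instance != 0:
        raise ValueError("instance size must divide the chassis evenly")
    return CHASSIS_GPUS // gpus_per_instance

for size in (1, 2, 4, 8):
    print(f"{carve_instances(size)} instance(s) of {size} GPU(s) each")
```

The switching fabric is what makes this flexibility cheap: the provider reconfigures CPU-to-GPU attachments rather than stocking a different server SKU for each instance shape.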
Last November, when Google announced that machine learning research luminary Fei-Fei Li, Ph.D., would join its Google Cloud Platform group, a lot was known about her academic work. At present, most enterprises do not have the technical capabilities to build and train custom machine learning models that would utilize the Machine Learning Engine. These companies can instead apply machine learning with Google's pre-trained models, using APIs to add capabilities such as understanding natural language and images to their applications. Li drew on her experience building the open-source ImageNet data set of over 15 million labeled images, which enabled advances in deep learning research.
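The appeal of the pre-trained-model route is that the integration surface is just an HTTP request. A minimal sketch of building such a request; the field names and schema here are simplified illustrations, not Google's actual Cloud Vision contract, which is defined in the official API reference:

```python
import base64
import json

# Hypothetical request builder for a pre-trained image-labeling API.
# Field names and structure are illustrative assumptions only.
def build_label_request(image_bytes, max_results=5):
    return json.dumps({
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "features": [{"type": "LABEL_DETECTION",
                      "max_results": max_results}],
    })

payload = build_label_request(b"<raw image bytes here>")
print(payload[:72])
```

No model building, training data, or serving infrastructure is involved on the caller's side, which is exactly why this path suits enterprises without in-house ML teams.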
Providing hyperscale data centers with a fast, flexible path for AI, the new HGX-1 hyperscale GPU accelerator is an open-source design released in conjunction with Microsoft's Project Olympus. It will enable cloud-service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing. NVIDIA is also joining the Open Compute Project to help drive AI and innovation in the data center. Certain statements in this press release, including, but not limited to, statements as to the performance, impact and benefits of the HGX-1 hyperscale GPU accelerator and NVIDIA joining the Open Compute Project, are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different from expectations.
Nvidia has led much of this with its TK1 and TX1 modules; now, with the release of the new Jetson TX2, the AI capabilities we have access to have just doubled. Nvidia is focusing the high-performance, low-power board on "AI at the edge": building artificial intelligence processing capabilities into products directly, rather than sending data to cloud-based supercomputers. For the release event, Nvidia brought 18 demos from various groups to show how they're using the TX architecture, including an onstage demo from Cisco of its new Spark Board collaboration tool. The Spark Board, a large flatscreen display, connected the Cisco rep with his colleagues in Norway, who then showed a new camera device for the board that automatically recognizes and labels the names of the people in a conference, and automatically crops in on smaller groups to eliminate the "large empty conference room" aesthetic we've become accustomed to with teleconferencing.
"AI has never been more widely used, with many of the largest technology companies providing integrated AI platforms. This democratisation of AI is allowing small businesses to more easily add advanced data analytics and machine learning to their processes." "Empowering businesses to make better decisions because of AI, improving their process efficiency, increasing quality and accuracy, and driving performance enhancements, will deliver significant cost savings," he predicts. Discussions around how tech developments can benefit your small business are taking place at QuickBooks Connect, a two-day conference for SMEs and accountants looking to network, collaborate and grow.
The startup employs machine learning to help balance performance, availability and cost for enterprise cloud computing. The company combs through myriad cloud data sources to ensure that a company's infrastructure is optimized for its overarching business priorities. Different types of cloud infrastructure data are created at different time intervals: some hourly, others daily, and so on. Using YotaScale, enterprises like Apigee and Zenefits can ideally rely on machines to manage their cloud computing needs, taking a load off cloud and DevOps teams.
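Aligning metrics that arrive at different granularities is a small but concrete part of that problem. A minimal sketch (metric names and figures are hypothetical, not YotaScale's actual data model), rolling hourly samples up to the daily grain before combining them:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical inputs: hourly compute-cost samples and a daily storage cost.
hourly_cost = {datetime(2017, 5, 1, h): 1.5 for h in range(24)}  # $/hour
daily_storage = {datetime(2017, 5, 1): 12.0}                     # $/day

def daily_totals(hourly, daily):
    """Roll hourly samples up to days, then merge in daily-grain metrics."""
    totals = defaultdict(float)
    for ts, cost in hourly.items():
        totals[ts.date()] += cost
    for ts, cost in daily.items():
        totals[ts.date()] += cost
    return dict(totals)

print(daily_totals(hourly_cost, daily_storage))
```

Normalizing everything to a common grain like this is a prerequisite before any model can reason about cost against business priorities.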