On Tuesday, the White House released a chilling report on AI and the economy. It began by positing that "it is to be expected that machines will continue to reach and exceed human performance on more and more tasks," and it warned of massive job losses. Yet to counter this threat, the report makes a recommendation that may sound absurd: we have to increase investment in AI. The risk to productivity and to the US's competitive advantage is too high to do anything but double down. This approach not only makes sense; it is the only approach that makes sense.
Today Mellanox announced that iFLYTEK, one of China's leading intelligent speech and language technology companies, has chosen Mellanox's end-to-end 25G and 100G Ethernet solutions, based on ConnectX adapters and Spectrum switches, for its next generation machine learning center. The partnership will enable iFLYTEK to achieve a speech recognition rate of 97 percent. "Mellanox's solution has enabled iFLYTEK to build a next generation machine learning center that will accelerate our application performance and meet our future needs," said Dr. Zhiguo Wang, executive vice president of iFLYTEK Research Institute. "Moreover, we leverage the scalability of Mellanox Ethernet solutions to grow our compute and storage needs in the most efficient manner." To support a diverse and growing set of applications, iFLYTEK requires a high-performance, efficient data center network that is both compatible with the company's current infrastructure and scalable for future computing and storage requirements.
As an example, mobile network operators are increasing their investment in big data analytics and machine learning technologies as they transform into digital application developers and cognitive service providers. With a long history of handling huge datasets, and with their path now led by the IT ecosystem, mobile operators will devote more than $50 billion to big data analytics and machine learning technologies through 2021, according to the latest global market study by ABI Research. Machine learning can deliver benefits across telecom provider operations, with financially oriented applications - including fraud mitigation and revenue assurance - currently making the most compelling use cases. Predictive machine learning applications for network performance optimization and real-time management will introduce more automation and more efficient resource utilization.
With growing interest in neural networks and deep learning, individuals and companies are claiming ever-increasing adoption rates of artificial intelligence in their daily workflows and product offerings. Coupled with the breakneck speed of AI research, the new wave of popularity shows a lot of promise for solving some of the harder problems out there. That said, I feel that this field suffers from a gulf between appreciating these developments and subsequently deploying them to solve "real-world" tasks. A number of frameworks, tutorials and guides have popped up to democratize machine learning, but the steps they prescribe often don't align with the fuzzier problems that need to be solved. This post is a collection of questions (with some, possibly incorrect, answers) that are worth thinking about when applying machine learning in production.
Mobile internet applications are evolving rapidly. Cognitive computing technologies will inspire telecom service providers to profoundly change their business model in new creative ways. Deploying intelligent voice control apps on smartphones was just the beginning of this trend.
In more complex situations, a machine learning algorithm may build a model based on big data from transactions across the entire population of users to improve the accuracy of fraud detection. Developers can extract maximum performance from Intel hardware by using Intel's libraries of optimized math kernels and algorithms: the Intel Data Analytics Acceleration Library (Intel DAAL) and the Intel Math Kernel Library (Intel MKL). By using the Intel-optimized frameworks supported by Intel MKL, I've seen customers get performance on deep learning network topologies, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), that is an order of magnitude greater than running these frameworks unoptimized on commodity CPUs. For qualified organizations, we can provide test and development platforms based on the Intel Xeon Phi processor, software, tools and training, as well as reference architectures and blueprints to accelerate the deployment of enterprise-grade solutions.
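The excerpt doesn't show what such a transaction-based fraud model looks like in practice. As a rough illustration only - a toy of my own, not Intel's code, with an invented dataset and an invented fraud rule - a minimal logistic-regression classifier trained on synthetic transactions might look like this:

```python
import math
import random

random.seed(0)

# Invented transaction data: (normalized amount, foreign-transaction flag).
# In this toy dataset, large foreign transactions tend to be fraudulent.
def make_dataset(n=400):
    data = []
    for _ in range(n):
        amount = random.uniform(1, 1000)
        foreign = random.random() < 0.3
        fraud = 1 if (foreign and amount > 500) else 0
        data.append(((amount / 1000.0, 1.0 if foreign else 0.0), fraud))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=300):
    # Plain stochastic gradient descent on the logistic loss.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5

data = make_dataset()
w, b = train(data)
accuracy = sum(predict(w, b, x) == bool(y) for x, y in data) / len(data)
```

A production system would of course use far richer features and an optimized library rather than hand-rolled gradient descent, but the shape of the problem - learn a decision boundary from labeled transactions, then score new ones - is the same.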
Machine learning software is set to transform the way IT professionals manage their infrastructures in large Australian organisations by seeking out potential problems before they affect any single user, says Bede Hackney, ANZ managing director at Nimble Storage. Machine learning replaces traditional IT systems that require constant monitoring of each component, which means technicians don't have to waste time working out where a fault is and forming a solution. According to Hackney, the 'app-data gap' is the challenge IT management faces when gaps between applications and data stores become a problem because of the many differing IT infrastructure components. "A major app-data gap can often disrupt data delivery, degrade worker productivity, create customer dissatisfaction and damage a company's overall speed of business. However, it can be difficult to quickly find a solution because the factors leading to application slowdowns can come from a range of issues across the infrastructure stack," Hackney says.
With virtualized environments, performance issues can be hard to pinpoint. IT departments can find it difficult to spot whether the cause lies in the application, network, storage, or virtualization layer of the infrastructure. Software optimization specialist SIOS is bringing machine learning to bear on this problem with the latest release of SIOS iQ, its analytics software for VM environments. For the first time, IT staff can easily identify and resolve the root causes of performance issues based on analysis of both the VMware infrastructure and the SQL Server application environment. Other advances enable users to accurately predict and forecast performance and capacity utilization, improve efficiency by identifying and resizing under- and over-provisioned VMs, and save datastore capacity by instantly identifying rogue disk files (VMDKs).
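The article doesn't describe how SIOS iQ flags misprovisioned VMs internally; the real product applies learned behavioral analysis. A deliberately naive version of the underlying idea - with VM names, numbers, and thresholds all invented for illustration - would be to compare average utilization against a sizing band:

```python
# Invented utilization report: VM name -> (provisioned vCPUs, average vCPUs used).
vms = {
    "web-01": (8, 0.6),
    "db-01": (4, 3.8),
    "batch-01": (2, 1.0),
}

def classify(provisioned, used, low=0.2, high=0.85):
    """Flag VMs whose average utilization falls outside the [low, high] band."""
    ratio = used / provisioned
    if ratio < low:
        return "over-provisioned"   # paying for capacity it rarely uses
    if ratio > high:
        return "under-provisioned"  # running hot; a resize candidate
    return "right-sized"

report = {name: classify(p, u) for name, (p, u) in vms.items()}
```

The value of a machine learning approach over fixed thresholds like these is that the "normal" band can be learned per workload and per time of day rather than hard-coded.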
At the tail end of Google's keynote speech at its developer conference Wednesday, Sundar Pichai, Google's CEO, mentioned that Google had built its own chip for machine learning jobs, which it calls a Tensor Processing Unit, or TPU. The boast was that the TPU offered "an order of magnitude" improvement in performance per watt for machine learning. Any company building a custom chip for a dedicated workload is worth noting, because building a new processor is a multimillion-dollar effort once you consider hiring a design team, the cost of getting a chip to production, and building the hardware and software infrastructure for it. However, Google's achievement with the TPU may not be as earth-shattering or innovative as the press coverage might suggest. To understand what Google has done, it's important to understand a bit about how machine learning works and the demands it makes on a processor.
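A rough way to see those demands (my own back-of-the-envelope illustration, not from the article): most of a neural network's cost is dense multiply-accumulate work, so the operation count grows with the product of layer sizes. The network dimensions below are invented for the sake of the arithmetic:

```python
# A fully connected layer with n_in inputs and n_out outputs costs roughly
# 2 * n_in * n_out floating-point operations per inference pass
# (one multiply and one add per weight).
def layer_flops(n_in, n_out):
    return 2 * n_in * n_out

# Invented small network: 784 -> 256 -> 64 -> 10 (MNIST-sized).
layers = [(784, 256), (256, 64), (64, 10)]
total = sum(layer_flops(n_in, n_out) for n_in, n_out in layers)
# Hundreds of thousands of multiply-adds per input, even for a tiny network --
# which is why chips for this workload are judged on throughput per watt.
```

Regular, repetitive arithmetic like this is exactly what a fixed-function accelerator can do more efficiently than a general-purpose CPU, which is the bet the TPU represents.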
Having made the improbable jump from the game console to the supercomputer, GPUs are now invading the datacenter. This movement is led by Google, Facebook, Amazon, Microsoft, Tesla, Baidu and others, who have quietly but rapidly shifted their hardware philosophy over the past twelve months. Each of these companies has significantly upgraded its investment in GPU hardware and, in doing so, has put legacy CPU infrastructure on notice. The driver of this change has been deep learning and machine intelligence, but the movement continues to flow downstream into more and more enterprise-grade applications, led in part by the explosion of data. Behind this shift is an evolving perspective on how computing should operate - one with a particular emphasis on massive quantities of data, machine learning, mathematics, analytics and visualization.