If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
It seems that even the biggest hyperscale platform developers, who have long touted software-defined architectures as the key to computing nirvana, are starting to learn a cardinal rule of infrastructure: no matter how much you try to abstract it, basic hardware still matters. A key example is Google's Tensor Processing Unit (TPU), which the company designed specifically for machine learning and other crucial workloads that were starting to push the limits of available CPUs and GPUs. In fact, the company says that without the TPU, it was looking at doubling its data center footprint in order to support applications like voice recognition and image search. The TPU is custom-designed to work with the TensorFlow software library, generating results 15 to 30 times faster than state-of-the-art Intel Haswell or Nvidia K80 devices. This may seem like a harbinger of bad times ahead for Intel and Nvidia, but the broader picture is a bit more muddled.
The public cloud division of Chinese ecommerce company Alibaba Group today is introducing new artificial-intelligence (AI) services targeting two specific industries: health care and manufacturing. Alibaba Cloud is touting an ET Medical Brain and an ET Industrial Brain, each of which encompasses a number of services. The Industrial Brain will give companies tools for monitoring the production process, improving energy efficiency, and predicting when maintenance will be needed. Also today, Alibaba is announcing the launch of version 2.0 of its PAI machine learning service. Alibaba introduced the original PAI in 2015.
Furthermore, the existing category leaders, which drive billions of dollars of compute-heavy workload revenue in the legacy on-premises high performance computing (HPC) market, are facing the innovator's dilemma: they need to reinvent their entire business to provide effective Big Compute solutions, which creates a unique opportunity for the most innovative companies to become category leaders. Just as Big Data removed constraints on data and transformed major enterprise software categories, Big Compute eliminates constraints on compute hardware, allowing computational workloads to scale seamlessly on workload-optimized infrastructure configurations without sacrificing performance. A comprehensive Big Compute stack now enables frictionless scaling, application-centric compute hardware specialization, and performance-optimized workloads for both software developers and end users. Specifically, Big Compute transforms a broad set of full-stack software services on top of specialty hardware into a software-defined layer, which puts programmatic high performance computing capabilities at your fingertips, or, more likely, behind back-end function evaluations in the software you touch every day.
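The "programmatic compute" idea above can be illustrated with a minimal, hedged sketch: the caller fans a batch of independent function evaluations out to a pool of workers without knowing anything about the hardware underneath. Here Python's standard-library `ProcessPoolExecutor` stands in for a cloud scheduler; the `simulate` kernel and job parameters are invented for illustration and do not correspond to any specific Big Compute product.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(params):
    """A compute-heavy kernel; a toy sum of squares stands in for a real solver."""
    n, scale = params
    return scale * sum(i * i for i in range(n))

def run_big_compute(jobs, max_workers=4):
    """Fan a batch of independent evaluations out across workers.

    In a real Big Compute stack, the executor would be a cloud scheduler
    provisioning workload-optimized machines; the calling code would not change.
    """
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(simulate, jobs))

if __name__ == "__main__":
    # Three independent jobs, evaluated in parallel.
    print(run_big_compute([(10, 1), (10, 2), (100, 1)]))
```

The design point is that scaling from 3 jobs to 3 million is a change to the backend, not to the application code, which is exactly the "software-defined layer" the paragraph describes.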
Technology professionals are changing the rules of doing business: providing key data insights for informed decision-making, reimagining customer interaction, securing customer data, and enhancing operational scalability. Third-party platforms such as OpenStack help increase available computing power and support reactive microservices, compartmentalizing workloads into a scalable, resilient environment where software can be continuously deployed using cloud computing tools.
From types of machine intelligence to a tour of algorithms, Frank Chen (head of a16z's deal, research, and investing team) walks us through the basics (and beyond) of AI and deep learning in this enormously popular slide presentation/video. Some of the most exciting technology innovations are now happening at the infrastructure level, changing everything from how new tech is created to how it is sold. Understanding these innovations not only helps build better products, but also helps build moats that protect software companies against competitors eating away at their margins. How does one create network effects in different businesses?
Now, Advanced Micro Devices (AMD) is preparing to flex its computing prowess with the next big thing in the server market: machine learning and artificial intelligence. Its new machine learning platform is called "Radeon Instinct." The upcoming product line is said to provide server designers and developers a compelling set of infrastructure for machine learning workloads. Radeon Instinct combines hardware and software components to deliver a full machine intelligence platform. AMD wants industries such as financial services, life sciences, and cloud providers that are now investing in machine learning solutions and infrastructure to use Radeon Instinct for their computing requirements.
Cloud Machine Learning (ML) Platforms: Technologies like Azure Machine Learning, AWS Machine Learning, and the upcoming Google Cloud Machine Learning enable the creation of machine learning models using a specific technology. AI Cloud Services: Technologies like IBM Watson, Microsoft Cognitive Services, and Google's Cloud Vision and Natural Language APIs abstract complex AI or cognitive computing capabilities behind simple API calls.
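To make the "simple API call" point concrete, here is a hedged sketch of building such a request. The body shape is modeled on the style of Google Cloud Vision's REST `images:annotate` call, but treat the endpoint, field names, and authentication details as illustrative assumptions rather than a definitive client; no network call is made here.

```python
import base64
import json

# Placeholder endpoint in the style of a cloud vision REST API (assumption).
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_label_request(image_bytes, max_results=5):
    """Build a JSON body asking the service to label the objects in an image."""
    return {
        "requests": [{
            # Images are typically sent inline as base64-encoded content.
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

body = build_label_request(b"...raw image bytes...")
payload = json.dumps(body)  # POST this to ENDPOINT with an API key to get labels back
```

The appeal of these services is visible in the sketch: the hard part (the model, the GPUs, the training data) lives behind the endpoint, and the application only assembles JSON.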
Machine learning and artificial intelligence have arrived in the data center, changing the face of the hyperscale server farm as racks begin to fill with ASICs, GPUs, FPGAs and supercomputers. These technologies provide more computing horsepower to train machine learning systems, a process that involves enormous amounts of data-crunching. The end goal is to create smarter applications and improve the services you already use every day. "Artificial intelligence is now powering things like your Facebook News Feed," said Jay Parikh, Global Head of Engineering and Infrastructure for Facebook. "It is helping us serve better ads."
The cloud computing market is a race vastly dominated by four companies, Amazon, Microsoft, Google and IBM, with a few other platforms holding traction in specific regional markets, such as AliCloud in China. Until now, cloud platforms were not required to provide the runtime for IoT or mobile applications, but rather the services that enable the backend capabilities of those solutions. AI applications, by contrast, require not only sophisticated backend services but also a very specific runtime optimized for the GPU-intensive requirements of AI workloads.