If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In March, the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. A team led by Stanford computer science professor Andrew Ng and Google Fellow Jeff Dean showed their system images from 10 million randomly selected YouTube videos.
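The training procedure described above — repeatedly showing a network labeled examples and nudging its weights after each one — can be sketched in miniature. This is a toy illustration only, using a one-parameter logistic classifier on synthetic numbers in place of real images or sound waves; none of it reflects Google's actual pipeline.

```python
# Toy sketch of the supervised training loop described above: a tiny
# classifier is "blitzed" with labeled examples until it separates two classes.
# Synthetic data stands in for digitized images or sound waves.
import math
import random

random.seed(0)

# Class 0 inputs cluster near -1, class 1 inputs cluster near +1.
data = [(random.gauss(-1.0, 0.3), 0) for _ in range(100)] + \
       [(random.gauss(+1.0, 0.3), 1) for _ in range(100)]

w, b, lr = 0.0, 0.0, 0.5  # weight, bias, learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(50):
    random.shuffle(data)
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass: predicted probability of class 1
        grad = p - y             # gradient of the log-loss w.r.t. the logit
        w -= lr * grad * x       # update: nudge weights toward the correct label
        b -= lr * grad

accuracy = sum((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

After enough passes over the data, the classifier separates the two clusters almost perfectly — the same blitz-with-examples principle, scaled up by many orders of magnitude, underlies the speech and image systems discussed here.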
In November, Google researchers published a paper in JAMA showing that Google's deep learning algorithm, trained on a large data set of fundus images, can detect diabetic retinopathy with better than 90 percent accuracy. Just a couple of months ago, the company launched the Healthcare NExT initiative, which brings together artificial intelligence, cloud computing, research and industry partnerships. Last month, Alphabet-owned Verily launched the Project Baseline Study, a collaborative effort with Stanford Medicine and Duke University School of Medicine to amass a large collection of broad phenotypic health data in hopes of developing a well-defined reference of human health. "If the government did data quality and data sharing initiatives, it would be a lot different," Andrew Maas, chief scientist at Roam Analytics (a San Francisco-based machine learning analytics platform provider focused on life sciences) said at the Light Forum.
Each TPU carries four chips and delivers 180 trillion floating-point operations per second (180 teraflops). As if that were not enough, Google combined 64 of these TPUs over a patented high-speed network to create a machine learning supercomputer called a TPU pod. Remember, much of Google's real innovation has been in hardware patents for high-end cloud computing: chips, servers, and networking for its own data centers. Google has been unsuccessful in the social media space, but is now using machine learning to help users share photos, even suggesting whom to share them with. Google has search data, complete email conversation data, photos, and location data.
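The quoted figures imply a pod's aggregate throughput; a quick back-of-the-envelope check using only the numbers in the text:

```python
# Aggregate throughput of a TPU pod, from the figures quoted above.
TFLOPS_PER_TPU = 180   # each four-chip TPU: 180 teraflops
TPUS_PER_POD = 64      # 64 TPUs networked into one pod

pod_tflops = TFLOPS_PER_TPU * TPUS_PER_POD
pod_pflops = pod_tflops / 1000   # 1 petaflop = 1,000 teraflops

print(f"{pod_tflops} TFLOPS, i.e. about {pod_pflops:.1f} petaflops per pod")
```

That works out to 11,520 teraflops — roughly 11.5 petaflops of machine-learning compute per pod.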
Google does not plan to manufacture and sell the chip like Intel (intc) or AMD (amd), but instead will let companies rent access to the chip via Google's cloud computing service. Google's new chip comes amid fierce competition with cloud computing rivals like Amazon (amzn), Microsoft (msft), and IBM (ibm) that sell on-demand computing resources to businesses. The new chip performs two tasks central to artificial intelligence projects: training models on data, and using trained models to make sense of new data, a step known as inference, Dean said. Dean also said that Google would give the "top machine learning researchers" access to 1,000 free TPUs via a new cloud computing service for academics who are researching AI.
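The training/inference distinction Dean draws can be made concrete with a deliberately tiny model of my own (this is not the TPU API): training repeatedly adjusts parameters from data, while inference runs the frozen model forward on new inputs.

```python
# Minimal illustration of the two workloads: training changes parameters,
# inference only evaluates the trained model.
def forward(w, x):
    return w * x                  # inference: a forward pass, nothing changes

def train_step(w, x, y, lr=0.1):
    pred = forward(w, x)
    grad = 2 * (pred - y) * x     # gradient of squared error
    return w - lr * grad          # training: the parameter is updated

w = 0.0
for _ in range(100):              # fit y = 3x from the example (x=2, y=6)
    w = train_step(w, 2.0, 6.0)

print("learned weight:", round(w, 3))
print("inference on new input x=5:", forward(w, 5.0))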
As the rise of massively distributed computing power, decreased cost of data storage, and a proliferation of open-source frameworks turn conventional computing paradigms on their head, new and lucrative opportunities are being created to develop innovative artificial intelligence applications, writes Ed Chater, COO, Adbrain. Machine learning, a subset of AI, focuses on building computer programs that can determine 'patterns' in data. Artificial intelligence will radically transform industries including healthcare, finance, insurance, and entertainment, and have a profound impact on much more. With many of the technologies underpinning AI (compute, data storage, learning algorithms) becoming commoditised, the focus is shifting from excitement around the 'tech' potential of machine learning to practitioners actually building applications and putting them into production.
At the keynote address of this year's I/O developer conference, Google's CEO announced that the company will be selling AI computer chips, called Cloud Tensor Processing Units (TPUs), via the Google Cloud service. Bloomberg noted that Google created the chip to address the high cost of, and heavy demand for, the computing power that machine learning consumes in the company's data centers. The news of the chip comes along with the announcement of machine learning innovations across Google's products, reportedly including a new photo editing tool, features for Google Assistant, and a new web portal for the company's AI plays. Buyers will need to sign up for a Google cloud service, run their tasks and store their data on Google equipment, noted Bloomberg, in order to get the Cloud TPU chip.
There is an effort underway to standardize and improve access across all layers of the machine learning stack, including specialized chipsets, scalable computing platforms, software frameworks, tools, and ML algorithms. This is where public cloud services such as Amazon Web Services (AWS), Google Cloud Platform, Microsoft Azure, and others come in. Just like cloud computing ushered in the current explosion in startups, the ongoing build-out of machine learning platforms will likely power the next generation of consumer and business tools.
At Google's annual developer conference today, Pichai introduced a project called AutoML coming out of the company's Google Brain artificial intelligence research group. "This is a very exciting development," Pichai tells MIT Technology Review, in an e-mail. On an image-recognition task, the AutoML-designed system rivaled the best architectures designed by human experts. Researchers at Google's other AI research division, DeepMind, in academia, and at the Elon Musk-backed nonprofit OpenAI are exploring related concepts (see "AI Software Learns to Make AI Software").
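The core idea behind AutoML — software searching for good model designs instead of humans hand-crafting them — can be sketched in miniature. This toy version is far simpler than Google's approach (which uses a learned controller to propose neural architectures): here an outer loop randomly proposes training configurations for a linear model, an inner loop trains each candidate, and validation error picks the winner. All data and configuration choices below are invented for illustration.

```python
# Toy "AutoML" sketch: an outer search loop proposes configurations,
# an inner loop trains each one, and validation loss selects the best.
import random

random.seed(1)

# Data drawn from y = 2x + 1 with a little noise; the searched
# "design" here is just (learning rate, number of training passes).
train = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in [i / 10 for i in range(20)]]
val = [(x, 2 * x + 1) for x in [0.05, 0.55, 0.95]]

def train_model(lr, passes):
    w, b = 0.0, 0.0
    for _ in range(passes):
        for x, y in train:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def val_loss(w, b):
    return sum(((w * x + b) - y) ** 2 for x, y in val) / len(val)

best = None
for _ in range(20):                                    # outer search loop
    cfg = (random.choice([0.001, 0.01, 0.1]),          # candidate learning rate
           random.choice([10, 100]))                   # candidate training passes
    loss = val_loss(*train_model(*cfg))                # inner training loop
    if best is None or loss < best[0]:
        best = (loss, cfg)

print("best config:", best[1], "validation loss:", round(best[0], 4))
```

Replace "learning rate and passes" with "layer types, widths, and connections" and you have the flavor of neural architecture search, at a vastly smaller scale.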
Until recently, big companies focused on adding AI capabilities to their own products -- think about your smartphone transcribing your voice and Facebook identifying the faces in your photos. Tests show that Google's TPU chips can execute machine learning code up to 30 times faster than conventional computer chips. Amazon currently leads the cloud computing market with its Amazon Web Services, and it is offering developers a rival suite of machine learning tools. In their rush to win the cloud computing war, these technology giants are making ever more powerful AI capabilities available to anyone who wants to use them.
The coming era will be defined by machine and deep learning and artificial intelligence, built on top of the mobile/cloud model. Computing has moved from massive mainframe access by terminals, to databases and personal computers, to the cloud and mobile devices. As Microsoft has shown, machine learning models can be moved to the edge by bringing artificial intelligence capabilities that used to only be able to run in the cloud to the device. This is done by building some compute into edge devices (such as CPUs and GPUs, as we have seen with the maturation of IoT) and by bringing cloud computing capabilities to the edge through virtual machines and Docker-style containerization.
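One common technique for squeezing cloud-trained models onto constrained edge devices is post-training quantization: storing 32-bit floating-point weights as 8-bit integers plus a scale factor, shrinking the model roughly fourfold. The sketch below is my own illustration of the general idea, not any vendor's specific method; the weight values are invented.

```python
# Post-training quantization sketch: map float weights to int8 plus a scale,
# a standard trick for shrinking models destined for edge devices.
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127   # map the largest weight to int8 range
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.75, -1.5, 0.002, 3.0, -0.25]        # hypothetical float32 weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))

print("int8 weights:", q)
print("max reconstruction error:", round(max_err, 4))
```

Each weight now fits in one byte instead of four, and the reconstruction error stays below half a quantization step — usually a tolerable trade for on-device inference.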