If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
A new $240 million center at MIT may help advance the field of artificial intelligence by developing novel devices and materials to power the latest machine-learning algorithms. The project, announced by IBM and MIT today, will research new approaches in deep learning, a technique in AI that has led to big advances in areas such as machine vision and voice recognition. But it will also explore completely new computing devices, materials, and physical phenomena, including efforts to harness quantum computers--exotic but potentially very powerful new machines--to make AI even more capable. And it will study the economic impact of artificial intelligence and automation, a hugely significant issue for society.
Each TPU has four chips that together deliver 180 teraflops (trillion floating-point operations per second). As if that were not enough, Google combined 64 of these TPUs over a patented high-speed network to create a machine-learning supercomputer called a TPU pod. Remember, Google's real innovation has been in hardware for high-end cloud computing: chips, servers, and networking for its own data centers. Google has been unsuccessful in the social media space, but is now using machine learning to help users share photos, even suggesting whom to share them with. Google has search data, complete email conversation data, photos, and location data.
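The pod figure follows directly from the per-board number quoted above. A quick back-of-the-envelope check:

```python
# Sanity check of the TPU pod throughput: each TPU board delivers
# 180 teraflops, and a pod links 64 of them over a high-speed network.
TFLOPS_PER_TPU = 180
TPUS_PER_POD = 64

pod_tflops = TFLOPS_PER_TPU * TPUS_PER_POD
print(f"TPU pod: {pod_tflops} teraflops ~ {pod_tflops / 1000:.1f} petaflops")
# -> TPU pod: 11520 teraflops ~ 11.5 petaflops
```

That works out to roughly 11.5 petaflops of aggregate machine-learning compute per pod.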
There is an effort underway to standardize and improve access across all layers of the machine learning stack, including specialized chipsets, scalable computing platforms, software frameworks, tools, and ML algorithms. This is where public cloud services such as Amazon Web Services (AWS), Google Cloud Platform, Microsoft Azure, and others come in. Just like cloud computing ushered in the current explosion in startups, the ongoing build-out of machine learning platforms will likely power the next generation of consumer and business tools.
Until recently, big companies focused on adding AI capabilities to their own products -- think of your smartphone transcribing your voice or Facebook identifying the faces in your photos. Tests show that these chips can execute machine-learning code up to 30 times faster than conventional computer chips. Amazon currently leads the cloud computing market with Amazon Web Services, and it offers developers a rival suite of machine-learning tools. In their rush to win the cloud computing war, these technology giants are making ever more powerful AI capabilities available to anyone who wants to use them.
With major technology companies and startups seriously embracing cloud strategies, now is the perfect time to attend the 20th International @CloudExpo / @ThingsExpo, June 6-8, 2017, at the Javits Center in New York City, NY, and October 31 - November 2, 2017, at the Santa Clara Convention Center, CA. Join conference chair Roger Strukhoff (@IoT2040) for three days of intense Enterprise Cloud and 'Digital Transformation' discussion, including Big Data's indispensable role in IoT, Smart Grids, the Industrial Internet of Things (IIoT), Wearables and Consumer IoT, as well as Digital Transformation in vertical markets. Attendees will find fresh content in a new FinTech track, which incorporates machine learning, artificial intelligence, deep learning, and blockchain. The conference's Call For Papers for speaking opportunities is now open.
In other words, he hopes the new chip and the new service will set Google's cloud business apart from services offered by its main rivals, including Amazon and Microsoft, the unnamed competitive threat underlying his I/O keynote. Between its two AI labs--Google Brain, based at company headquarters in Silicon Valley, and DeepMind, a London AI startup Google purchased a little more than three years ago--Google is leading the new wave of artificial intelligence research and development so rapidly changing entire industries and economies. But the company believes cloud computing--where computing power is rented over the internet to businesses and software developers--could one day bring in far more. Google built its new chip as a better way of serving its own AI services, most notably Google Translate, says Jeff Dean, the uber-engineer who oversees Google Brain, the company's main AI lab.
Dubbed TPU 2.0 or the Cloud TPU, the new chip is a sequel to a custom-built processor that has helped drive Google's own AI services, including its image recognition and machine translation tools, for more than two years. Amazon and Microsoft offer GPU processing via their own cloud services, but they don't offer bespoke AI chips for both training and executing neural networks. Companies and developers undertake this training with help from GPUs, sometimes thousands of them, running inside the massive computer data centers that underpin the world's internet services. Training on traditional CPU processors--the generalist chips inside the computer servers that drive online software--just takes too much time and electrical power.
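The reason training overwhelms general-purpose CPUs is that neural networks are dominated by large matrix multiplications, and training repeats them, forward and backward, over many batches. A minimal sketch of the arithmetic involved (the layer sizes here are hypothetical, chosen only for illustration):

```python
import numpy as np

# A single dense layer is dominated by one large matrix multiplication.
# GPUs and TPUs accelerate exactly this kind of workload.
batch, d_in, d_out = 256, 4096, 4096

rng = np.random.default_rng(0)
x = rng.standard_normal((batch, d_in)).astype(np.float32)
w = rng.standard_normal((d_in, d_out)).astype(np.float32)

y = x @ w  # one forward pass through the layer

# A matmul of these shapes costs about 2 * batch * d_in * d_out FLOPs.
forward_flops = 2 * batch * d_in * d_out

# The backward pass needs two more matmuls of the same size (gradients
# w.r.t. inputs and w.r.t. weights), so a training step costs roughly
# three times the FLOPs of inference for this layer -- and training
# repeats it over thousands or millions of batches.
training_flops = 3 * forward_flops
print(f"forward: {forward_flops/1e9:.1f} GFLOPs, "
      f"training step: {training_flops/1e9:.1f} GFLOPs")
```

Multiply that per-step cost across an entire deep network and a full training run, and the appeal of specialized chips over generalist CPUs becomes clear.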
The global technology sector is on the brink of an explosion of data, according to top executives at Intel, who say the growth of cloud computing will be accelerated by new technologies like artificial intelligence (AI), the Internet of Things, virtual reality, drones, robots and autonomous vehicles. "We're shifting from a server and CPU company to a data center company that builds high performance racks," said Intel CEO Brian Krzanich. A year ago Intel projected enterprise server CPU sales would see "mid single digit" annual growth rate. Those sales are shifting to cloud service providers, which are growing at a 24 percent annual rate, and communications service providers, where annual growth is averaging 19 percent.
Providing hyperscale data centers with a fast, flexible path for AI, the new HGX-1 hyperscale GPU accelerator is an open-source design released in conjunction with Microsoft's Project Olympus. It will enable cloud-service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing. NVIDIA is also joining the Open Compute Project to help drive AI and innovation in the data center.
The theory was solid, but artificial intelligence systems learn from data--lots and lots of data. It's only been in the past few years that we've had the massive volumes of data and cheap computing power we needed to train machines to become intelligent. Google, Facebook, Microsoft--these companies can make powerful neural networks because they can train them with tons and tons of highly accurate data. That's pretty sophisticated stuff, but there are a lot of types of data and applications for machine learning and deep learning that are appropriate to different industries and problems.
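The phrase "learn from data" can be made concrete with a toy model. The sketch below, assuming nothing from the article beyond the general idea, trains a logistic-regression classifier by gradient descent on synthetic data; every size, seed, and learning rate is an illustrative choice:

```python
import numpy as np

# Minimal illustration of "learning from data": fit a logistic
# regression to a synthetic, linearly separable dataset.
rng = np.random.default_rng(42)
n = 1000
X = rng.standard_normal((n, 2))
true_w = np.array([2.0, -3.0])
y = (X @ true_w > 0).astype(np.float64)  # labels come from a known rule

w = np.zeros(2)   # the model starts out knowing nothing
lr = 0.1

def loss(w):
    """Cross-entropy loss of the current weights on the data."""
    p = 1 / (1 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

initial = loss(w)
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= lr * X.T @ (p - y) / n  # gradient step: learn from the data

print(f"loss: {initial:.3f} -> {loss(w):.3f}")
```

With only two parameters and a thousand examples this runs instantly; the point of the article is that modern deep networks apply the same loop to millions of parameters and billions of examples, which is exactly why the data volumes and cheap compute of the past few years mattered.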