At the keynote address of this year's I/O developer conference, Google's CEO announced that the company will sell AI computer chips, called Cloud Tensor Processing Units (TPUs), through its Google Cloud service. Bloomberg noted that Google created the chip to address the high cost of, and heavy demand for, the computing power that machine learning consumed in the company's data centers. The news of the chip comes alongside announcements of machine learning innovations across Google's products, reportedly including a new photo editing tool, features for Google Assistant, and a new web portal for the company's AI efforts. To get the Cloud TPU chips, Bloomberg noted, buyers will need to sign up for Google's cloud service and run their tasks and store their data on Google equipment.
However, since it has become apparent that a huge amount of value can be locked away in this unstructured data, great efforts have been made to create applications capable of understanding it--for example, visual recognition and natural language processing. Recently there has been a big push to develop systems that can process data and offer insights in real time (or near-real time), and advances in computing power, along with techniques such as machine learning, have made this a reality in many applications. Spark is another open-source framework like Hadoop (discussed in my Part 1 post), but it is more recently developed and better suited to cutting-edge Big Data tasks involving real-time analytics and machine learning. Visualization, a subfield of reporting (see above), is now often an automated process, with visualizations customized by algorithm so they are understandable to the people who need to act or make decisions based on them.
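Engines such as Spark make this kind of near-real-time processing practical at scale. As a rough illustration of the underlying idea only--this is a minimal pure-Python sketch, not Spark--here is an incrementally updated sliding-window aggregation over an event stream (the event names and window size are hypothetical):

```python
from collections import Counter, deque

def windowed_counts(events, window=3):
    """Count event types over a sliding window, emitting a snapshot
    after each arrival -- a toy version of the incremental, windowed
    aggregation that streaming engines perform at scale."""
    window_buf = deque(maxlen=window)  # keeps only the last `window` events
    snapshots = []
    for event in events:
        window_buf.append(event)
        snapshots.append(dict(Counter(window_buf)))
    return snapshots

stream = ["click", "view", "click", "buy", "view"]
for snap in windowed_counts(stream):
    print(snap)
```

The point of the sketch is the shape of the computation: each arriving event updates a bounded state and immediately yields a fresh result, rather than waiting for a batch job over the full dataset.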
With innovative and accessible machine learning services and APIs, from image recognition to chatbots, machine learning is growing increasingly important both for the core functionality of apps and for the features that make an app stand out from the crowd. Through the application of machine learning, apps are making their mark on the world. New York City is harnessing the power of what was recently coined the Internet of Recognition to improve traffic flow, using cameras, image recognition, data streaming, and business intelligence. Chatbots, meanwhile, now understand language, not just commands, and continue to grow smarter with each conversation.
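To make the "smarter with each conversation" idea concrete, here is a deliberately tiny, hypothetical sketch: a lookup-table bot that expands its repertoire when users teach it. Real assistants use learned language models rather than tables, so treat this purely as an illustration of state accumulating across conversations (all names and replies below are invented):

```python
class TrainableBot:
    """Minimal keyword chatbot that 'grows smarter' by storing
    question/answer pairs supplied during conversations."""

    def __init__(self):
        self.responses = {"hello": "Hi there!"}

    def reply(self, message):
        # Look up a normalized form of the message; fall back if unknown.
        return self.responses.get(message.lower().strip(),
                                  "I don't know that yet.")

    def teach(self, message, answer):
        # Each taught pair expands what the bot can handle next time.
        self.responses[message.lower().strip()] = answer

bot = TrainableBot()
print(bot.reply("Hello"))            # known greeting
print(bot.reply("What's an API?"))   # unknown -> fallback
bot.teach("What's an API?", "A programmatic interface between services.")
print(bot.reply("What's an API?"))   # now answered from the conversation
```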
In a previous post, we described the details of NSynth (Neural Audio Synthesis), a new approach to audio synthesis using neural networks. As a quick reminder, the NSynth algorithm works by finding a compressed representation of sound (let's call it "z"). Similar to how a good instrument controls a large range of sound with a small number of intuitive parameters, compressed representations are meaningful if they cover a large range of sounds and are easy to manipulate. While the model was trained on single-instrument sounds, it can be applied to any sound of any length because it uses temporal embeddings that scale with the length of the sound.
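The key structural property--temporal embeddings whose length scales with the input--can be sketched in a few lines of NumPy. This is emphatically not the NSynth encoder (which is a deep convolutional network); the frame hop, embedding width, and random-projection "encoder" below are stand-ins chosen only to show why z grows with the sound and why frame-wise mixing of two z's is possible:

```python
import numpy as np

rng = np.random.default_rng(0)
HOP, DIM = 512, 16   # hypothetical frame hop and embedding width
# Toy fixed "encoder": a random projection from one frame to DIM dims.
PROJ = rng.standard_normal((HOP, DIM)) / np.sqrt(HOP)

def encode(audio):
    """Map each HOP-sample frame of audio to a DIM-d vector, so the
    embedding's temporal axis scales with the input's length."""
    n_frames = len(audio) // HOP
    frames = audio[: n_frames * HOP].reshape(n_frames, HOP)
    return frames @ PROJ          # z has shape (n_frames, DIM)

snd_a = rng.standard_normal(4096)      # shorter sound -> 8 frames
snd_b = rng.standard_normal(8192)      # longer sound  -> 16 frames
z_a, z_b = encode(snd_a), encode(snd_b)
print(z_a.shape, z_b.shape)

# Morphing two sounds amounts to mixing their embeddings frame-wise.
z_mix = 0.5 * z_a + 0.5 * z_b[: len(z_a)]
```

Because z is a sequence of per-frame vectors rather than one fixed-size code, any input length yields a valid embedding, which is what lets the trained model be applied to sounds of any length.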
Gartner predicts that by 2019, deep learning -- AI's self-improving grandchild -- will provide best-in-class performance for demand, fraud, and failure prediction. Rather than leaving business users deflated by siloed business intelligence tools, self-service analytics will let more of them get marketing insights from their data exactly when they need them. This shift will allow enterprises to focus on their buyers in real time and deliver on their business value. In his role as Practice Director, Mo is focused on building the Artificial Intelligence & Deep Learning consulting practice by mentoring and advising clients and providing guidance on ongoing Deep Learning projects.
The biggest issue facing artificial intelligence right now is the question of 'Why did the AI make a decision?' The problem we have now in research and academia is the lack of collaborative research on AI across multiple fields--science, engineering, medicine, the arts. We have a hard enough time telling people why the AI made a certain decision. Actually, what drives reverse engineering of the brain and the personalization of AI is not research in academia; it's more the lawyers coming in and asking 'Why is the AI making these decisions?'
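For simple models, the "why" question does have straightforward answers; the difficulty the quote points at arises with deep models. As a baseline illustration, here is a toy linear scorer whose decision can be decomposed exactly into per-feature contributions (the weights, features, and applicant values are entirely hypothetical, and this decomposition does not carry over directly to deep networks):

```python
# A toy linear decision model: score = sum of weight * feature value.
weights = {"income": 0.4, "debt": -0.7, "age": 0.1}    # hypothetical model
applicant = {"income": 5.0, "debt": 6.0, "age": 3.0}   # hypothetical input

# Each feature's contribution to the score is directly readable.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score > 0 else "decline"

print(f"decision: {decision} (score {score:.1f})")
for feat, c in sorted(contributions.items(),
                      key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feat:>6}: {c:+.1f}")
```

Here the answer to "why was this applicant declined?" is simply the largest negative contribution; for deep models, no such exact per-feature ledger exists, which is what makes the explainability question hard.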
In a series of experiments using teams of human players and robotic AI players, researchers found that including "bots" boosted the performance of both the human groups and the individual players. The study adds to a growing body of Yale research into the complex dynamics of human social networks and how those networks influence everything from economic inequality to group violence. For this study, Christakis and first author Hirokazu Shirado conducted an experiment involving an online game that required groups of people to coordinate their actions toward a collective goal. People whose performance improved when working with the bots subsequently influenced other human players to raise their game.
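The coordination task in this line of work asks networked players to settle on colors that differ from their neighbors'. The following is a hypothetical, much-simplified Python simulation of that kind of game--a small ring network, greedy "human" moves, and bots that occasionally act randomly; the network size, bot positions, and noise rate are illustrative assumptions, not the study's parameters:

```python
import random

random.seed(42)

# Nodes on a ring must each pick a color differing from both neighbors.
N, COLORS = 12, [0, 1, 2]
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
color = {i: random.choice(COLORS) for i in range(N)}
bots = {0, 6}    # hypothetical bot positions in the network
NOISE = 0.1      # bots act randomly a fraction of the time

def conflicts():
    """Number of edges whose endpoints currently share a color."""
    return sum(color[i] == color[j]
               for i in range(N) for j in neighbors[i]) // 2

def play(node):
    if node in bots and random.random() < NOISE:
        color[node] = random.choice(COLORS)           # noisy bot move
    else:
        used = {color[j] for j in neighbors[node]}    # greedy move
        color[node] = next(c for c in COLORS if c not in used)

for _ in range(100):                  # interactive phase, noise included
    play(random.randrange(N))
for node in range(N):                 # final noise-free sweep settles it
    used = {color[j] for j in neighbors[node]}
    color[node] = next(c for c in COLORS if c not in used)

print("remaining conflicts:", conflicts())
```

A ring with three colors always leaves each node a free choice, so the final greedy sweep resolves every conflict; the interesting dynamics in the real experiments came earlier, where a little bot randomness could shake groups out of stuck configurations.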
Scientists call our ability to understand another person's thoughts -- to intuit their desires, read their intentions, and predict their behavior -- theory of mind. The brain network identified in macaques is located in the same areas of the brain associated with theory of mind in humans, suggesting the human circuitry may share an evolutionary origin with that of macaques. The macaques' mirror neuron regions were also active when the animals watched videos of other monkeys interacting socially, and even when they watched objects colliding with other objects. The researchers also identified a part of the network that responded exclusively to social interactions, remaining nearly inactive in their absence.