There's still a long way to go before complex human traits like humor can be properly emulated by artificial intelligence, but Alphabet Inc. is already starting to inject wit into the research effort. The company last week published a machine learning model called "Parsey McParseface" that can automatically map out the linguistic structure of any English-language text. The parser, hailed as the most accurate of its kind yet, was built with a neural network framework that became available on GitHub at the same time. Alphabet hopes that its contribution will ease the development of virtual assistants and other modern applications that deal with large volumes of human-generated text. Equally important for the search giant, the move will also cement its position in the open-source machine learning community, which has emerged as a key focus area for the web-scale crowd.
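A dependency parser like Parsey McParseface reads a sentence and assigns each word a head word and a grammatical relation, forming a tree. The sketch below is purely illustrative and is not the actual SyntaxNet API; the sentence, token structure, and helper function are invented here to show the kind of output such a parser produces:

```python
# Illustrative model of a dependency-parse result (not the SyntaxNet API).
# Each token records its head (index of the governing word, -1 for the root)
# and its grammatical relation -- the structure a parser like
# Parsey McParseface maps out for an English sentence.

from dataclasses import dataclass

@dataclass
class Token:
    text: str
    head: int      # index of the governing token, -1 for the sentence root
    relation: str  # grammatical relation to the head

# Hypothetical parse of "Alphabet published a model", for illustration only.
parse = [
    Token("Alphabet", 1, "nsubj"),   # subject of "published"
    Token("published", -1, "root"),  # main verb, root of the tree
    Token("a", 3, "det"),            # determiner of "model"
    Token("model", 1, "dobj"),       # direct object of "published"
]

def children(tokens, head_index):
    """Return the words governed by the token at head_index."""
    return [t.text for t in tokens if t.head == head_index]

# The verb "published" governs its subject and its object.
print(children(parse, 1))  # ['Alphabet', 'model']
```

Downstream applications such as virtual assistants query exactly this kind of tree, e.g. asking which noun is the object of a command verb.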
When you think of artificial intelligence (AI), do you imagine Will Smith battling humanoid robots? Think again: AI is already at work across the Internet, helping you go about your daily life without drawing attention to itself. Artificial intelligence simulates traditionally human processes such as learning, reasoning and self-correction. Unlike traditional programs, AI-based applications don't need to be continually fed data or manually recoded to change their functionality and output. AI can be, and already is, immensely useful to B2B professionals in all industries.
Scientists can now monitor and record the activity of hundreds of neurons concurrently in the brain, and ongoing technology developments promise to increase this number manyfold. However, simply recording the neural activity does not automatically lead to a clearer understanding of how the brain works. In a new review paper published in Nature Neuroscience, Carnegie Mellon University's Byron M. Yu and Columbia University's John P. Cunningham describe the scientific motivations for studying the activity of many neurons together, along with a class of machine learning algorithms, known as dimensionality reduction, for interpreting that activity. In recent years, dimensionality reduction has provided insight into how the brain distinguishes between different odors, makes decisions in the face of uncertainty, and is able to think about moving a limb without actually moving it. Yu and Cunningham contend that adopting dimensionality reduction as a standard analytical method will make it easier to compare activity patterns in healthy and abnormal brains, ultimately leading to improved treatments and interventions for brain injuries and disorders.
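The core idea behind dimensionality reduction in this setting is that the joint activity of many recorded neurons can often be summarized by a few shared latent signals. A minimal sketch of one such method, principal component analysis via NumPy's SVD, on simulated firing rates (the neuron counts, latent dimensions, and noise level here are invented for illustration and are not from the Yu and Cunningham paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 50 neurons whose firing rates are driven by 3 shared latent
# signals plus independent noise -- a common model of population activity.
n_timepoints, n_neurons, n_latents = 200, 50, 3
latents = rng.standard_normal((n_timepoints, n_latents))
mixing = rng.standard_normal((n_latents, n_neurons))
activity = latents @ mixing + 0.1 * rng.standard_normal((n_timepoints, n_neurons))

# PCA: center the data, then take the top right singular vectors.
centered = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
variance_explained = S**2 / np.sum(S**2)

# The 3 latent dimensions dominate the 50-dimensional recording,
# so the top 3 components capture nearly all the variance.
print(variance_explained[:3].sum())

# Project onto the top 3 components: a low-dimensional population
# trajectory that can be plotted and compared across conditions.
low_dim = centered @ Vt[:3].T  # shape (200, 3)
```

Comparing such low-dimensional trajectories across conditions, or between healthy and abnormal brains, is the kind of standard analysis the review advocates.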
Deep learning has been very successful in the social sciences, especially in areas where there is a lot of data. Trading is another field that can be viewed as a social science with a lot of data. With the advent of deep learning and Big Data technologies for efficient computation, we are finally able to use the same methods in investment management as we would in face recognition or in building chatbots. In his session at 20th Cloud Expo, Gaurav Chakravorty, co-founder and Head of Strategy Development at qplum, will discuss the transformational impact of Artificial Intelligence and Deep Learning in making trading a scientific process. This focus on learning a hierarchical set of concepts is turning investing into a scientific, utility-like process.