If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
This model learns low-dimensional vectors to represent vertices appearing in a graph and, unlike existing work, integrates global structural information of the graph into the learning process. We also formally analyze the connections between our work and several previous research efforts, including the DeepWalk model of Perozzi et al. as well as the skip-gram model with negative sampling of Mikolov et al. We conduct experiments on a language network, a social network, and a citation network, and show that our learned global representations can be effectively used as features in tasks such as clustering, classification and visualization. Empirical results demonstrate that our representation significantly outperforms other state-of-the-art methods in such tasks.
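The abstract describes the approach only at a high level, and the paper's actual objective is not reproduced in this passage. As a rough, purely illustrative sketch of the skip-gram-with-negative-sampling idea it builds on, the following toy example treats each observed edge as a positive (vertex, context) pair and samples random vertices as negatives; the graph, dimensions, and hyperparameters are all invented for illustration and are not from the paper:

```python
import numpy as np
import random

random.seed(0)
np.random.seed(0)

# Toy undirected graph as an edge list (hypothetical example)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n_nodes = 6

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_embeddings(edges, n_nodes, dim=8, neg=5, lr=0.05, epochs=200):
    """SGNS-style training: each edge yields a positive (vertex, context)
    pair; `neg` randomly drawn vertices serve as negative samples."""
    vert = (np.random.rand(n_nodes, dim) - 0.5) / dim  # vertex vectors
    ctx = np.zeros((n_nodes, dim))                     # context vectors
    for _ in range(epochs):
        for u, v in edges:
            for src, dst in ((u, v), (v, u)):          # edge used both ways
                pairs = [(dst, 1.0)] + [
                    (random.randrange(n_nodes), 0.0) for _ in range(neg)
                ]
                for t, label in pairs:
                    # gradient of the logistic loss for this pair
                    g = lr * (label - sigmoid(vert[src] @ ctx[t]))
                    ctx[t] += g * vert[src]
                    vert[src] += g * ctx[t]
    return vert

emb = train_embeddings(edges, n_nodes)
print(emb.shape)  # (6, 8)
```

After training, each row of `emb` is the learned vector for one vertex and can be fed to downstream clustering or classification, as the abstract suggests.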
The deployment of machine learning models is the process of making your models available in production environments, where they can provide predictions to other software systems. It is only once models are deployed to production that they start adding value, making deployment a crucial step. However, there is complexity in the deployment of machine learning models. At the very least, this post aims to make you aware of where this complexity comes from, and I'm also hoping it will provide you with useful tools and heuristics to combat it. If it's code, step-by-step tutorials and example projects you are looking for, you might be interested in the Udemy Course "Deployment of Machine Learning Models".
T-Mobile prides itself on being a disruptor in the world of wireless communications, always thinking creatively about the relationship it wants to have with its consumers. That includes the company's approach to using AI for customer service. Using the predictive capabilities of machine learning to improve customer service is a great example of AI augmenting human abilities. T-Mobile sees it as an opportunity to serve customers better and faster, benefiting the company and its service agents while also enriching the customer experience and creating stronger human-to-human connections. "Most industries have looked to use AI and machine learning to build more sophisticated Interactive Voice Response (IVR) systems and chatbots as a means to deflect for as long as possible the interaction between a human customer service agent and the customer," says Cody Sanford, executive vice president and chief information officer at T-Mobile.
"The Personalization team makes deciding what to play next on Spotify easier and more enjoyable for every listener. We seek to understand the world of music and podcasts better than anyone else so that we can make great recommendations to every individual person and keep the world listening. Everyday, hundreds of millions of people all over the world use the products we build which include destinations like "Home" and "Search" as well as original playlists such as "Discover Weekly" and "Daily Mix."
There have been huge advancements in recent years in the area of AI "deepfakes", or fake photos or videos of humans created using neural networks. Fake videos of a person usually require a large number of photos of that individual, but Samsung has figured out how to create realistic talking heads from just a single portrait photo. In a newly published paper titled "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models," a team of researchers at the Samsung AI Center in Moscow, Russia, share their new system that has this "few-shot capability." Once it's familiar with human faces, it's able to create talking heads of previously unseen people using one or a few shots of that person. For each photo, the AI is able to detect various "landmarks" on the face -- things like the eyes, nose, mouth, and various lengths and shapes.
In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.
Ever wonder how neuroscientists are able to monitor and study what happens inside a living brain in action? One of the challenges in neuroscience is observing the activity of neurons in intact brain tissue in a living organism--in vivo. One approach, two-photon calcium imaging, is a method developed circa 1990. In mammalian neurons, calcium is an intracellular messenger. This imaging approach involves loading calcium-ion (Ca2+) indicator dyes into the desired brain region for neuronal monitoring and using a two-photon laser scanning microscope for visualization.
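The passage describes the imaging method itself; as an aside on how such recordings are commonly quantified (a widespread convention, not something stated in this passage), calcium signals are often reported as dF/F, the fluorescence change relative to a baseline estimate F0. A minimal NumPy sketch on a synthetic trace, with the baseline taken as a low percentile of the signal (an assumed convention; labs differ in how F0 is estimated):

```python
import numpy as np

np.random.seed(0)

# Synthetic fluorescence trace: noisy baseline near 1.0 plus one
# Gaussian-shaped calcium transient centered at sample 150 (made up data)
t = np.arange(300)
baseline = 1.0 + 0.01 * np.random.randn(300)
transient = 0.5 * np.exp(-((t - 150) ** 2) / (2 * 5.0**2))
trace = baseline + transient

def delta_f_over_f(trace, baseline_percentile=10):
    """dF/F: fluorescence change relative to baseline F0, where F0 is
    estimated here as a low percentile of the trace."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

dff = delta_f_over_f(trace)
print(np.argmax(dff))  # peak response lands near the transient at sample 150
```

The percentile-based baseline is robust to the transient itself, since the transient occupies only a small fraction of the samples.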