The next time you sit down to watch a movie, the algorithm behind your streaming service might recommend a blockbuster that was written by AI, performed by robots, and animated and rendered by a deep learning algorithm. An AI algorithm may even have read the script and suggested the studio buy the rights. It's true that some jobs and tasks are being rendered obsolete now that computers can do them better, and it's easy to conclude that the film industry will go the way of the factory worker and the customer service rep, that artistic filmmaking is in its death throes. But that narrative doesn't apply here: artificial intelligence seems to have enhanced Hollywood's creativity, not squelched it.
Remember the movie "The Imitation Game"? The tragic story of a brilliant man who decrypted secret German Enigma messages, indirectly shortened World War II, and saved millions of lives, only to be prosecuted for homosexuality, forced to undergo chemical treatment, and driven to end his life shortly after? The real Alan Turing accomplished far more than this. He also published papers on theories of artificial intelligence (AI). In fact, the title "The Imitation Game" had little to do with codebreaking. It referred to a game he described in one of his papers, in which a machine imitates a human so well that a person in another room is fooled into believing they are communicating with another human. Turing was a pioneer in the field of computer science. Only after his death would he be known as the father of AI.
Today, Artificial Intelligence (AI) and Machine Learning (ML) are two popular terms that tech companies cannot stop talking about. Everyone from Google and Microsoft to Apple, Samsung and Amazon is going big on AI. Beyond smartphones, AI is at work in smart speakers, voice assistants, apps, connected cars, security surveillance, healthcare and customer support. Machine learning (and deep learning) has been around for years, and with the data that now exists, tech companies are putting it to good use. On-device machine learning combined with artificial intelligence can help anticipate things in advance.
While modern digital cameras have made significant strides in shooting cleaner images at high ISOs, many photographers still do battle with image noise on a regular basis. Chip maker NVIDIA has just revealed a new technique, based on deep learning, that can quickly dispatch image noise. As NVIDIA explains, typical deep learning approaches have required training a neural network to recognize what a clean end-state image should look like from example pairs of noisy and clean images. Armed with this information, the network can then take a fresh, noisy image and remove the noise. But NVIDIA's new technique works without ever needing to be fed clean reference images.
What if you could automatically remove the noise and artifacts from photos that were originally taken in low light? Have grainy or pixelated images in your photo library and want to fix them? This deep learning-based approach has learned to fix photos by looking only at examples of corrupted photos. The work was developed by researchers from NVIDIA, Aalto University, and MIT, and is being presented at the International Conference on Machine Learning in Stockholm, Sweden this week. Recent deep learning work in the field has focused on training a neural network to restore images by showing it example pairs of noisy and clean images.
There is a new and apparently better way to fix grainy photos, and it uses AI. Artificial intelligence coupled with lots of computing power, that is, but computing power is growing day after day. Artificial intelligence and machine learning will never cease to surprise us, apparently. Each new day brings some new announcement in a different field: AI can render 3D hair in real time, smell illnesses in human breath, assess infrastructure quality in Africa or help transform audio into music-playing avatars. Now it can also help photographers get rid of noise in their photos.
One of the core features of any computer system used in movies or cop dramas is the ability to enhance a degraded image and add information into it, and thanks to work by Nvidia, MIT, and Helsinki's Aalto University, reality and fiction are a step closer to meeting. Detailed in a paper [PDF], the system dubbed Noise2Noise could be used for cleaning up low-light and astronomical photography, magnetic resonance imaging, and removing text from photos without needing to sight the original, the researchers said. "It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars," the paper said. "[The neural network] is on par with state-of-the-art methods that make use of clean examples -- using precisely the same training methodology, and often without appreciable drawbacks in training time or performance. "Of course, there is no free lunch -- we cannot learn to pick up features that are not there in the input data -- but this applies equally to training with clean targets."
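The statistical trick that makes this possible can be illustrated without any neural network: under a squared-error loss, the best prediction for a set of targets is their mean, so zero-mean noise in the targets averages itself away during training. The toy NumPy sketch below (not the authors' code; the network, image data, and training loop are all elided) shows a "denoiser" recovering a clean value it has never seen, using only noisy targets:

```python
import numpy as np

rng = np.random.default_rng(0)

clean = 0.7  # the true pixel value; the "denoiser" never observes it
# Many noisy observations of that pixel, corrupted by zero-mean noise
noisy_targets = clean + rng.normal(0.0, 0.1, size=100_000)

# Minimizing squared error against the noisy targets yields their mean,
# which converges to the clean value as the noise averages out
estimate = noisy_targets.mean()
```

The full Noise2Noise method applies this same principle pixel by pixel with a convolutional network, pairing one noisy realization of an image with another noisy realization as the training target.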
It feels like artificial intelligence crept into our lives almost without us knowing, helping us pick movies on Netflix, our favourite tunes on Spotify and buy things on Amazon. As it gets older and smarter, AI's reach will be staggering, with experts at the 2018 Davos World Economic Forum predicting there's a 50-per-cent chance artificial intelligence will outperform humans in all tasks in 45 years. Consider the ways it's already at work in our lives. There is face recognition to unlock our phones; fraud detection on credit cards; smart homes that call Uber, dim lights and lower the heat; fridges that give us recipes when we pull something out for dinner, and stoves that begin to preheat (because they talk to the fridge). All possible because AI – or "deep learning" technology – sorts and identifies huge swaths of data and connects the dots (or thinks) for us.
Amateur and professional musicians alike may spend hours poring over YouTube clips to figure out exactly how to play certain parts of their favorite songs. But what if there were a way to play a video and isolate only the instrument you wanted to hear? That's the outcome of a new AI project out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL): a deep-learning system that can look at a video of a musical performance, isolate the sounds of specific instruments, and make them louder or softer. The system, which is "self-supervised," doesn't require any human annotations on what the instruments are or what they sound like. Trained on over 60 hours of videos, the "PixelPlayer" system can view a never-before-seen musical performance, identify specific instruments at pixel level, and extract the sounds that are associated with those instruments.
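At the audio end, systems like this typically separate a mixture by applying a mask to its frequency representation. The NumPy sketch below illustrates only that masking step with a hand-picked frequency cutoff; PixelPlayer instead learns its masks from the video pixels, and the two sine tones here are merely stand-ins for instruments:

```python
import numpy as np

sr = 8000                               # sample rate in Hz
t = np.arange(sr) / sr                  # one second of audio
low = np.sin(2 * np.pi * 220 * t)       # stand-in for one instrument
high = np.sin(2 * np.pi * 1760 * t)     # stand-in for another
mix = low + high                        # the mixture we "hear"

spectrum = np.fft.rfft(mix)             # frequency representation of the mix
freqs = np.fft.rfftfreq(len(mix), 1 / sr)

# Hand-picked mask keeping only the low instrument's band;
# PixelPlayer would predict this mask from the video instead
mask = freqs < 1000
isolated = np.fft.irfft(spectrum * mask, n=len(mix))
```

Because the two tones occupy disjoint frequency bands, the masked reconstruction closely matches the low tone alone; real instruments overlap in frequency, which is exactly why a learned, context-dependent mask is needed.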