Want Disruptive Change? There's An Algorithm For That (Or Soon Will Be)


Trust me – it's not you. Our world really is more unpredictable than ever. Even the best-laid strategies are being disrupted, whether they are focused on the workplace's culture, technical environment, market dynamics, customer behavior, or business processes. But central to these uncertainties is one constant: an algorithm guiding every step along the evolutionary trail to digital transformation. "Each company has a predictable algorithm that's driving its business model," said Sathya Narasimhan, senior director for Partner Business Development at SAP, on a live episode of Coffee Break with Game Changers Radio, presented by SAP and produced and moderated by SAP's Bonnie D. Graham.

Deep learning and artificial intelligence: Making a big deal of big data


AWS DeepLens: Looking for a new way to learn machine learning? Let a machine teach you with AWS DeepLens, the world's first deep learning-enabled video camera for developers.

Colorising Black & White Photos using Deep Learning


Previous works have used deep learning, applying regression to predict the colour of each pixel. This, however, produces fairly bland and dull results, because those works used Mean Squared Error (MSE) as the loss function to train the model. The authors noted that MSE will try to 'average' out the colours in order to minimise the error, which results in a washed-out look.
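The averaging effect is easy to demonstrate numerically. Below is a minimal, hypothetical sketch (not the authors' code): for a pixel whose plausible colours are multimodal, the MSE-optimal prediction is the mean of the targets, which is a desaturated blend rather than either vivid colour actually observed.

```python
import numpy as np

# Hypothetical training data for one ambiguous pixel (RGB in [0, 1]):
# half the examples paint it saturated red, half saturated blue.
targets = np.array([[1.0, 0.0, 0.0]] * 50 + [[0.0, 0.0, 1.0]] * 50)

def mse(pred):
    """Mean squared error of a single constant prediction over all targets."""
    return np.mean((targets - pred) ** 2)

# The constant prediction minimising MSE is the mean of the targets --
# a muted purple, not red or blue.
best_pred = targets.mean(axis=0)

print(best_pred)  # [0.5 0.  0.5] -- a desaturated blend
print(mse(best_pred) < mse(np.array([1.0, 0.0, 0.0])))  # True
```

This is why a regression model trained with MSE hedges toward greyish, low-saturation colours whenever the true colour is ambiguous.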

Music Generation with Azure Machine Learning


This post is authored by Erika Menezes, Software Engineer at Microsoft. Using deep learning to learn feature representations from near-raw input has been shown to outperform traditional task-specific feature engineering in multiple domains, including object recognition, speech recognition, and text classification. With recent advancements in neural networks, deep learning has been gaining popularity in computational creativity tasks such as music generation. There has been great progress in this field via projects such as Magenta, an open-source project from the Google Brain team focused on machine learning for art and music, and Flow Machines, which has released an entire AI-generated pop album.

2018 Machine Learning Predictions from the Experts Themselves


Our experience planning data science conferences across many industries has allowed us to gather valuable insights into the field's most ambitious goals and research advancements. As the data science community heads toward 2018, we asked our top speakers to comment on 2017's most impactful achievements in artificial intelligence and to make a few predictions for 2018. This post summarizes the most notable insights, with expert commentary on the advancements, predictions, and lessons learned regarding machine learning algorithms and deep learning systems. Daniel Monistere, SVP of Client Solutions at Nielsen, points to the advancement of electronic devices and the growth in their storage and data-gathering capabilities; applications, too, have become intelligent, able to collect user data.

This frostbitten black metal album was created by an artificial intelligence


"Coditany of Timeness" is a convincing lo-fi black metal album, complete with atmospheric interludes, tremolo guitar, frantic blast beats and screeching vocals. But the record, which you can listen to on Bandcamp, wasn't created by musicians. Instead, it was generated by two musical technologists using deep learning software that ingests a musical album, processes it, and spits out an imitation of its style. To create Coditany, the software broke "Diotima," a 2011 album by the New York black metal band Krallice, into small segments of audio. They then fed each segment through a neural network -- a type of artificial intelligence modeled loosely on a biological brain -- and asked it to guess the waveform of the next individual audio sample.
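The next-sample prediction task described above can be sketched as a data-framing step. This is a simplified illustration, not the technologists' actual setup: given a raw 1-D waveform, each training pair consists of a window of preceding samples (the context) and the single sample that follows it (the target the network must guess).

```python
import numpy as np

def make_training_pairs(audio, window=64):
    """Slice a 1-D waveform into (context, next_sample) training pairs.

    Each row of X holds `window` consecutive samples; the matching entry
    of y is the very next sample the model should predict.
    """
    X = np.array([audio[i:i + window] for i in range(len(audio) - window)])
    y = audio[window:]
    return X, y

# Toy stand-in for an ingested album segment: one second of a 440 Hz tone
# at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
segment = np.sin(2 * np.pi * 440 * t).astype(np.float32)

X, y = make_training_pairs(segment)
print(X.shape, y.shape)  # (7936, 64) (7936,)
```

Repeating the prediction autoregressively, by appending each guessed sample to the context and predicting again, is what lets such a model generate new audio in the style of the album it ingested.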

Vivienne Sze receives Engineering Emmy Award

MIT News

Vivienne Sze, an associate professor of electrical engineering and computer science, was a member of the Joint Collaborative Team on Video Coding (JCT-VC), which developed the acclaimed High Efficiency Video Coding (HEVC) standard. For its work, the team received an Engineering Emmy Award during the Television Academy's recent 69th Engineering Emmy Awards ceremony in Hollywood. In a statement about the award, the Television Academy said HEVC "has enabled efficient delivery in ultra-high-definition (UHD) content over multiple distribution channels." "This new compression coding has been adopted, or selected for adoption, by all UHD television distribution channels, including terrestrial, satellite, cable, fiber, and wireless, as well as all UHD viewing devices, including traditional televisions, tablets, and mobile phones," the Academy stated. The JCT-VC's award was one of seven Emmys given to individuals, companies, or organizations for engineering innovations that significantly improve television transmission, recording, or reception.

Learn how to program for machine learning with Amazon's new Deeplens camera


It may look like a mild-mannered home security camera, but Amazon's AWS DeepLens is anything but. Announced today at the AWS re:Invent 2017 conference, the $249 (£185/AU$330 converted) DeepLens video camera is designed to help train developers in deep learning programming techniques, with availability projected for April 14, 2018. Deep learning has become a catch-all term for the AI smarts that dominate today's smart home: it's what fuels Amazon's Alexa-enabled speakers, what makes them able to differentiate among various voices, and what makes facial recognition cameras able to distinguish you from your neighbor.

Amazon's AI camera helps developers harness image recognition


Far from the stuff of science fiction, artificial intelligence is becoming just another tool for developers to build the next big thing. It's built into Photoshop to help you knock out backgrounds, Google is using AI to figure out if someone is peeping at your phone, and Microsoft uses the technology to teach you Chinese. As Amazon's Jeff Barr says, "I think it is safe to say, with the number of practical applications for machine learning, including computer vision and deep learning, that we've turned the corner" towards practical applications for AI. To that end, Amazon has announced AWS DeepLens, a new video camera that runs deep learning models right on the device. The DeepLens has a 4-megapixel camera that can capture 1080p video, along with a 2D microphone array.

Amazon unveils DeepLens, a $249 camera for deep learning


Amazon Web Services today unveiled DeepLens, a wireless video camera made for the quick deployment of deep learning. The camera will cost $249 and is scheduled to ship to customers in the United States in April 2018. DeepLens comes pre-loaded with AWS Greengrass for local computation and can operate with SageMaker, a new service to simplify the deployment of AI models, as well as popular open source AI frameworks such as TensorFlow from Google and Caffe2 from Facebook, according to an AWS blog. "DeepLens runs the model directly on the device. The video doesn't have to go anywhere."