Media


From 2D to 3D Photo Editing – imgly

#artificialintelligence

Last November, we released Portrait, an iOS app that helps create amazing, stylized selfies and portraits instantly. With over a million downloads and many more portrait images created, we had a nice confirmation of our vision. The central component of Portrait is an AI trained to clip portraits from the background, a technique we are eager to further improve and refine. In fact, Portrait helped us explore a novel technique for image editing, as we were able to leverage a powerful new data set in photography: depth data. We began feeding our AI models with depth data from the iPhone X's TrueDepth camera with one goal in mind: to infer depth information for portrait imagery, that is, to bring three-dimensionality into a two-dimensional photo.
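
As a concrete illustration of the technique the post describes, the sketch below trains a toy encoder-decoder to regress a depth map from a single RGB image. It is a minimal PyTorch example under assumed (image, depth) training pairs; the TinyDepthNet architecture, layer sizes, and loss are illustrative stand-ins, not imgly's actual model.

```python
# Minimal monocular depth-estimation sketch (hypothetical, not imgly's model).
# Assumes a dataset of (RGB portrait, TrueDepth depth map) training pairs.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Encoder-decoder that maps a 3-channel image to a 1-channel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # H -> H/2
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # H/2 -> H/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # H/4 -> H/2
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # H/2 -> H
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # a common choice for depth regression

# One training step on a dummy batch standing in for (image, TrueDepth) pairs.
images = torch.rand(4, 3, 128, 128)   # RGB portraits
depths = torch.rand(4, 1, 128, 128)   # ground-truth depth from the TrueDepth camera
pred = model(images)
loss = loss_fn(pred, depths)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference time, the same network would take a plain 2D portrait and emit a per-pixel depth map, which is the "three-dimensionality" the post refers to.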


AWS takes DeepLens, a machine learning camera, GA – ZDNet

#artificialintelligence

Amazon's DeepLens, a deep learning enabled video camera, is now generally available and hitting the market for $249. AWS DeepLens is designed to run models via TensorFlow and Caffe with less than 10 minutes of startup time for developers. The overall effort is to put more machine learning tools into the field and into developers' hands. As for the hardware, DeepLens is a 4-megapixel camera with 1080p video, a 2D microphone array, an Intel Atom processor and 8GB of memory for models and code. The device runs Ubuntu 16.04, AWS Greengrass Core and optimized versions of the MXNet and Intel clDNN libraries.


The State Of The ARt At AWE 18

Forbes Technology

The 9th annual Augmented World Expo at the Santa Clara Convention Center, May 29th to June 1st, 2018, was a celebration of AR's progress. Watershed events, like the introduction of ARKit from Apple in September 2017, have spurred innovation. Mobile AR is very hot. Most of the glasses look dorky, though some are slimming down. The dorky ones were by far the most popular. The bigger story, however, is how fast the enterprise segment is growing, as applications as straightforward as schematics on a head-mounted monocular microdisplay are transforming manufacturing, assembly, and warehousing. Tom Emrich, Programmer of AWE and a partner in Super Ventures, delivered his dramatic keynote at AWE using motion capture technology. For AWE's co-founder and Executive Producer, Ori Inbar, the conference was nothing less than a victory lap. With Microsoft and Qualcomm among the Gold Sponsors, there was a palpable smell of vindication in the air.


J.J. Abrams' Bad Robot expands into gaming with China's Tencent

Engadget

J.J. Abrams' Bad Robot Productions, the company behind blockbuster films and TV shows like Star Trek, Mission Impossible: Ghost Protocol, Lost and Westworld, is making the jump into gaming. It's joining forces with Chinese gaming giant Tencent and minority partner Warner Bros. to launch Bad Robot Games. "I'm a massive games fan, and increasingly envious of the amazing tools developers get to work with, and the worlds they get to play in," said Abrams in a statement. The company will team with traditional game developers on both AAA and indie games for PC, mobile and console. Abrams said the company will take a "unique co-development approach," bringing its storytelling and visual chops to projects.


Intel shows how Movidius AI chips and Windows ML will let PCs anticipate your needs

PCWorld

Intel envisions a future where your PC will simply anticipate your habits and act accordingly. But it's not clear when that future will arrive, how realistic that vision is, or whether consumers will tolerate a computer that predicts their every move. What we know is this: Intel is building a future version of its tiny desktop PCs, the NUCs, with Amazon's Alexa assistant built in. The Intel "Bean Canyon" NUC, whose "Bean" is a coffee-bean nod to the "Coffee Lake" chip built inside it, will arrive later this year. Meanwhile, Intel is adapting its Movidius chips into "AI chips" that will power these intelligent future experiences.


Apple needs to play nice with Spotify

#artificialintelligence

With WWDC a couple of days out, we're coming up on one year since Apple first showed off its glitzy answer to the Amazon Echo and Google Home smart speakers. It took over eight months from then for the HomePod to finally hit shelves, and until just a couple of days ago for all the promised functionality to arrive. Four months after launch, it's clear Apple delivered some awesome hardware, but there are plenty of features I want to see the HomePod pick up when Apple takes the stage at its annual developer conference to talk iOS 12. Of all the criticisms levied against the device, the weightiest has been that there isn't even a vague reason to consider buying the speaker unless you are an Apple Music subscriber. For Apple Watch LTE users who want to listen to non-Apple Music tunes, the same is true to a lesser degree.


Mapbox to Bring AI-Powered Vision SDK to Microsoft Azure IoT Platform

#artificialintelligence

The Mapbox Vision SDK provides augmented reality (AR) navigation, along with driver alerts for speed limits, pedestrians, vehicles and other event-based triggers for responsive apps. The integration with Azure IoT Hub will provide developers with a holistic solution that aggregates cloud data using artificial intelligence (AI) and machine-learning technologies to send reports. Mapbox plans to integrate the open-sourced Azure IoT Edge runtime, which provides custom logic, management and communications functions for edge devices. Events detected by the Vision SDK and routed through Azure IoT Edge will enable developers to build responsive applications that both provide immediate feedback to the driver and stream semantic event data into Microsoft Cognitive Services for additional analysis. Reports might include sending collision incidents to an insurance platform, routing heavy-traffic or blocked-roadway alerts to a dispatch network, or feeding intersection-crossing activity to a business intelligence platform that analyzes route paths.
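
To make the data flow concrete, here is a minimal sketch of the device-to-cloud half of such a pipeline: a script that serializes one detected driving event and publishes it to Azure IoT Hub using the azure-iot-device Python SDK. The event schema, connection string, and send_vision_event helper are hypothetical placeholders, and the Vision SDK's real callback API is not shown; this is a sketch of the general pattern, not Mapbox's actual integration.

```python
# Rough sketch of streaming an edge-detected driving event to Azure IoT Hub.
# The event structure below is hypothetical; only the azure-iot-device client
# calls are standard SDK usage. Requires the `azure-iot-device` package.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder

def send_vision_event(client: IoTHubDeviceClient, event: dict) -> None:
    """Serialize one detected event and publish it to IoT Hub."""
    msg = Message(json.dumps(event))
    msg.content_type = "application/json"
    msg.content_encoding = "utf-8"
    client.send_message(msg)

if __name__ == "__main__":
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    # A hypothetical semantic event of the kind the article describes, e.g. a
    # speed-limit alert that a dispatch or insurance backend could consume.
    event = {
        "type": "speed_limit_alert",
        "limit_kph": 50,
        "observed_kph": 63,
        "lat": 37.7749,
        "lon": -122.4194,
    }
    send_vision_event(client, event)
    client.shutdown()
```

On the cloud side, IoT Hub would fan these messages out to whatever backend consumes them, which is where the article's insurance, dispatch, or business intelligence scenarios would plug in.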


100 years of motion-capture technology

Engadget

Modern motion-capture systems are the product of a century of tinkering, innovation and computational advances. Mocap was born a lifetime before Gollum hit the big screen in The Lord of the Rings, and ages before the Cold War, Vietnam War or World War II. It was 1915, in the midst of the First World War, when animator Max Fleischer developed a technique called rotoscoping and laid the foundation for today's cutting-edge mocap technology. Rotoscoping was a primitive and time-consuming process, but it was a necessary starting point for the industry. In the rotoscope method, animators stood at a glass-topped desk and traced over a projected live-action film frame-by-frame, copying actors' or animals' actions directly onto a hand-drawn world.


From 'pretty please' mode to Digital Wellbeing, Google unveils tech with a responsible message

Washington Post

Google's annual developer conference is normally a relentlessly positive cheerleading session to excite developers to create products for the company and its Android operating system. But this year, there was a hint of a more serious tone as the company discussed creating technology that is not simply innovative, but responsible. The theme of the company's annual conference was "Make Good Things Together." Google chief executive Sundar Pichai said in a keynote address to about 7,000 developers and journalists that Google wants to push ahead to innovate, but he acknowledged that the tech giant can't be "wide-eyed" about it. "There are important questions being raised about the impact of these advances and the role they'll play in our lives," he said.