Results


Mobile AI is Huawei's not-so-secret weapon

Engadget

In lieu of a new phone, the company showed off its Kirin 970 chip at IFA 2017, calling attention to the chipset's AI capabilities. The Kirin 970 will power Huawei's next flagship phone, the Mate 10, which is set to launch at a separate October event in Munich. In addition to a slew of high-end features like powerful graphics performance (an integrated 12-core GPU), better power management (a 10nm manufacturing process) and improved LTE (Cat 18 support), the Kirin 970's standout feature is its embedded neural processing unit (NPU). That said, Huawei already launched its first AI-powered phone, the Mate 9, last year; it uses machine learning software to manage hardware resources.


Huawei's next mobile chipset is ready for our AI-powered future

Engadget

A big part of Huawei's multi-year push to improve its image has been improving the hardware it builds to go inside its phones, and its latest processor is more than up to the challenge. Earlier this year Huawei introduced "the intelligent phone" with its Mate 9, but the new hardware could help fix some annoying AI-related drawbacks of that device. Native AI processing will enable faster image and voice recognition, as well as "intelligent photography." Qualcomm takes a different approach: in its chips, a "Hexagon" DSP built for other types of number crunching works with the rest of the chip to improve AI performance.


Ninja Theory's 'Hellblade' motion capture demo video

Engadget

In a makeshift changing room filled with Disney Infinity figures, I strip down to my boxers and pull on a two-part Lycra suit. For years, movie and video game studios have used mocap to bring digital characters to life. A circular, plastic arm wraps around the front of her face, similar to orthodontic headgear, with an LED light strip and cameras fitted on the inside. The cinematics were crafted with motion capture technology developed by Weta Digital, a visual effects company in New Zealand co-owned by Peter Jackson.


Intel puts Movidius AI tech on a $79 USB stick

Engadget

Last year, Movidius announced its Fathom Neural Compute Stick -- a USB thumb drive that makes its image-based deep learning capabilities super accessible. Now Intel has announced that the deep neural network processing stick is available under its new name, the Movidius Neural Compute Stick. "Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor," said Intel in a statement. The Compute Stick contains a Myriad 2 Vision Processing Unit that uses only around one watt of power.
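
As a rough idea of how developers target the stick, here is a minimal inference sketch assuming the NCSDK v1 Python bindings ("mvnc") and a network already compiled to a binary "graph" file with Movidius' mvNCCompile tool; the call names are recalled from that SDK and should be treated as assumptions rather than a definitive example.

    import numpy as np
    from mvnc import mvncapi as mvnc  # NCSDK v1 Python API (assumed installed)

    device_names = mvnc.EnumerateDevices()             # find attached Compute Sticks
    device = mvnc.Device(device_names[0])
    device.OpenDevice()

    with open('graph', 'rb') as f:                     # pre-compiled network blob
        graph_blob = f.read()
    graph = device.AllocateGraph(graph_blob)           # load the network onto the VPU

    image = np.random.rand(224, 224, 3).astype(np.float16)  # stand-in input image
    graph.LoadTensor(image, 'user object')             # run inference on the stick
    output, _ = graph.GetResult()                      # fetch the result back over USB
    print(output.argmax())

    graph.DeallocateGraph()
    device.CloseDevice()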


Microsoft's machine learning can predict injuries in sports

Engadget

Today, the company introduced its new Sports Performance Platform, an analytics system that aims to help teams track, improve and predict their players' performance using machine learning and Surface technology. Microsoft's Sports Performance Platform can, for example, figure out when a player is at risk of injury based on his or her most recent performance and recovery time. The company says one of the main benefits of its sports analytics tool is that it's powered by proprietary business tools such as Power BI, a cloud-based business intelligence suite also used with products like Excel, as well as Azure and, of course, Surface computers. Professional teams such as Seattle Reign FC (US, National Women's Soccer League) and Real Sociedad (Spain, La Liga) are already taking advantage of the Sports Performance Platform.
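
To make the idea concrete, here is a hypothetical sketch of predicting injury risk from recent workload and recovery features; the feature names, synthetic data and logistic-regression model are illustrative assumptions, not Microsoft's actual platform.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical features per player-week: minutes played, sprint distance (km),
    # hours of sleep per night, days since last match.
    X = rng.normal(loc=[300, 8.0, 7.0, 4.0], scale=[60, 2.0, 1.0, 2.0], size=(500, 4))

    # Synthetic labels: higher load and worse recovery raise the chance of injury.
    risk = 0.01 * X[:, 0] + 0.3 * X[:, 1] - 0.5 * X[:, 2] - 0.4 * X[:, 3]
    y = (risk + rng.normal(0, 1, 500) > np.median(risk)).astype(int)

    model = LogisticRegression().fit(X, y)

    # Flag a player coming off a heavy week with little recovery.
    new_week = np.array([[380, 11.5, 5.5, 2.0]])
    print(f"Estimated injury risk: {model.predict_proba(new_week)[0, 1]:.2f}")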


Google's neural network is a multi-tasking pro

Engadget

Trying to train a neural network to do an additional task usually makes it much worse at its original one. The company's multi-tasking machine learning system, called MultiModel, was able to learn how to detect objects in images, provide captions, recognize speech, translate between four pairs of languages and parse grammar and syntax. In a blog post, the company said, "It is not only possible to achieve good performance while training jointly on multiple tasks, but on tasks with limited quantities of data, the performance actually improves. To our surprise, this happens even if the tasks come from different domains that would appear to have little in common, e.g., an image recognition task can improve performance on a language task."
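
The underlying recipe -- one shared model updated by several tasks at once -- can be sketched generically; the sizes, tasks and PyTorch code below are an assumed illustration of joint multi-task training, not Google's MultiModel architecture.

    import torch
    import torch.nn as nn

    class SharedMultiTaskNet(nn.Module):
        """One shared trunk feeding a separate head per task."""
        def __init__(self, input_dim=128, hidden_dim=256, task_output_dims=(10, 5)):
            super().__init__()
            # Shared parameters receive gradient updates from every task.
            self.trunk = nn.Sequential(
                nn.Linear(input_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            )
            self.heads = nn.ModuleList(
                [nn.Linear(hidden_dim, out_dim) for out_dim in task_output_dims])

        def forward(self, x, task_id):
            return self.heads[task_id](self.trunk(x))

    model = SharedMultiTaskNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Alternate batches from each task; a data-poor task can still benefit because
    # the shared trunk keeps learning from the data-rich one.
    for step in range(100):
        task_id = step % 2
        num_classes = model.heads[task_id].out_features
        x = torch.randn(32, 128)                     # stand-in batch of features
        y = torch.randint(0, num_classes, (32,))     # stand-in labels for this task
        loss = loss_fn(model(x, task_id), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()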


Bixby's voice features aren't finished, but US users can test them

Engadget

While the company still hasn't locked down when Bixby's voice search and control features will go live, it just confirmed that brave users in the US can enroll in an "early preview test" to get a taste of what's coming down the pipeline. If you happen to make the cut, Samsung will collect information about Bixby's performance on your device, and may ask you for direct feedback. Samsung originally said that Bixby's voice features would launch sometime in spring 2017, which is basically already over -- considering the amount of time it'll take to collect Bixby feedback and performance data from all these tester devices, it seems likely that the wait for a more complete Samsung assistant will be even longer than we expected. In the days leading up to the S8/S8 Plus launch, company spokespeople said the goal was to build a voice interface that could control those phones as effectively as one could by using the touchscreen.


Lyft and nuTonomy aim to improve self-driving car comfort

Engadget

Ride service companies like Uber and Lyft are focused on the technology of self-driving cars, but what about everything else? Lyft and nuTonomy will be doing R&D in the Boston area at the Raymond L. Flynn Marine Park and in the nearby Seaport and Fort Point neighborhoods. During trials, "an engineer from nuTonomy rides in each of its vehicles during testing to observe system performance and assume control if needed," the company said. Following initial trials, Lyft and nuTonomy could expand to gather even more data and learn "about the ideal function, performance and features of an autonomous mobility-on-demand service," they say.


ARM's new mobile processors are built for AI on the go

Engadget

When ARM showed up at Computex last year, it brought a bundle of smartphone processors that pushed for better mobile VR. First up is the Cortex-A75 CPU core, which the company says can deliver laptop-level performance without burning through any more power than existing mobile processors. ARM is promising a 50 percent boost in performance compared to the older A73 core, which should lend itself well to machine learning processes that run right on your devices. ARM also has a new GPU, the Mali-G72: not only is it more power efficient than the outgoing G71, ARM says it's 17 percent more efficient at machine learning processes than the part it replaces.


Disney's projection tech turns actors' faces into nightmare fuel

Engadget

Disney is using a new projection system to transform the appearance of actors during live performances, tracking facial expressions and "painting" them with light rather than physical makeup. Called Makeup Lamps, the system was developed by a team at Disney Research, and it could potentially change the way stage makeup is used in future theater productions. Makeup Lamps tracks an actor's movements without using the facial markers common in motion capture, then displays any color or texture the actor wants by adjusting the lighting. "We've seen astounding advances in recent years in capturing facial performances of actors and transferring those expressions to virtual characters," said Markus Gross, vice president at Disney Research.
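
A very rough way to picture the pipeline -- track the face, then recolor it with light -- is sketched below using OpenCV's stock Haar-cascade face detector and an on-screen tint standing in for the projector; this is only an assumed illustration, not Disney's markerless Makeup Lamps system.

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # live camera feed

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            overlay = frame.copy()
            cv2.rectangle(overlay, (x, y), (x + w, y + h), (0, 0, 255), -1)
            # Blend a red "makeup" layer onto the tracked face region.
            frame[y:y+h, x:x+w] = cv2.addWeighted(
                overlay, 0.3, frame, 0.7, 0)[y:y+h, x:x+w]
        cv2.imshow("virtual makeup", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()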