Making robots see

#artificialintelligence

"There is a fundamental disconnect between what we roboticists say and what the public perceives," says Ian Reid, deputy director of the Australian Centre for Robotic Vision, in Brisbane. And that leads to the heart of the problem, and what researchers mean when they talk about "robotic vision": using cameras to guide robots to carry out tasks in increasingly uncontrolled environments. Is this another of Ian Reid's "disconnects" between the research world and the public's sci-fi driven expectations? "In rich countries like Japan where there are also demographic challenges, you will see a big increase in social robotics – in aged, robotic companions and robotic pets," Mahony predicts.


Machine learning in smartphones

#artificialintelligence

This technology will make your device more energy efficient, enable and improve virtual and augmented reality experiences, provide smarter camera functionality, improve device security, and, of course, allow for better audio connections. Mobile processors like Qualcomm's Snapdragon 835 leverage machine learning to push the boundaries of mobile performance. Yes, artificial intelligence and machine learning can aid and improve all sorts of functions and processes in minute and specific ways, but as it concerns you, the user, your phone will simply do everything you need it to – faster, better, and with greater efficiency. Many devices already feature some form of machine learning (those with the Snapdragon 835 mobile processor, for example, like ODG's R-8 and R-9 smart glasses).


Amazon's Alexa passes 15,000 skills, up from 10,000 in February

#artificialintelligence

Amazon's Alexa voice platform has now passed 15,000 skills -- the voice-powered apps that run on devices like the Echo speaker, Echo Dot, newer Echo Show, and others. Meanwhile, Amazon's Alexa is surging ahead, building out an entire voice app ecosystem so quickly that it hasn't even implemented the usual safeguards -- like a team that closely inspects apps for terms-of-service violations, or tools that let developers make money from their creations. In the long run, Amazon's focus on growth over app ecosystem infrastructure could catch up with it. By comparison, Google Home had just 378 voice apps available as of June 30, Voicebot notes.


Google wants to speed up image recognition in mobile apps

Engadget

Google wants to spread deep learning to more developers, so it has unveiled a family of mobile AI vision models called MobileNets. The tech is part of TensorFlow, Google's deep learning framework, which recently shrank down to mobile size in a new version called TensorFlow Lite. The larger the model, the better it is at recognizing landmarks, faces or doggos, with the most CPU-intensive ones hitting between 70.7 and 89.5 percent accuracy. Those numbers aren't far from Google's cloud-based AI, which can recognize and caption objects with around 94 percent accuracy, last we checked.
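
For readers who want to poke at one of these models, a pretrained MobileNet is exposed through the standard Keras API. Here is a minimal sketch, assuming a recent TensorFlow install; the image path "dog.jpg" is a placeholder, and the ImageNet weights download on first use:

```python
# Classify one image with a pretrained MobileNet (a small, mobile-friendly CNN).
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNet(weights="imagenet")

# "dog.jpg" is a stand-in for any local image; MobileNet expects 224x224 input.
img = tf.keras.preprocessing.image.load_img("dog.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
x = tf.keras.applications.mobilenet.preprocess_input(x)  # scale pixels to [-1, 1]

preds = model.predict(x)
for _, label, score in tf.keras.applications.mobilenet.decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2%}")
```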


Apple Just Joined Tech's Great Race to Democratize AI

#artificialintelligence

Apple software chief Craig Federighi announced new APIs that help coders building apps for Apple devices do things like recognize faces or animals in photos, or parse the meaning of text. The reasoning goes that if you can make your phones, operating system, or cloud the best place to build smart new software that leverages AI, more users and revenue will follow. For example, Federighi boasted that Apple's new tools let developers run machine learning on data without it ever leaving a person's device, giving performance and privacy benefits. A company that needs to run image recognition inside apps on both Apple and Android devices might prefer Google's cloud machine learning APIs instead, for example.
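
The on-device pitch comes down to shipping the trained model inside the app. As a rough, hedged sketch (the tiny Keras model and file name are stand-ins, not Apple's example, and coremltools details vary by version), Apple's coremltools Python package converts a trained model into a Core ML file that Xcode bundles into an app:

```python
# Convert a trained Keras model to Core ML so inference runs on-device.
# The model below is a toy stand-in for a real trained classifier.
import coremltools as ct
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. cat vs. dog
])

# Target the classic neuralnetwork format so we can save a .mlmodel file.
mlmodel = ct.convert(model, convert_to="neuralnetwork")
mlmodel.save("PetClassifier.mlmodel")  # drop this file into an Xcode project
```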


Machine Learning Crash Course: Part 1

@machinelearnbot

In other words, by giving our algorithm examples of apples and oranges to learn from, it can generalize its experience to images of apples and oranges that it has never encountered before. This type of machine learning -- drawing lines to separate data -- is just one subfield of machine learning, called classification. For example, square footage is a good predictor of house prices, so our algorithm should give square footage a lot of consideration by increasing the coefficient associated with it. In our example of predicting house prices from square footage, since we're only considering one variable, our model needs only one input feature, or just one x: the model is simply a line, y = c1*x + c2. Now the question becomes: how does a machine learning algorithm choose c2 and c1 so that the line best predicts house prices?
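
One standard answer is ordinary least squares: pick the c1 and c2 that minimize the squared error between predicted and actual prices. A minimal sketch, using invented square-footage data rather than anything from the course:

```python
# Fit y = c1*x + c2 by ordinary least squares on toy data.
# The square footages and prices below are made up for illustration.
import numpy as np

sqft = np.array([850, 1200, 1500, 2100, 2600], dtype=float)
price = np.array([120_000, 175_000, 210_000, 300_000, 360_000], dtype=float)

# np.polyfit returns [slope, intercept] for a degree-1 polynomial.
c1, c2 = np.polyfit(sqft, price, deg=1)
print(f"price = {c1:.1f} * sqft + {c2:.1f}")

# Generalize to an unseen 1800 sq ft house.
print(f"predicted: ${c1 * 1800 + c2:,.0f}")
```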


Apple 'Neural Engine' chip could power AI on iPhones

Engadget

But to keep its devices competitive, Apple is building a secondary mobile processor dedicated to powering AI. The tech titan's devices currently split AI tasks between two chips -- the main processor and a GPU -- but this new one, allegedly known internally as the Apple Neural Engine, has its own module dedicated to AI requests. For now, that leaves Apple behind Qualcomm's latest Snapdragon mobile chips, which already have a dedicated AI module, and behind Google's Tensor Processing Units, available in its Cloud Platform to do AI heavy lifting. Unlike the company's differential privacy methods, which protect data sent to Apple's servers, the Neural Engine would let devices sift through data on their own, which would be faster and easier on the battery, just as the M7 motion coprocessor was for motion tracking back in 2013.


Mossberg: The Disappearing Computer

#artificialintelligence

The biggest hardware and software arrival since the iPad in 2010 has been Amazon's Echo voice-controlled intelligent speaker, powered by its Alexa software assistant. But just because you're not seeing amazing new consumer tech products on Amazon, in the app stores, or at the Apple Store or Best Buy, that doesn't mean the tech revolution is stuck or stopped. The major building blocks of the next wave are: artificial intelligence / machine learning, augmented reality, virtual reality, robotics and drones, smart homes, self-driving cars, and digital health / wearables. Google has changed its entire corporate mission to be "AI first" and, with Google Home and Google Assistant, aims to perform tasks via voice commands and eventually hold real, unstructured conversations.


This chart illustrates how AI is exploding at Google

#artificialintelligence

These are some of the most elite academic journals in the world. And last year, one tech company, Alphabet's Google, published papers in all of them. The unprecedented run of scientific results by the Mountain View search giant touched on everything from ophthalmology to computer games to neuroscience and climate models. For Google, 2016 was an annus mirabilis during which its researchers cracked the top journals and set records for sheer volume. Behind the surge is Google's growing investment in artificial intelligence, particularly "deep learning," a technique whose ability to make sense of images and other data is enhancing services like search and translation (see "10 Breakthrough Technologies 2013: Deep Learning").

