If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
"There is a fundamental disconnect between what we roboticists say and what the public perceives," says Ian Reid, deputy director of the Australian Centre for Robotic Vision, in Brisbane. And that leads to the heart of the problem, and what researchers mean when they talk about "robotic vision": using cameras to guide robots to carry out tasks in increasingly uncontrolled environments. Is this another of Ian Reid's "disconnects" between the research world and the public's sci-fi driven expectations? "In rich countries like Japan where there are also demographic challenges, you will see a big increase in social robotics – in aged, robotic companions and robotic pets," Mahony predicts.
Many devices already feature some form of machine learning; mobile processors like Qualcomm's Snapdragon 835 leverage it to extend and expand the boundaries of mobile performance, and it already powers devices such as ODG's R-8 and R-9 smart glasses. The technology can make your device more energy efficient, enable and improve virtual and augmented reality experiences, provide smarter camera functionality, improve device security, and, of course, allow for better audio connections. Artificial intelligence and machine learning can aid and improve all sorts of functions and processes in myriad specific ways, but as it concerns you, the user, your phone will simply do everything that you need it to -- faster, better, and with greater efficiency.
Amazon's Alexa voice platform has now passed 15,000 skills -- the voice-powered apps that run on devices like the Echo speaker, Echo Dot, and the newer Echo Show. By comparison, Google Home had just 378 voice apps available as of June 30, Voicebot notes. Amazon is building out an entire voice app ecosystem so quickly that it hasn't yet implemented the usual safeguards -- a team that closely inspects apps for terms-of-service violations, for example, or tools that allow developers to make money from their creations. In the long run, that focus on growth over app ecosystem infrastructure could catch up with it.
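For a sense of what a "skill" actually is under the hood, here is a hedged, bare-bones sketch: a skill is essentially a cloud endpoint (commonly an AWS Lambda function) that receives Alexa's JSON request and returns JSON telling the device what to say. Real skills would use Amazon's ASK SDK and branch on the request type; the greeting text here is our own invention.

```python
# A minimal Alexa skill backend: an AWS Lambda handler that receives
# Alexa's JSON request and returns JSON speech in the documented
# response format. Every request type gets the same reply, for brevity.
def lambda_handler(event, context):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Hello! This is a minimal Alexa skill.",
            },
            "shouldEndSession": True,
        },
    }
```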
Google wants to bring deep learning to more developers, so it has unveiled a family of mobile AI vision models called MobileNets. The tech is part of TensorFlow, Google's deep learning framework, which recently shrunk down to mobile size in a new version called TensorFlow Lite. The larger the model, the better it is at recognizing landmarks, faces, or doggos, with the most CPU-intensive ones hitting between 70.7 and 89.5 percent accuracy. That isn't far off Google's cloud-based AI, which can recognize and caption objects with around 94 percent accuracy, last we checked.
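To make that size-versus-accuracy tradeoff concrete, here is a hedged sketch assuming a current TensorFlow 2.x install (the article predates TF Lite's finished API, so this uses today's equivalent calls): it loads a slimmed-down pretrained MobileNet and converts it to a TensorFlow Lite file ready for mobile deployment.

```python
# Load a small pretrained MobileNet and export it for on-device use.
# The alpha parameter shrinks the network's width, trading accuracy
# for a smaller, faster model.
import numpy as np
import tensorflow as tf

# alpha=0.25 is one of the smallest published MobileNet variants.
model = tf.keras.applications.MobileNet(alpha=0.25, weights="imagenet")

# Classify a dummy 224x224 RGB image (pixel values in 0..255).
image = np.random.uniform(0, 255, (1, 224, 224, 3)).astype("float32")
preds = model.predict(tf.keras.applications.mobilenet.preprocess_input(image))
print(tf.keras.applications.mobilenet.decode_predictions(preds, top=3))

# Convert the Keras model to a TensorFlow Lite flat buffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("mobilenet_025.tflite", "wb") as f:
    f.write(converter.convert())
```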
Federighi announced new APIs that help coders building apps for Apple devices do things like recognize faces or animals in photos, or parse the meaning of text. The reasoning goes that if you can make your phones, operating system, or cloud the best place to build smart new software that leverages AI, more users and revenue will follow. Federighi boasted, for example, that Apple's new tools let developers run machine learning on data without it ever leaving a person's device, which brings performance and privacy benefits. On the other hand, a company that needs to run image recognition inside apps on both Apple and Android devices might prefer Google's cloud machine learning APIs instead.
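To illustrate the on-device approach Federighi described, here is a hedged sketch using Apple's coremltools Python package: it trains a trivial scikit-learn model and exports a .mlmodel file that an iOS app could then run entirely locally via Core ML. The training data, feature names, and file name are all made up for illustration.

```python
# Export a tiny scikit-learn model to Core ML's .mlmodel format,
# which iOS apps execute on-device rather than in the cloud.
import coremltools
from sklearn.linear_model import LinearRegression

# Made-up training data: square footage -> price in dollars.
X = [[800], [1200], [1500], [2000]]
y = [150_000, 220_000, 260_000, 350_000]
sk_model = LinearRegression().fit(X, y)

# "sqft" and "price" are illustrative feature names, not Apple's.
ml_model = coremltools.converters.sklearn.convert(sk_model, ["sqft"], "price")
ml_model.save("HousePricer.mlmodel")  # drop into an Xcode project
```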
In other words, by giving our algorithm examples of apples and oranges to learn from, it can generalize its experience to images of apples and oranges that it has never encountered before. This type of machine learning -- drawing lines to separate data -- is just one subfield of machine learning, called classification. Another is regression: predicting a number rather than a category. In our example of predicting house prices based on square footage, we're only considering one variable, so our model needs just one input feature x, and the model is simply a line: price = c1 * x + c2. Square footage is a good predictor of house prices, so our algorithm should give it a lot of consideration by increasing the coefficient c1 associated with it. Now the question becomes: how does a machine learning algorithm choose c1 and c2 so that the line best predicts house prices?
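One common answer, sketched below with made-up data: pick the c1 and c2 that minimize the squared error between predicted and actual prices, nudging both coefficients downhill with gradient descent.

```python
# Fit the line price = c1 * sqft + c2 by gradient descent on the
# mean squared error. The four data points are made up for illustration.
import numpy as np

sqft = np.array([800.0, 1200.0, 1500.0, 2000.0])
price = np.array([150.0, 220.0, 260.0, 350.0])  # in thousands of dollars

c1, c2 = 0.0, 0.0   # slope and intercept, starting from zero
lr = 1e-7           # a small learning rate keeps unscaled features stable

for _ in range(200_000):
    error = (c1 * sqft + c2) - price
    # Partial derivatives of the mean squared error w.r.t. c1 and c2.
    c1 -= lr * 2 * np.mean(error * sqft)
    c2 -= lr * 2 * np.mean(error)

# Without feature scaling the intercept converges slowly, but the
# result is close enough to the exact least-squares line to illustrate.
print(f"price = {c1:.4f} * sqft + {c2:.2f}")
```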
But to keep its devices competitive, Apple is reportedly building a secondary mobile processor dedicated to powering AI. The tech titan's devices currently split AI tasks between two chips -- the main processor and the GPU -- but the new one, allegedly known internally as the Apple Neural Engine, would have its own module dedicated to AI requests. The current arrangement leaves Apple behind Qualcomm's latest Snapdragon mobile chips, which already have a dedicated AI module, and Google's Tensor Processing Units, available in its Cloud Platform to do AI heavy lifting. Unlike the company's differential privacy methods, which protect data sent to Apple's servers, the Neural Engine would let devices sift through data on their own -- faster and easier on the battery, just as the M7 motion coprocessor did back in 2013.
The biggest hardware and software arrival since the iPad in 2010 has been Amazon's Echo voice-controlled intelligent speaker, powered by its Alexa software assistant. But just because you're not seeing amazing new consumer tech products on Amazon, in the app stores, or at the Apple Store or Best Buy doesn't mean the tech revolution is stuck or stopped. It is advancing on several fronts at once: artificial intelligence / machine learning, augmented reality, virtual reality, robotics and drones, smart homes, self-driving cars, and digital health / wearables. Google, for its part, has changed its entire corporate mission to be "AI first" and, with Google Home and Google Assistant, aims to perform tasks via voice commands and eventually hold real, unstructured conversations.
Apple's secretive self-driving car project just got a little less mysterious. The Cupertino-based company was granted a permit by the California DMV in April to test cars on public roads, but the details on just what exactly it had planned were few and far between. We knew the permit applied to three self-driving Lexus RX450h SUVs, but not much more. Thanks to documents obtained by Business Insider, we now have a preliminary -- emphasis on preliminary -- look at how Apple intends to challenge Uber and Google in the race for self-driving dominance. The documents refer to something called the "Apple Automated System," and note that Apple uses both a Logitech wheel and pedals to operate the remote driving system.
I am sure that by now you have heard the phrase thrown around quite a lot, mostly by venture capitalists: "Artificial Intelligence (AI) is the new mobile." The phrase has echoed through the tech industry to emphasize that AI is not a short-lived fad but a revolution on the scale of mobile. More importantly, they seem to be right: over the last five years, giant tech companies have been pouring money into the technology. In fact, over 200 private companies using AI algorithms across different verticals have been acquired since 2012, with over 30 acquisitions taking place in Q1'17 alone. The competition to acquire AI startups is getting feisty, too.