If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Last week, Qualcomm announced the Snapdragon 845, which sends AI tasks to the most suitable cores. There's not a lot of difference between the three companies' approaches -- it ultimately boils down to the level of access each company offers to developers, and how much power each setup consumes. Before we get into that, though, let's figure out if an AI chip is really all that much different from existing CPUs. A term you'll hear a lot in the industry lately with reference to AI is "heterogeneous computing." It refers to systems that use multiple types of processors, each with specialized functions, to gain performance or save energy.
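The idea behind heterogeneous computing can be sketched in a few lines: a scheduler looks at the kind of work to be done and routes it to whichever processor handles that workload most cheaply. The processor names and the cost table below are invented for illustration; real chipset schedulers use far richer heuristics.

```python
# Toy heterogeneous-computing dispatcher. The cost table is hypothetical:
# lower numbers mean a processor type is better suited to that workload.
PROCESSORS = {
    "cpu": {"matrix_multiply": 8.0, "branchy_logic": 1.0, "convolution": 9.0},
    "gpu": {"matrix_multiply": 1.5, "branchy_logic": 6.0, "convolution": 2.0},
    "npu": {"matrix_multiply": 1.0, "branchy_logic": 7.0, "convolution": 0.5},
}

def dispatch(task):
    """Pick the processor with the lowest (assumed) cost for this task."""
    return min(PROCESSORS, key=lambda p: PROCESSORS[p][task])

print(dispatch("convolution"))    # NPU-style cores win on neural-net ops
print(dispatch("branchy_logic"))  # the general-purpose CPU wins on control flow
```

The payoff is the one the article describes: neural-network operations land on the dedicated unit, while general-purpose work stays on the CPU, gaining performance or saving energy depending on the workload.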
Tech's biggest players have fully embraced the AI revolution. Apple, Qualcomm and Huawei have made mobile chipsets that are designed to better tackle machine learning tasks, each with a slightly different approach. Huawei launched its Kirin 970 at IFA this year, calling it the first chipset with a dedicated neural processing unit (NPU). Then, Apple unveiled the A11 Bionic chip, which powers the iPhone 8, 8 Plus and X. The A11 Bionic features a neural engine that the company says is "purpose-built for machine learning," among other things.
An engineer who's learning about using convolutional neural networks for image classification just asked me an interesting question: how does a model know how to recognize objects in different positions in an image? Since this actually requires quite a lot of explanation, I decided to write up my notes here in case they help some other people too. Here are two example images showing the problem that my friend was referring to: If you're trying to recognize all images with the sun shape in them, how do you make sure that the model works even if the sun can be at any position in the image? It's an interesting problem because there are really three stages of enlightenment in how you perceive it. My friend is at the third stage of enlightenment, but is smart enough to realize that there are few accessible explanations of why CNNs cope so well. I don't claim to have any novel insights myself, but over the last few years of working with image models I have picked up some ideas from experience, and heard folklore passed down through the academic family tree, so I want to share what I know.
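The core of the answer can be shown with a minimal sketch (the image sizes and the "sun" pattern are invented for the example): a convolutional layer slides one small filter over the whole image, so a matching pattern produces the same peak response wherever it appears. The filter's weights are shared across positions, which is what gives CNNs their tolerance to translation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation in a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Same kernel weights applied at every position (weight sharing).
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "sun" pattern used both as the object and as the learned filter.
sun = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]], dtype=float)

img_a = np.zeros((8, 8)); img_a[0:3, 0:3] = sun  # sun in the top-left
img_b = np.zeros((8, 8)); img_b[4:7, 5:8] = sun  # sun in the bottom-right

# The peak activation is identical in both images; only its location differs.
print(conv2d(img_a, sun).max(), conv2d(img_b, sun).max())  # 5.0 5.0
```

Because the response map preserves *where* the match happened, later layers (or pooling) can either use or discard that location information as the task requires.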
Samsung isn't going to let Apple hog the smart speaker limelight early next year. A new report reveals the South Korean giant is preparing to introduce its Bixby-powered speaker in the first half of 2018, so it is now expected to go head-to-head with the Cupertino giant's HomePod. On Thursday, Bloomberg learned from people familiar with Samsung's plans that the South Korean tech company is planning to launch its Bixby-powered smart speaker in the first half of the following year. While this may seem like Samsung's entry to the voice-controlled devices market is coming in late, its release would be just in time for it to rival Apple's HomePod, which is already scheduled for launch in the first quarter. Since the Bixby-powered speaker will compete not only with the HomePod but also with established smart speakers like Amazon's Echo series and Google's Google Home, Samsung is setting its new product apart from its rivals by promising a device that's strongly focused on audio quality and efficient management of connected home appliances.
Google is rolling out its personal AI Google Assistant to users with phones running older versions of its Android OS, finally giving more people the voice-controlled smart features that are becoming a hallmark of the platform. The company announced the AI will soon be available to users running the 5.0 Lollipop version of Android OS. Tablet owners will get in on the action if they're on 7.0 Nougat or 6.0 Marshmallow. Lollipop phone users have already begun to see the update in the U.S. and other countries if English or Spanish is their default language, and the feature is also rolling out in Italy, Japan, Germany, Brazil, and Korea. Eligible English language tablet users in the U.S. will begin receiving the update "over the coming week."
Tokyo: Jeffrey (Jeff) Dean is a Google senior fellow who leads the company's artificial intelligence (AI) research project, Google Brain. Along with his team, Dean, who joined Google in 1999, is currently implementing the company's vision as articulated by chief executive Sundar Pichai--to build an "AI-first" world. In an interview on the sidelines of a "Google #MadewithAI" event, held recently in Tokyo, Dean explains what this vision encompasses and the challenges involved in implementing it. What are the major steps involved in this process of implementing the Google strategy of building an AI-first world? The steps involve making products that are useful, help others innovate and solve humanity's big challenges.
Google may be the household name when it comes to search, but Microsoft is hoping it can make its Bing search engine the smartest. The Redmond, Wash.-based company has announced a handful of new features that it says are powered by artificial intelligence. The updates will start rolling out on Wednesday and will continue over the coming week. The biggest changes enable Bing to be smarter about the information it chooses to display above search results in response to a query. The search engine will now be able to pull information from multiple sources, rather than just one.
Google on Wednesday announced Google Assistant is now available for Android tablets running Android 7.0 Nougat and 6.0 Marshmallow. Assistant was previously only available on some Android phones, Android Wear, Google Home, and an iOS app. Google also said its virtual personal assistant is expanding to phones running Android 5.0 Lollipop. Google Assistant enables users to search the internet, schedule events and alarms, adjust hardware settings, show information from their Google account, and more, with their voice. Android tablet users with their language set to English in the US will see Google Assistant hit their tablet in the coming weeks, Google said.
Google Assistant has been available on recent Android phones for a while. However, that still puts it out of reach of many Android users, given that a whopping 46.5 percent of active Android users are running a version older than Marshmallow. To help address this, Google is making Assistant available on devices running Android Lollipop. If you're still rocking an older phone, you'll get the same AI helper as a shiny new handset. The update is starting to reach phones with English language settings in the US, UK, Australia, Canada, India and Singapore.