If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Loquacious intelligent assistants have become a standard fixture of consumer devices, such as cell phones and smartwatches. These are harbingers of the accelerating osmosis of AI into everyday life. While charming, current implementations are pale imitations of what's coming. With most of the intelligence happening on cloud server farms, today's products are more like a ventriloquist's dummy, parroting responses from the real brains behind the curtain; smart, but limited. The emergence of Face ID, Apple's wondrous new biometric authentication system that uses facial recognition backed by an array of sensors and a new AI-accelerated iPhone SoC, marks the beginning of the second stage of embedded AI in which more of the intelligence happens on the device, independent of the cloud.
We talk about artificial intelligence (AI), robots, and machine learning as if they're some far-off tech pipe dream. But that future is not a century from now; it's not even a decade. It's just three short years away. That can either terrify you if you've seen too many sci-fi films, or excite you if you consider the upside and benefits it could yield. The reality probably lies somewhere in the middle.
Deep learning is fueling breakthroughs in everything from consumer mobile apps to image recognition. Yet running deep learning-based AI models poses many challenges, and one of the most difficult roadblocks is the time it takes to train the models -- often measured in days, sometimes weeks. The need to crunch lots of data, combined with the computational complexity of building these models, slows progress in accuracy and makes deploying deep learning at scale less practical.
Ahead of Amazon's big AWS division Re:invent conference next week, the company has announced two developments in the area of artificial intelligence. AWS is opening a machine learning lab, ML Solutions Lab, to pair Amazon machine learning experts with customers looking to build solutions using the AI tech. And it's releasing new features within Amazon Rekognition, Amazon's deep learning-based image recognition platform: real-time face recognition and the ability to recognize text in images. The new lab and the enhancements to its image recognition platform underscore the push that Amazon and AWS are giving to AI, both internally and as a potential area to grow the company's B2B business. They come about a month after AWS announced it would be collaborating with Microsoft on Gluon, a deep learning interface designed for developers to build and run machine learning models for their apps and other services.
Imagine if something not designed with you or anyone like you in mind was the driving force behind the routine interactions that permeate your life. Imagine it controls what products are marketed to you and how you can use certain consumer products (or not), influences your interactions with law enforcement, and even determines your health care diagnoses and medical decisions. There are problems brewing at the core of artificial intelligence and machine learning (ML). AI algorithms are essentially opinions embedded in code. AI can create, formalize, or exacerbate biases by not including diverse perspectives during ideation, testing, and implementation.
Doctors work long hours, and a disturbingly large part of that is documenting patient visits -- one study indicates that they spend 6 hours of an 11-hour day making sure their records are up to snuff. But how do you streamline that work without hiring an army of note takers? Google Brain and Stanford think voice recognition is the answer. They recently partnered on a study that used automatic speech recognition (similar to what you'd find in Google Assistant or Google Translate) to transcribe both doctors and patients during a session. The approach can not only distinguish the voices in the room, but also the subjects.
Deep learning has been widely successful in solving complex tasks such as image recognition (ImageNet), speech recognition, and machine translation. In the area of personalized recommender systems, deep learning has started showing promising advances in recent years. The key to its success in this area is its ability to learn distributed representations of users' and items' attributes in a low-dimensional dense vector space and combine these to recommend relevant items to users. To address scalability, web-scale recommendation systems often borrow components from information retrieval, such as inverted indexes (where a query is constructed from a user's attributes and context) and learning-to-rank techniques, alongside machine learning models -- such as collaborative filtering -- that predict the relevance of items.
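The core idea of distributed representations can be sketched in a few lines: score every item for a user by the dot product of their dense vectors and return the top-k. This is an illustrative toy with random vectors standing in for learned embeddings, not any specific production system; the names and dimensions are assumptions for the example.

```python
import numpy as np

# Toy sketch of embedding-based recommendation: users and items live in the
# same low-dimensional dense vector space, and relevance is the dot product.
rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 10, 8

user_vecs = rng.normal(size=(n_users, dim))   # stand-ins for learned user representations
item_vecs = rng.normal(size=(n_items, dim))   # stand-ins for learned item representations

def recommend(user_id, k=3):
    """Return indices of the k items with the highest dot-product score."""
    scores = item_vecs @ user_vecs[user_id]   # relevance of every item to this user
    return np.argsort(scores)[::-1][:k]      # sort descending, keep top k

print(recommend(0))  # the three highest-scoring item indices for user 0
```

At web scale, scoring every item this way is too slow, which is why real systems front this step with inverted indexes or approximate nearest-neighbor retrieval before re-ranking a small candidate set.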
The words "artificial intelligence" often conjure up a sense of fear and apprehension. Fear for the unknown possibilities of AI, fear for the AI-fueled dystopian images brought about by movies like The Terminator, and most practically, fear for the possibility that AI will someday take our jobs. This fear is neither new nor totally unfounded. As with any disruptive technological invention, faster, more efficient machines are bound to replace human workers. However, those who fear AI will take their jobs can rest a little easier knowing they will at least have the potential to find a new job.
It's a Saturday morning in June at the Royal Society in London. Computer scientists, public figures and reporters have gathered to witness or take part in a decades-old challenge. Some of the participants are flesh and blood; others are silicon and binary. Thirty human judges sit down at computer terminals and begin chatting, trying to determine whether they're talking to a computer program or a real person.
Vinci 2.0 is a standalone computing device with a Quad-Core ARM Cortex A-7 processor and WiFi, 3G cellular (SIM card built-in), and Bluetooth connectivity. You can ask Vinci to make a call, send a text message, set a reminder, or give you directions. No phone is required, so you can carry less and work out more. Vinci 2.0 can also receive push notifications directly from your phone no matter how far away you are from it. Whether you are lifting weights, jogging, or cycling, just ask Vinci for your favorite songs, request songs by specific genres or moods, or let Vinci recommend a song for you; 20 languages are supported.