If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Amazon unveiled a long-awaited home robot on Tuesday, and it may or may not be a good boy. Like an extremely advanced puppy, "Astro" is designed to move around the home and assist its owner with small tasks like checking whether the stove is on, playing music, and delivering drinks. The robot can also recognize the faces of certain people and is equipped with a periscope camera that it can raise to get a better view of its surroundings. Amazon says it will be available sometime later this year on an invite-only basis for $999.99. Astro weighs about 20 pounds and stands about two feet tall, roughly the size of a small dog.
On Tuesday, Amazon announced Astro, a home robot the company says can help owners keep up with tasks such as home monitoring or keeping in touch with family and friends. "One of the things I love about working at Amazon is inventing the future, and I've spent a lot of time since that day on a team that's imagining how robots can help customers in new ways at home," said Charlie Tritschler, vice president of products at Amazon, in a blog post. It's available by invite only for $999.99. Astro ties into both Alexa and Ring, Amazon's line of home-security offerings. The robot can be set to roam your home autonomously and check that everything is safe.
This is largely because the price tag means Astro will remain a toy for the rich for some time to come. And while Amazon was keen to convince us that you get more than "Alexa on wheels" for your money, it was hard to see a compelling use case beyond curbing that ever-present paranoia that you left the oven on.
In 1984, Heathkit presented HERO Jr. as the first robot that could be used in households to perform a variety of tasks, such as guarding people's homes, setting reminders, and even playing games. Following this development, many companies launched affordable "smart robots" for the home. Some of these technologies, like Google Home, Amazon Echo, and Roomba, have become household staples; meanwhile, other products such as Jibo, Anki, and Kuri failed despite having all the necessary resources. Why were these robots shut down? The simple answer is that most of these personal robots did not work well, but this is not necessarily because we lack the technological capacity to build highly functional robots.
Now, some new breakthroughs by Facebook AI researchers suggest that home robots might one day do the hard work for us, reacting to simple commands such as "bring me my ringing phone." Virtual assistants as we know them are incapable of identifying a specific sound and then using it as a target for navigating across a space. While you could order a robot to "find my phone 25 feet southwest of you and bring it over", there is little an assistant can do if it is not told exactly where to go. To address this gap, Facebook's researchers built a new open-source tool called SoundSpaces, designed for so-called "embodied AI," a field of artificial intelligence concerned with fitting physical bodies, such as robots, with software and then training those systems in realistic environments. Instead of using static datasets, as most traditional AI methods do, embodied AI favors reinforcement learning, in which robots learn from their interactions with the real, physical world. In this case, SoundSpaces lets developers train virtual embodied AI systems in 3D environments representing indoor spaces, such as a two-story house or an office floor, with highly realistic acoustics that can simulate any sound source.
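The core idea here, an agent learning from interaction rather than from a static dataset, can be illustrated with a toy example. The sketch below is a minimal tabular Q-learning agent in a one-dimensional corridor whose only observation is the loudness of a "ringing phone" at the goal; every name (`AudioCorridor`, `train`) is invented for illustration and has nothing to do with the actual SoundSpaces API:

```python
import random

class AudioCorridor:
    """Toy stand-in for an audio-goal navigation task: a 1-D corridor
    where the observation is the loudness of a phone at the goal cell."""

    def __init__(self, length=10, goal=9):
        self.length, self.goal = length, goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self._loudness()

    def _loudness(self):
        # Sound intensity falls off with distance to the source.
        return 1.0 / (1.0 + abs(self.goal - self.pos))

    def step(self, action):  # action: -1 (left) or +1 (right)
        self.pos = max(0, min(self.length - 1, self.pos + action))
        done = self.pos == self.goal
        reward = 1.0 if done else -0.01  # small step cost encourages speed
        return self._loudness(), reward, done

def train(env, episodes=300, eps=0.2, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning: the agent improves purely by acting in the
    environment, the hallmark of the reinforcement-learning approach."""
    rng = random.Random(seed)
    q = {}  # position -> {action: estimated value}
    for _ in range(episodes):
        env.reset()
        done, steps = False, 0
        while not done and steps < 500:
            s = env.pos
            q.setdefault(s, {-1: 0.0, 1: 0.0})
            if rng.random() < eps:           # explore
                a = rng.choice([-1, 1])
            else:                            # exploit current estimates
                a = max(q[s], key=q[s].get)
            _, r, done = env.step(a)
            q.setdefault(env.pos, {-1: 0.0, 1: 0.0})
            q[s][a] += alpha * (r + gamma * max(q[env.pos].values()) - q[s][a])
            steps += 1
    return q

policy = train(AudioCorridor())
```

After training, following the greedy action in each state walks the agent straight to the sound source. Real embodied-AI systems replace the table with a neural network and the corridor with a simulated 3D scene, but the learning loop has the same shape.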
The promise of conversational AI is that, unlike virtually any other form of technology, all you have to do is talk. Natural language is the most natural and democratic form of communication. After all, humans are born capable of learning how to speak, but some never learn to read or use a graphical user interface. That's why AI researchers from Element AI, Stanford University, and CIFAR recommend academic researchers take steps to create more useful forms of AI that speak with people to get things done, including the elimination of existing benchmarks. "As many current [language user interface] benchmarks suffer from low ecological validity, we recommend researchers not to initiate incremental research projects on them. Benchmark-specific advances are less meaningful when it is unclear if they transfer to real LUI use cases. Instead, we suggest the community to focus on conceptual research ideas that can generalize well beyond the current datasets," the paper reads.
In a study published this week on the preprint server arXiv.org, researchers from Google and the University of California, Berkeley propose a framework that combines learning-based perception with model-based controls to enable wheeled robots to navigate autonomously around obstacles. They say it generalizes well to avoiding unseen buildings and humans in both simulated and real-world environments, and that it leads to better and more data-efficient behaviors than a purely learning-based approach. As the researchers explain, autonomous robot navigation could enable many critical applications, from service robots that deliver food and medicine to logistical and search robots for rescue missions. In these applications, it is imperative that robots work safely among humans and adjust their movements based on observed human behavior.
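The division of labor described above, a learned module proposing where to go and a classical controller deciding how to steer, can be sketched in miniature. In the toy below the waypoint is simply given (in the paper it would come from a learned perception network); the model-based half is a standard proportional controller for a unicycle-style wheeled robot. The gains and function names are illustrative assumptions, not details from the paper:

```python
import math

def unicycle_control(pos, heading, waypoint, k_v=0.5, k_w=1.5):
    """Model-based half: turn a waypoint into (linear, angular) velocity
    commands using simple proportional feedback on distance and bearing."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    dist = math.hypot(dx, dy)
    heading_err = math.atan2(dy, dx) - heading
    # Wrap the heading error into [-pi, pi].
    heading_err = (heading_err + math.pi) % (2 * math.pi) - math.pi
    v = k_v * dist * math.cos(heading_err)  # slow down when mis-aligned
    w = k_w * heading_err                   # turn toward the waypoint
    return v, w

def simulate(pos, heading, waypoint, dt=0.1, steps=200):
    """Integrate the unicycle kinematics under the controller."""
    x, y = pos
    for _ in range(steps):
        v, w = unicycle_control((x, y), heading, waypoint)
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        heading += w * dt
    return x, y
```

Because the controller is an explicit kinematic model rather than a learned black box, its behavior is predictable and data-free; only the waypoint-proposing perception module needs training, which is one plausible reading of why the hybrid approach is more data-efficient than end-to-end learning.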
AI models invariably encounter ambiguous situations that they struggle to respond to with instructions alone. That's problematic for autonomous agents tasked with, say, navigating an apartment, because they run the risk of becoming stuck when presented with several paths. To solve this, researchers at Amazon's Alexa AI division developed a framework that endows agents with the ability to ask for help in certain situations. Using what's called a model-confusion-based method, the agents ask questions based on their level of confusion as determined by a predefined confidence threshold, which the researchers claim boosts the agents' success by at least 15%. "Consider the situation in which you want a robot assistant to get your wallet on the bed … with two doors in the scene and an instruction that only tells it to walk through the doorway," wrote the team in a preprint paper describing their work.
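A confusion trigger of the kind described can be sketched in a few lines: the agent compares the probabilities of its top two candidate actions and asks a question when the margin between them is small. The 0.2 threshold, the function names, and the canned question are all illustrative assumptions, not details from the Amazon paper:

```python
def should_ask_for_help(action_probs, margin_threshold=0.2):
    """Flag confusion when the top two candidate actions are nearly tied."""
    ranked = sorted(action_probs.values(), reverse=True)
    return ranked[0] - ranked[1] < margin_threshold

def act_or_ask(action_probs, margin_threshold=0.2):
    """Either commit to the most likely action or request clarification."""
    if should_ask_for_help(action_probs, margin_threshold):
        return "ask", "Which doorway should I go through?"
    best = max(action_probs, key=action_probs.get)
    return "act", best

# Two doorways, nearly equal probability: the agent asks instead of guessing.
print(act_or_ask({"left_door": 0.48, "right_door": 0.47, "stop": 0.05}))
# One clear winner: the agent acts.
print(act_or_ask({"left_door": 0.80, "right_door": 0.15, "stop": 0.05}))
```

The design choice mirrors the two-door scenario in the excerpt: rather than committing to a near-coin-flip and risking getting stuck, the agent spends one question to collapse the ambiguity.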
Engineers at Tokyo-based company Seven Dreamers started developing a laundry-folding robot called Laundroid in 2005, and now they finally have a robot to show off at CES 2018. We haven't seen it in person yet, but we spoke with Seven Dreamers CEO Shin Sakane for a preview. The idea: you drop clean, dry clothes into a box in a pretty home appliance, and several hours later you collect the folded, sorted items. "Soft material like clothing is one of the hardest problems for AI even now," Sakane says. "Laundry folding seems like an easy task but it's actually very hard, so that's why no one has ever done it before."