If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Amazon's Alexa voice platform has now passed 15,000 skills, the voice-powered apps that run on devices like the Echo speaker, Echo Dot, and the newer Echo Show. Amazon is building out an entire voice app ecosystem so quickly that it hasn't yet been able to implement the usual safeguards, such as a team that closely inspects apps for terms-of-service violations, or even tools that allow developers to make money from their creations. In the long run, Amazon's focus on growth over app ecosystem infrastructure could catch up with it. By comparison, Google Home had just 378 voice apps available as of June 30, Voicebot notes.
SUNNYVALE, Calif., April 18, 2017 – Socionext Inc. and SOINN Inc. today announced initial results of a collaboration started in 2016, in which Socionext extracts and delivers biometric data to the "Artificial Brain SOINN". The companies achieved initial results in having the Artificial Brain SOINN read ultrasound images from Socionext's viewphii mobile ultrasound solution. The results will be introduced at Medtec Japan, held at Tokyo Big Sight, April 19-21, at booths 4505 & 4507. In this initial trial, SOINN learned to estimate subcutaneous fat thickness from abdominal ultrasound images, and its estimates were then compared with readings by ultrasound technicians.
Once the domain of science-fiction authors and scriptwriters, artificial intelligence is steadily marching into the real world. Recently we've seen the technology do everything from reading lips better than the experts to trouncing its human competition in a poker tournament. But while it seems everyone is jumping on the AI bandwagon (or should that be futuristic car?), we wondered just how advanced the technology really is. So we put a question to Igal Raichelgauz, the founder of Cortica, an image-recognition company built on AI technology: Why is artificial intelligence still kind of dumb?
Shakey the Robot, the world's first mobile, intelligent robot, developed at SRI International between 1966 and 1972, has become the first robot to be honored with a prestigious IEEE Milestone in Electrical Engineering and Computing. The IEEE Milestone program honors significant inventions, locations, or events related to electrical engineering and computing that have benefited humanity and are at least 25 years old. "Shakey was groundbreaking in its ability to perceive, reason about and act in its surroundings," said Bill Mark, Ph.D., president of SRI's Information and Computing Sciences Division. "We are thrilled that Shakey has received this prestigious recognition from the IEEE as it is a testament to its profound influence on modern robotics and AI techniques even to this day." The original Shakey robot is on display at the Computer History Museum, where it is the centerpiece of the Artificial Intelligence portion of its "Revolution: The First 2000 Years of Computing" exhibition.
Microsoft, Google, Facebook, and pretty much every other tech company are investing billions in artificial intelligence, making great strides toward smarter software and hardware. But in a Reddit AMA (ask me anything) session, Microsoft cofounder and world's richest man Bill Gates, replying to a question about the advancement he most wants to see in his lifetime, indicated that AI can go further still. "The big milestone is when computers can read and understand information like humans do. There is a lot of work going on in this field - Google, Microsoft, Facebook, academia," writes Gates. "Right now computers don't know how to represent knowledge so they can't read a text book [sic] and pass a test."
Reaching a new milestone in its quest to index and exploit vast amounts of video at speed and scale, IDENTV today announced the beta release of Neural Upscaling, its proprietary solution for dramatically enhancing image quality, allowing for significantly greater accuracy in object and facial recognition. "This capability automates the process of enhancing the quality of a degraded image and amplifies the original with 400% more detail, enabling rapid detection and matching of objects and faces that was previously not possible at speed and scale," explained Mohamad Shihadah, founder and CEO of IDENTV. Neural Upscaling is an integral part of IDENTV's pioneering Intelligent Video-fingerprinting Platform (IVP), a technology that combines artificial intelligence, machine learning, and computer vision in a highly integrated fashion to deliver high-speed visual content recognition and indexing. Neural Upscaling has broad applications across commercial and national security domains and addresses a critical limitation in working with degraded video, faces, or objects. IDENTV was recently selected as one of the Top 5 Artificial Intelligence companies in the DC area.
Cray Inc. (Nasdaq: CRAY) announced the results of a deep learning collaboration between Cray, Microsoft, and the Swiss National Supercomputing Centre (CSCS) that expands the horizons of running deep learning algorithms at scale using the power of Cray supercomputers. Running larger deep learning models is a path to new scientific possibilities, but conventional systems and architectures limit the problems that can be addressed, as models take too long to train. Cray worked with Microsoft and CSCS, a world-class scientific computing center, leveraging their decades of high-performance computing expertise to dramatically scale the Microsoft Cognitive Toolkit (formerly CNTK) on a Cray XC50 supercomputer at CSCS nicknamed "Piz Daint". By accelerating the training process, data scientists can obtain results within hours or even minutes instead of waiting weeks or months. With the introduction of supercomputing architectures and technologies to deep learning frameworks, customers now have the ability to solve a whole new class of problems, such as moving from image recognition to video recognition, and from simple speech recognition to natural language processing with context.
AI was a major theme in 2016, with tech giants like Google, Microsoft, Apple, and Amazon all touting their machine-learning chops and virtual-assistant skills. But nothing underscored the coming AI invasion like DeepMind's AlphaGo, which has become so skilled at the strategy board game Go that it trounced world champion Lee Se-dol four games to one. Researchers have long viewed Go, with 361 potential moves on the first turn alone, as the ultimate AI challenge, yet Lee said AlphaGo's decisive victory left him feeling "powerless." It's a win for technology, but for humanity maybe not so much.
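The scale of the challenge can be seen with some quick arithmetic (a back-of-the-envelope sketch: the 19x19 board is standard Go, and the figure of roughly 20 legal opening moves in chess is a commonly cited comparison):

```python
# Go is played on a 19x19 board, so the first player can place a
# stone on any of 19 * 19 = 361 intersections.
board_size = 19
first_moves = board_size * board_size
print(first_moves)  # 361

# Even a crude upper bound on the sequences after just four moves
# (one open intersection fewer each turn) dwarfs the equivalent
# figure for chess, which offers about 20 opening moves per side.
go_four_ply = 361 * 360 * 359 * 358
chess_four_ply = 20 ** 4
print(go_four_ply > chess_four_ply)  # True
```

This combinatorial explosion is why brute-force search, which worked for chess, was never enough for Go.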
You probably didn't expect many surprises in Rogue One, the first Star Wars "side story," which details how, exactly, the Rebel Alliance acquired the plans for the Death Star. Indeed, the entire film seems to exist just to fill in a bit of background detail for A New Hope, our first Luke Skywalker adventure. But it turns out Rogue One is much more than an elaborate bit of fan service. It's surprisingly harrowing, genuinely moving, and likely to go down as a milestone for digital-actor resurrection. Rogue One brought Peter Cushing, the legendary British actor who played Grand Moff Tarkin in the original Star Wars and passed away in 1994, back from the dead this past weekend as a CG character.
A new artificial intelligence tool created by Google and Oxford University researchers could significantly improve lip reading, aiding understanding for the hearing impaired. In a recently released paper on the work, the researchers explained how the Google DeepMind-powered system was able to correctly interpret more words than a trained human expert. The tool is called Watch, Listen, Attend and Spell (WLAS), and the paper describes it as a "network that learns to transcribe videos of mouth motion to characters." Using videos from the BBC, the team trained the system on a dataset of more than 100,000 natural sentences. While similar attempts in the past have focused on a narrow set of words, the report said, Google and Oxford wanted to address lip reading through "unconstrained natural language sentences, and in the wild videos."