If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
One of the most popular features of Apple's new iPhone X is its facial recognition system. The latest iPhone gives users the power to unlock the device just by looking at it. The smartphone has performed well in tests set up to trick it into opening for an unapproved user. The same kind of facial recognition system is also used for other purposes.
Drones have become a serious part of the photography experience, capable of capturing unique photos and videos that no non-flying camera ever could. But as anyone who's ever been attacked by a drone will tell you, piloting them is not easy – regardless of whether you're using a touchscreen or an external controller. Fortunately, there's a new self-flying drone that handles all of this on its own: the Hover 4K Camera Passport Self-Flying Camera Drone. This Red Dot Design award-winning drone uses facial recognition to fly autonomously and capture 360-degree, panoramic 4K photos and videos of you in your environment.
It's fascinating to recall the development of artificial intelligence over the past decade, but the best is yet to come. As we prepare to move into 2018, there are lots of exciting developments in the pipeline, especially for AI-enhanced video surveillance. Researchers have clearly documented advances in machine learning and AI over the years. Whether it's IBM's Watson winning a game of Jeopardy! against some of the smartest people in the world, a Chinese platform outperforming humans on IQ tests, or Google's AI writing its own poetry, there are dozens of examples of the power of AI technology. However, until now, the accomplishments of AI have been more interesting than helpful.
When the cheapest Kindle e-reader drops to $50, it's difficult to resist snagging one on an impulse buy. Appealing to our most reptilian consumer instincts is just how Amazon rolls, and -- right on schedule -- the retail behemoth-cum-hardware manufacturer is dropping prices across its product lines. Here are all the Black Friday deals for the Amazon Echo, Kindle, and Fire tablet lines. We're also including links to our own coverage of the price-dropped devices (many of which are best in their categories, the Amazon pedigree notwithstanding). Amazon says these deals will end at the stroke of midnight on Monday, Nov. 27.
Apple Inc. has had a "will they or won't they" relationship with self-driving car development over the last few years, and unlike Alphabet Inc. or Uber Technologies Inc., Apple has been fairly tight-lipped about most of its work. From what little is known, Apple has been focusing more on the software side than on hardware, and a new research paper published Friday by the company on Cornell University's arXiv repository seems to confirm that theory. In the paper, Apple describes a new method for getting more out of a self-driving car's LiDAR sensors using machine learning. LiDAR uses pulses of laser light to create digital maps of objects as point clouds, which are a sort of 3D version of a connect-the-dots puzzle. Denser point clouds offer a clearer, more accurate picture of an object, but Apple's researchers say their new method, which they call VoxelNet, makes even sparse point clouds useful for object detection.
Apple's work on self-driving cars has been more secretive than just about any other project in the autonomous car space -- but now, two of the company's scientists have published some of their auto-focused research for the first time. The paper, authored by Apple engineers Yin Zhou and Oncel Tuzel and posted to the open-access repository arXiv, details a new computer imaging software technique called "VoxelNet" that could improve a driverless car system's ability to detect pedestrians and cyclists. The scientists claim their new method could be even more effective than the two-tiered LiDAR and camera systems that have become the industry standard for object detection in self-driving cars. Those expensive systems depend on cameras to help identify the small or faraway objects (like pedestrians or cyclists) detected by LiDAR sensors, which use light beams to detect and map 3D obstacles in the world around the vehicle. The VoxelNet system -- which takes its name from the "voxel," the unit of volume in a three-dimensional grid -- eliminates the need for a camera to help identify the objects detected by LiDAR sensors, allowing the autonomous platform to work on LiDAR alone.
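To make the voxel idea concrete, the sketch below shows the basic preprocessing step a VoxelNet-style pipeline performs: bucketing the raw (x, y, z) points of a LiDAR point cloud into a coarse 3D grid so a network can reason about sparse data region by region. The function name, grid origin, and voxel size here are illustrative assumptions, not parameters from Apple's paper.

```python
# Minimal sketch: group LiDAR points into voxels (3D grid cells).
# voxel_size and origin are made-up values for illustration only.
from collections import defaultdict
from math import floor

def voxelize(points, voxel_size=0.5, origin=(-40.0, -40.0, -3.0)):
    """Group 3D points into voxels keyed by integer grid coordinates."""
    voxels = defaultdict(list)
    for x, y, z in points:
        # Integer cell coordinates of this point relative to the grid origin.
        key = (floor((x - origin[0]) / voxel_size),
               floor((y - origin[1]) / voxel_size),
               floor((z - origin[2]) / voxel_size))
        voxels[key].append((x, y, z))
    return dict(voxels)

# Two nearby points land in the same voxel; a distant one does not.
cloud = [(1.0, 2.0, 0.5), (1.1, 2.1, 0.6), (-10.0, 5.0, 1.0)]
vox = voxelize(cloud)
print(len(vox))  # 2 occupied voxels
```

A real pipeline would then encode the points inside each occupied voxel into a fixed-length feature vector before feeding the grid to a detection network; this sketch covers only the grouping step.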
Ever since VR's 2016 revival, it seems we can't get away from talk about "futuristic" technologies like Augmented Reality (AR), Virtual Reality (VR), and Artificial Intelligence (AI). What do these terms really mean and, more importantly, why should advertisers care? Here's a brief overview of these three developing technologies and how they translate to native advertising. The terminology around AR and VR tech is often confused, and not without reason: Both technologies alter a user's perception of reality, and both are commonly used for entertainment or productivity purposes. But there are some important differences that advertisers should be aware of.
Google Lens was announced at Google I/O 2017 in May and has been slowly gaining steam since then. The app has the ability to identify songs and can now also recognize objects in a smartphone camera's field of vision. Google seems to have taken a page from Samsung's book -- the feature is pretty similar to Samsung's Bixby Vision, which launched in August. Both applications use augmented reality algorithms to detect objects in a smartphone camera's field of vision. However, Samsung's execution of Bixby Vision has been spotty at best.
One of Google's most powerful new features is finally rolling out to Pixel phones. Google Lens, the company's intelligent camera software that can analyze the world around you, is now rolling out to Google Assistant on Pixel phones. Announced earlier this year, Lens is one of the company's most important new products as it offers an early look at what the future of search will look like for Google. Though Google Lens has been available within Google Photos since the Pixel 2 launched, this update, which is rolling out "over the coming weeks," marks the first time the feature has been available outside of Google Photos. This means Pixel owners will be able to use Google Lens with their smartphone camera in real time, rather than simply using the feature to analyze photos they've previously taken.
When he was 8 years old, Matt Reeves started making 8-millimeter movies inspired by his love for the original "Planet of the Apes." "I'd have my friends put on gorilla masks and run around shooting these little sci-fi films," he recalls. "As a kid, I was captivated by these images of horses with apes on them." Decades later, Reeves, perched on a sofa in his tidy Hollywood office, has taken his fascination with primate cinema to a whole new level as the auteur behind the 2014 performance-capture blockbuster "Dawn of the Planet of the Apes" and this summer's "War for the Planet of the Apes." Taking the reins from "Rise of the Planet of the Apes" director Rupert Wyatt, Reeves, lauded for his low-budget horror hit "Cloverfield," initially harbored reservations about helming Twentieth Century Fox's multimillion-dollar franchise.