

Enhanced Mobile Experience with AR & AI

#artificialintelligence

In recent years, we have seen a rise in the use of augmented reality (AR) and artificial intelligence (AI). These technologies are changing the way we interact with the world around us, and nowhere is this more apparent than on mobile devices, where AR and AI are used to create more immersive and personal experiences. In this blog post, we will explore how these technologies are being used to enhance the mobile experience.


An Empirical Evaluation of Four Off-the-Shelf Proprietary Visual-Inertial Odometry Systems

Kim, Jungha, Song, Minkyeong, Lee, Yeoeun, Jung, Moonkyeong, Kim, Pyojin

arXiv.org Artificial Intelligence

This article presents a benchmark comparison of off-the-shelf proprietary visual-inertial odometry (VIO) systems in six challenging real-world environments, both indoors and outdoors. VIO systems, which are used for autonomous navigation in robotic applications, determine the position and orientation of a camera-inertial measurement unit (IMU) rig in 3D space by analyzing the associated camera images and IMU data. As VIO research has reached a level of maturity, there exist several openly published VIO methods, such as MSCKF [1], OKVIS [2], and VINS-Mono [3], as well as many commercial products. In particular, we select the following four proprietary VIO systems that are frequently used in autonomous driving and robotic applications: Apple ARKit [4], Apple's augmented reality (AR) platform, which includes filtering-based VIO algorithms [8] to enable iOS devices to sense how they move in 3D space.


Machine Learning in ARCore

#artificialintelligence

You can use the camera feed that ARCore captures in a machine learning pipeline with ML Kit and the Google Cloud Vision API to identify real-world objects and create an intelligent augmented reality experience. The ARCore ML Kit sample, written in Kotlin for Android, uses a machine learning model to classify objects in the camera's view and attaches a label to each object in the virtual scene. The ML Kit API supports both Android and iOS development, and the Google Cloud Vision API offers both REST and RPC interfaces, so you can achieve the same results as the ARCore ML Kit sample in your own Unity (AR Foundation) app. See Use ARCore as input for Machine Learning models for an overview of the patterns you need to implement.
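To illustrate the REST path, here is a minimal Kotlin sketch of building a Cloud Vision `images:annotate` request body for label detection. It assumes the ARCore camera frame has already been JPEG-encoded; the helper name `buildVisionRequest` and the hard-coded payload layout are illustrative, not taken from the sample.

```kotlin
import java.util.Base64

// Hypothetical helper: builds the JSON body for a Cloud Vision
// images:annotate REST call (POST to https://vision.googleapis.com/v1/images:annotate).
// `jpegBytes` would come from a JPEG-encoded ARCore camera frame.
fun buildVisionRequest(jpegBytes: ByteArray, maxResults: Int = 5): String {
    // Cloud Vision expects the image content as a Base64 string.
    val base64Image = Base64.getEncoder().encodeToString(jpegBytes)
    return """
        {
          "requests": [
            {
              "image": { "content": "$base64Image" },
              "features": [
                { "type": "LABEL_DETECTION", "maxResults": $maxResults }
              ]
            }
          ]
        }
    """.trimIndent()
}

fun main() {
    // Stand-in bytes; a real app would pass the encoded camera image.
    val body = buildVisionRequest("fake-jpeg".toByteArray())
    println(body.contains("LABEL_DETECTION"))  // true
}
```

In a real app you would POST this body with an authenticated HTTP client and parse the returned label annotations before attaching them to anchors in the virtual scene.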


What to expect at Google's Pixel 2 event

Engadget

Almost exactly a year ago, Google unveiled a host of new products, a veritable "Made by Google" ecosystem, as the company called it. The most notable devices were the Pixel and Pixel XL smartphones and Google Home smart speaker, but Google also launched the Daydream View VR headset, a mesh-WiFi system and a 4K-capable Chromecast. It was easily the company's biggest push yet into Google-branded hardware. But one year later, the Pixel and Pixel XL have been lapped by new devices from Samsung, Apple and LG, among others. We're due for a refresh, and we'll almost certainly get that in San Francisco on Wednesday, October 4th, when the company hosts its next big product launch. New phones are basically a shoo-in, but there's a bunch of other hardware that Google will likely show off.