arcore
Enhanced Mobile Experience with AR & AI
In recent years, we have seen a rise in the use of augmented reality (AR) and artificial intelligence (AI). These technologies are changing how we interact with the world around us, and nowhere is this more apparent than in the mobile experience, where AR and AI are being used to create more immersive and personal experiences for users. In this blog post, we will explore how these technologies are enhancing the mobile experience.
An Empirical Evaluation of Four Off-the-Shelf Proprietary Visual-Inertial Odometry Systems
Jungha Kim, Minkyeong Song, Yeoeun Lee, Moonkyeong Jung, Pyojin Kim
This article presents a benchmark comparison of off-the-shelf proprietary visual-inertial odometry (VIO) systems in six challenging real-world environments, both indoors and outdoors. VIO is the process of determining the position and orientation of a camera-inertial measurement unit (IMU) rig in 3D space by analyzing the associated camera images and IMU data, and it is widely used for autonomous navigation in robotic applications. As VIO research has reached a level of maturity, there exist several openly published VIO methods such as MSCKF [1], OKVIS [2], and VINS-Mono [3], as well as many commercial products. In particular, we select the following four proprietary VIO systems that are frequently used in autonomous driving and robotic applications: Apple ARKit [4] - Apple's augmented reality (AR) platform, which includes filtering-based VIO algorithms [8] to enable iOS devices to sense how they move in 3D space.
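For readers unfamiliar with VIO, the estimation problem the article benchmarks can be sketched with the standard continuous-time IMU kinematics below. The notation is illustrative on my part, not taken from the article:

```latex
% State: position p, velocity v, orientation quaternion q,
% accelerometer bias b_a, gyroscope bias b_g; g is gravity.
% a_m and omega_m are the raw accelerometer and gyroscope measurements.
\dot{\mathbf{p}} = \mathbf{v}, \qquad
\dot{\mathbf{v}} = \mathbf{R}(\mathbf{q})\,(\mathbf{a}_m - \mathbf{b}_a) + \mathbf{g}, \qquad
\dot{\mathbf{q}} = \tfrac{1}{2}\,\mathbf{q} \otimes (\boldsymbol{\omega}_m - \mathbf{b}_g)
```

A VIO system integrates these IMU measurements between camera frames to propagate the pose, then corrects the accumulated drift using visual feature observations from the camera images; filtering-based methods such as MSCKF do this correction with an extended Kalman filter update.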
- North America > United States (0.14)
- Asia > South Korea > Seoul > Seoul (0.05)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Transportation > Ground > Road (0.48)
- Information Technology > Robotics & Automation (0.34)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (0.88)
Machine Learning in ARCore
You can feed the camera images that ARCore captures into a machine learning pipeline with ML Kit and the Google Cloud Vision API to identify real-world objects and create an intelligent augmented reality experience. The ARCore ML Kit sample, written in Kotlin for Android, uses a machine learning model to classify objects in the camera's view and attaches a label to each object in the virtual scene. ML Kit is available for both Android and iOS development, and the Google Cloud Vision API has both REST and RPC interfaces, so you can achieve the same results as the ARCore ML Kit sample in your own Unity (AR Foundation) app. See Use ARCore as input for Machine Learning models for an overview of the patterns you need to implement.
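The pattern that sample implements can be sketched as a small, self-contained pipeline: classify a camera frame, keep the confident detections, and hit-test each detection's screen position to get a 3D anchor for its label. Everything below is a hypothetical stand-in on my part (simple heuristics in place of the real ARCore and ML Kit APIs), meant only to show the shape of the pipeline:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ArLabelPipeline {
    // Stand-in for an ML Kit detection result: a label, a confidence score,
    // and the bounding-box center in normalized [0, 1] screen coordinates.
    record Detection(String label, float confidence, float cx, float cy) {}

    // Stand-in for an ARCore anchor: a 3D point in the camera's frame.
    record Anchor(float x, float y, float z) {}

    // Stand-in for an ML Kit classification call on a camera frame.
    // A real app would pass the frame to an on-device or Cloud Vision model.
    static List<Detection> classify(float[] pixels) {
        double sum = 0;
        for (float p : pixels) sum += p;
        List<Detection> out = new ArrayList<>();
        if (sum / pixels.length > 0.5) {
            out.add(new Detection("chair", 0.91f, 0.5f, 0.5f));
        }
        return out;
    }

    // Stand-in for ARCore's hit test: project a normalized 2D screen point
    // onto a plane one meter in front of the camera.
    static Anchor hitTest(float cx, float cy) {
        return new Anchor((cx - 0.5f) * 2f, (0.5f - cy) * 2f, -1f);
    }

    // The pipeline: keep confident detections, attach a labeled anchor to each.
    static Map<String, Anchor> labelFrame(float[] pixels, float minConf) {
        Map<String, Anchor> anchors = new HashMap<>();
        for (Detection d : classify(pixels)) {
            if (d.confidence() >= minConf) {
                anchors.put(d.label(), hitTest(d.cx(), d.cy()));
            }
        }
        return anchors;
    }
}
```

In a real app, `classify` would be an asynchronous ML Kit call and `hitTest` would be ARCore's `Frame.hitTest`, but the filter-then-anchor flow is the same.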
What to expect at Google's Pixel 2 event
Almost exactly a year ago, Google unveiled a host of new products, a veritable "Made by Google" ecosystem, as the company called it. The most notable devices were the Pixel and Pixel XL smartphones and the Google Home smart speaker, but Google also launched the Daydream View VR headset, a mesh Wi-Fi system and a 4K-capable Chromecast. It was easily the company's biggest push yet into Google-branded hardware. But one year later, the Pixel and Pixel XL have been lapped by new devices from Samsung, Apple and LG, among others. We're due for a refresh, and we'll almost certainly get one in San Francisco on Wednesday, October 4th, when the company hosts its next big product launch. New phones are basically a shoo-in, but there's a bunch of other hardware that Google will likely show off.
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.38)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.38)
- Information Technology > Human Computer Interaction > Interfaces > Virtual Reality (0.35)