Empathy: the ability that artificial intelligence cannot replace
Daniel Pink argues that we are about to move from the Information Age to the Conceptual Age. In the Conceptual Age, people's demand for emotion and experience keeps rising, and those with greater creativity and empathy, the right-brain thinkers, will take the lead. The left brain handles logic and analysis, a largely linear mode of thinking; the right brain handles emotion and feeling, a more holistic mode.
Google uses crowdsourced photos to recreate landmarks in 3D for AR/VR
Historically, human artists have been challenged to recreate real-world locations as 3D models, particularly when applications call for photorealistic accuracy. But Google researchers have come up with an alternative that could simultaneously automate the 3D modeling process and improve its results, using a neural network with crowdsourced photos of a location to convincingly replicate landmarks and lighting in 3D. The idea behind neural radiance fields (NeRF) is to extract 3D depth data from 2D images by determining where light rays terminate, a sophisticated technique that alone can create plausible textured 3D models of landmarks. Google's NeRF in the Wild (NeRF-W) system goes further in several ways. First, it uses "in-the-wild photo collections" as inputs, expanding a computer's ability to see landmarks from multiple angles. Next, it evaluates the images to find structures, separating out photographic and environmental variations such as image exposure, scene lighting, post-processing, and weather conditions, as well as shot-to-shot object differences such as people who might be in one image but not another.
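The core NeRF step of "determining where light rays terminate" can be sketched as volume rendering: sample densities along a ray, convert them to per-sample opacities, and composite colors weighted by how much light survives to each sample. This is a minimal NumPy illustration of that compositing rule, not Google's actual NeRF-W code; the function name `composite_ray` and the toy inputs are invented for the example.

```python
import numpy as np

def composite_ray(sigmas, deltas, colors):
    """Composite samples along one ray (classic volume-rendering quadrature).

    sigmas: per-sample volume density, shape (N,)
    deltas: distance between consecutive samples, shape (N,)
    colors: per-sample RGB, shape (N, 3)
    """
    # Opacity contributed by each sample over its interval.
    alpha = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving past all earlier samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha          # how much each sample contributes
    color = (weights[:, None] * colors).sum(axis=0)
    depth = (weights * np.cumsum(deltas)).sum()  # expected ray-termination distance
    return color, depth, weights
```

With a single opaque sample in the middle of the ray, nearly all of the weight (and hence the rendered color and estimated depth) concentrates there, which is exactly the mechanism that lets NeRF recover 3D structure from 2D images.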
- Information Technology > Artificial Intelligence > Vision (0.73)
- Information Technology > Communications > Social Media > Crowdsourcing (0.62)
How to Label Data -- Create ML for Object Detection
The new Create ML app, announced at WWDC 2019, is an incredibly easy way to train your own personalized machine learning models. All that's required is dragging a folder containing your training data into the tool, and Create ML does the rest of the heavy lifting. So how do we prepare our data? For image or sound classification we just need to organize the data into folders, but object detection is a bit more involved: we need to supply some additional information about where each object sits in the image.
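That additional information goes into an `annotations.json` file placed alongside the images: one entry per image, each listing a label and a bounding box given by its center point, width, and height in pixels. The snippet below generates such a file; the filenames, labels, and coordinates are made-up sample values, and you should confirm the exact schema against Apple's Create ML documentation for your Xcode version.

```python
import json

# One record per training image; each record lists every object in that image.
annotations = [
    {
        "image": "cat_001.jpg",  # image file sitting in the same folder
        "annotations": [
            {
                "label": "cat",
                # Bounding box: center x/y plus width/height, in pixels.
                "coordinates": {"x": 160, "y": 120, "width": 80, "height": 60},
            }
        ],
    }
]

with open("annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)
```

Drop this file into the same folder as the images, drag the folder into Create ML, and the tool reads the boxes automatically.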
Nvidia Taught an AI to Instantly Generate Fully-Textured 3D Models From Flat 2D Images
Turning a sketch or photo of an object into a fully realized 3D model, so that it can be duplicated with a 3D printer, played in a video game, or brought to life in a movie through visual effects, normally requires the skills of a digital modeler working from a stack of images. But Nvidia has successfully trained a neural network to generate fully-textured 3D models from just a single photo. We've seen similar approaches to automatically generating 3D models before, but they've either required a series of photos snapped from many different angles for accurate results, or input from a human user to help the software figure out the dimensions and shape of a specific object in an image. Neither approach is wrong; any improvement to the task of 3D modeling is welcome, since it puts such tools in the hands of a wider audience, even those lacking advanced skills. But those requirements also limit the potential uses of the software. At the annual Conference on Neural Information Processing Systems, taking place in Vancouver, British Columbia, this week, Nvidia researchers will present a new paper, "Learning to Predict 3D Objects with an Interpolation-Based Renderer," detailing a new graphics tool called a differentiable interpolation-based renderer, or DIB-R for short, which sounds only slightly less intimidating.
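The "interpolation" in DIB-R's name refers to the fact that each foreground pixel's value is an interpolation of the attributes of the triangle vertices covering it, which makes rendering differentiable: gradients flow from pixel errors back to vertex positions and colors. This toy NumPy example shows the idea for a single pixel via barycentric interpolation; it is an invented illustration of the general principle, not Nvidia's implementation.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric weights of point p w.r.t. triangle (a, b, c).

    Solves p = u*a + v*b + w*c with u + v + w = 1.
    """
    T = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    u, v = np.linalg.solve(T, p - c)
    return np.array([u, v, 1.0 - u - v])

# Triangle corners and a pixel center inside the triangle.
a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
w = barycentric(np.array([0.25, 0.25]), a, b, c)

# Pixel color is a weighted sum of vertex colors, so
# d(pixel)/d(vertex_color_i) is simply the weight w[i]:
# the rendering step is smooth and gradients can reach the mesh.
vert_colors = np.array([[1.0, 0.0, 0.0],   # red vertex
                        [0.0, 1.0, 0.0],   # green vertex
                        [0.0, 0.0, 1.0]])  # blue vertex
pixel = w @ vert_colors
```

Because the pixel is a linear function of the vertex attributes, a loss on rendered pixels can be backpropagated to the 3D model, which is what lets DIB-R fit shape, texture, and lighting to a single photo.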