A photograph of the sky by Trevor Paglen can look like a massive abstraction, except for a tiny speck, a surveillance drone, spotted like a malignant dot on a chest x-ray. His images of secluded military sites in Nevada can also ooze with colour from the churning heat and dust. In the new documentary film Unseen Skies, directed by Yaara Bou Melhem, Paglen calls the effect "impressionistic haze". Photographing those places, often from miles away (or farther), is about "seeing and not seeing at the same time," Paglen says. "For me those images were about capturing that paradox."
DJI has a new drone, the Air 2S, and it's one of the best drones I've ever flown. The Air 2S is externally nearly identical to last year's Mavic Air 2. It even uses the same batteries, which makes upgrading a little cheaper. There are some very welcome changes in this update. The Air 2S adds an object detection camera to the top of the drone, which improves the collision avoidance system. It really helps when you're flying toward something at high speed, since the drone pitches forward, rendering the front sensor slightly less effective.
When your 87-second short film prompts the director of Toy Story 3 to tweet praise calling it "one of the most amazing things I've ever seen," you know you've done something right. Filmed at Bryant Lake Bowl and Theatre in Minneapolis with one continuous drone shot, Jay Christensen's Right Up Our Alley is a stunning short film that's essentially a high-speed tour of a regular night at a bowling alley. At the time of writing, it's clocked up over 6.1 million views on Twitter and 660,000 views on YouTube and caught the attention of Hollywood star Elijah Wood and Guardians of the Galaxy director James Gunn, who both had similarly enthusiastic things to say about it. In terms of how it was made, Christensen confirmed on Instagram that the sound was added separately later. It's worth noting that Christensen has previous form when it comes to single-shot drone films -- you can watch his previous shorts, including one filmed at a movie theatre and one that follows a motorbike rider through an empty mall, on his YouTube channel.
Aerial vehicles are revolutionizing the way film-makers can capture shots of actors by composing novel aerial and dynamic viewpoints. However, despite great advancements in autonomous flight technology, generating expressive camera behaviors is still a challenge and requires non-technical users to edit a large number of unintuitive control parameters. In this work we develop a data-driven framework that enables editing of these complex camera positioning parameters in a semantic space (e.g. calm, enjoyable, establishing). First, we generate a database of video clips with a diverse range of shots in a photo-realistic simulator, and use hundreds of participants in a crowd-sourcing framework to obtain scores for a set of semantic descriptors for each clip. Next, we analyze correlations between descriptors and build a semantic control space based on cinematography guidelines and human perception studies. Finally, we learn a generative model that can map a set of desired semantic video descriptors into low-level camera trajectory parameters. We evaluate our system by demonstrating that our model successfully generates shots that are rated by participants as having the expected degrees of expression for each descriptor. We also show that our models generalize to different scenes in both simulation and real-world experiments. Supplementary video: https://youtu.be/6WX2yEUE9_k
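The abstract above does not give implementation details, but the final step -- learning a mapping from semantic descriptor scores to low-level camera trajectory parameters -- can be sketched in miniature. The sketch below is purely illustrative: the descriptor names, the toy dataset, and the linear least-squares model all stand in for the paper's crowd-sourced ratings and generative model.

```python
import numpy as np

# Illustrative stand-in for the descriptor -> trajectory mapping.
# Descriptors (e.g. calm, enjoyable, establishing) and trajectory
# parameters (e.g. speed, altitude, subject distance) are assumed
# names; the paper's actual model and features are not shown here.

rng = np.random.default_rng(0)

# Synthetic "crowd-sourced" dataset: 200 clips, 3 descriptor scores each
descriptors = rng.uniform(0, 1, size=(200, 3))  # [calm, enjoyable, establishing]

# Ground-truth linear relation used only to generate toy data
true_W = np.array([[-2.0, 0.5, 1.0],   # speed drops as "calm" rises
                   [ 0.3, 1.5, 2.0],   # altitude rises for "establishing"
                   [ 0.8, 0.2, 3.0]])  # distance grows for "establishing"
params = descriptors @ true_W.T + rng.normal(0, 0.1, size=(200, 3))

# Fit the descriptor -> parameter mapping by least squares
W, *_ = np.linalg.lstsq(descriptors, params, rcond=None)

# Query the model: a calm, establishing shot
query = np.array([0.9, 0.4, 0.8])
predicted = query @ W  # [speed, altitude, distance] for the requested mood
print(predicted)
```

A linear map is the simplest possible choice here; the paper's generative model would capture richer, non-linear relations between perceived mood and camera motion, but the interface -- semantic scores in, trajectory parameters out -- is the same.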
(Pictured: Wall-E, from the Pixar film -- a robot that is not exactly in the cloud.) Bringing a new robot to market is exciting: new capability, new hardware, new services. The problem comes when you get to software, where everything feels harder and takes longer than you think it should. Like Tesla's Full Self-Driving, which has all the hardware and intelligence it needs -- with the possible exception of LIDAR -- but is perpetually just ... about ... to ... arrive ... and even so, was recently savaged by Consumer Reports as buggy and ineffective. Hardware is necessary, but software provides the animating intelligence that allows it to do useful, efficient, and safe work.
Amazon has announced a full range of new spherical Echo devices, a new motorised smart display, a camera drone that flies around your house, a game-streaming service and more. In a streamed presentation, the firm showed off a smorgasbord of new devices from its various brands, including Ring, Eero, Fire and Echo. The new standard Echo ditches its cylindrical shape for a fabric-covered ball design, with Amazon's characteristic light ring moved to the base to indicate when it is listening to you. It has a new 3in woofer and two tweeters with Dolby processing for stereo sound, and it automatically adjusts to the acoustics of your room. It also has Amazon's new AZ1 artificial-intelligence chip for greater local processing of voice and other actions, for increased privacy and speed.
Recent machine-learning successes combine reinforcement learning algorithms with deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, after just a few data samples in real-world scenarios, humans are able to provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This research investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. This novel theoretical foundation is called Cycle-of-Learning, a reference to how the different human interaction modalities -- task demonstration, intervention, and evaluation -- are cycled and combined with reinforcement learning algorithms. Results presented in this work show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, and that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning. Finally, Cycle-of-Learning develops an effective transition from policies learned via human demonstrations and interventions to reinforcement learning. The theoretical foundation developed by this research opens new research paths for human-agent teaming scenarios in which autonomous agents learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
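The three interaction modalities named in the abstract can be sketched as one pass through the cycle. Everything below is a hedged toy, not the paper's implementation: the 1-D task, the linear behavior-cloning policy, the intervention rule, and the reward-model fit are all assumptions made for illustration.

```python
import numpy as np

# Toy pass through a demonstration -> intervention -> evaluation cycle.
# All names and the 1-D task are illustrative assumptions.

rng = np.random.default_rng(1)

def fit_policy(states, actions):
    """Behavior cloning: least-squares linear map from state to action."""
    w, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return w

# 1) Task demonstration: the human demonstrates action = 2 * state
demo_states = rng.uniform(-1, 1, size=(50, 1))
demo_actions = 2.0 * demo_states
w = fit_policy(demo_states, demo_actions)

# 2) Intervention: when the policy's action is "unsafe" (|a| > 1.5),
#    the human overrides it; the corrected pairs rejoin the dataset
states = rng.uniform(-1, 1, size=(50, 1))
actions = states @ w
unsafe = np.abs(actions[:, 0]) > 1.5
actions[unsafe] = np.clip(actions[unsafe], -1.5, 1.5)  # human override
w = fit_policy(np.vstack([demo_states, states]),
               np.vstack([demo_actions, actions]))

# 3) Evaluation: the human scores rollouts, and a reward model is fit
#    to those scores; this learned reward would then drive an RL phase
rollout_states = rng.uniform(-1, 1, size=(50, 1))
human_scores = -np.abs(rollout_states[:, 0])  # human prefers staying near 0
reward_w, *_ = np.linalg.lstsq(np.abs(rollout_states), human_scores,
                               rcond=None)

print("policy weight:", w.ravel(), "reward weight:", reward_w)
```

The point of the sketch is the data flow: each modality feeds a different learner (policy, safety correction, reward model), and the learned reward is what hands control off to the reinforcement learning algorithm.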
It's very rare to see a travel video or brochure these days that doesn't have an image shot from overhead, on a drone. Life just seems more dramatic from above, right? The units themselves have become far easier to use and more affordable, and the camera quality is pretty amazing. Imagine being able to throw a high-quality camera into the air that can capture stunning overhead shots, record smooth video even in wind, and somehow always return home to sender. It's one of the major tech advancements of our time, but it can be a little confusing.
Meet AguaDrone -- the world's only waterproof drone platform built for marine research, aquaculture, sport, and commercial fishing. Throughout history, humans have fished with an ever-evolving set of tools, and over the years those tools have become quite sophisticated. It was only a matter of time before someone invented a quadcopter designed specifically for anglers. Created by the company of the same name, AguaDrone is the first drone that can fish autonomously in deep waters.
Sci-fi movies have left us with the impression that bringing robots into our lives is a very bad idea. From The Terminator to The Matrix, Hollywood keeps showing robots taking control of humanity; even R.U.R., the 1920s Karel Capek play that introduced the term "robot", ends in a machine uprising. Despite the cinematic warnings, robots have moved from fiction into an important place in the modern world's arsenal. The developed world is now debating whether to develop killer robots, and whether such machines can instead be used to save human lives.