Robots in the workplace can perform hazardous or even 'impossible' tasks, such as toxic-waste clean-up and desert or space exploration. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
We're Cruise, a self-driving service designed for the cities we love. We're building the world's most advanced self-driving vehicles to safely connect people to the places, things, and experiences they care about. We believe self-driving vehicles will help save lives, reshape cities, give back time in transit, and restore freedom of movement for many. Cruisers have the opportunity to grow and develop while learning from leaders at the forefront of their fields. With a culture of internal mobility, there's an opportunity to thrive in a variety of disciplines.
The World Economic Forum, in collaboration with the World Resources Institute, convenes the Friends of Ocean Action, a coalition of leaders working together to protect the seas. From a programme with the Indonesian government to cut plastic waste entering the sea to a global plan to track illegal fishing, the Friends are pushing for new solutions.
Robotics today is not the same as the assembly-line robots of the industrial age, because AI is impacting many areas of robotics. At the AI labs, we have been exploring a few of these areas using the Dobot Magician robotic arm in London. Our work was originally inspired by this post from Google, which used the Dobot Magician ("Build your own machine learning powered robot arm using TensorFlow ..."). In essence, the demo allows you to use voice commands to direct the robotic arm to pick up specific objects (e.g., a red domino). This demo combines multiple AI technologies.
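The demo's flow can be sketched in a few lines: a speech transcript is parsed for the target object, and the matching detection tells the arm where to move. The function names and the detection format below are hypothetical stand-ins; the original demo uses TensorFlow models and the Dobot Magician's own API for these stages.

```python
import re

# Illustrative sketch of the voice-command pick-up pipeline (names are
# hypothetical, not the actual demo's API).

def parse_command(transcript):
    """Extract the requested object colour from a voice transcript."""
    match = re.search(r"pick up the (\w+) domino", transcript.lower())
    return match.group(1) if match else None

def find_object(colour, detections):
    """Select the detection whose label matches the requested colour."""
    return next((d for d in detections if d["label"] == f"{colour} domino"), None)

# Simulated outputs of the speech-recognition and object-detection stages.
transcript = "Pick up the red domino"
detections = [
    {"label": "red domino", "x": 120, "y": 85},
    {"label": "blue domino", "x": 40, "y": 200},
]

colour = parse_command(transcript)
target = find_object(colour, detections)
print(target)  # the arm controller would then be sent target["x"], target["y"]
```

In the real demo, each placeholder stage is a trained model rather than a regex or a lookup, but the hand-off between stages follows the same shape.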
The idea of creating a machine that can mimic human intelligence is a mainstay of the field of technology. We have already made the jump from "AI" being a movie from the early 2000s to something we take for granted as it sets our alarms for us on our iPhones. However, contrary to what we may believe, AI is still in a nascent stage, and there is still some way to go before robots completely take over the design industry. While humans spend a lot of time thinking through design solutions via a hybrid creative/logical thought process, AI is a hyper-logical system of decisions that leads to largely predictable goals. That being said, AI presents a set of possibilities for designers (still human) to make more informed, if not more sophisticated, decisions.
A new robot known as the Dominator has set a Guinness World Record for placing 100,000 dominos in just over 24 hours. Created by YouTuber and former NASA engineer Mark Rober, the Dominator is the result of more than five years of work. Rober had help from two freshmen from Stanford University and a Bay Area software engineer in creating the googly-eyed robot. The group programmed more than 14,000 lines of code, and outfitted it with components like omnidirectional wheels and 3D-printed funnels to create what Rober says is a "friendly robot that's super good at only one thing: setting up a butt-ton of dominos really, really fast." Up against professional domino artist Lily Hevesh, the Dominator used its ability to lay down 300 tiles all at once to work about 10 times faster than a human. It took the robot about two hours to put down over 9,000 dominos.
Simulation systems have become essential to the development and validation of autonomous driving (AD) technologies. The prevailing state-of-the-art approach for simulation uses game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (the assets for simulation) remains a manual task that can be costly and time consuming. In addition, CG images still lack the richness and authenticity of real-world images, and using CG images for training leads to degraded performance. Here, we present our augmented autonomous driving simulation (AADS). Our formulation augments real-world pictures with a simulated traffic flow to create photorealistic simulation images and renderings. More specifically, we used LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generated plausible traffic flows for cars and pedestrians and composed them into the background. The composite images could be resynthesized with different viewpoints and sensor models (camera or LiDAR). The resulting images are photorealistic, fully annotated, and ready for training and testing of AD systems from perception to planning. We explain our system design and validate our algorithms with a number of AD tasks from detection to segmentation and prediction. Compared with traditional approaches, our method offers scalability and realism. Scalability is particularly important for AD simulations, because we believe that real-world complexity and diversity cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility of a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation.
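At the heart of the augmentation step is compositing: simulated agents rendered with a mask are blended over a real background image. The toy sketch below shows only that alpha-blending core under simplified assumptions; the actual AADS pipeline additionally handles lighting, occlusion, and viewpoint resynthesis, which this example omits.

```python
import numpy as np

# Toy alpha-compositing sketch of the augmentation step: blend a rendered
# (simulated) agent over a real-world background using a binary mask.

background = np.full((4, 4, 3), 100, dtype=np.float32)  # stand-in for a real photo
rendered = np.zeros((4, 4, 3), dtype=np.float32)
rendered[1:3, 1:3] = 255.0                              # simulated vehicle pixels
alpha = np.zeros((4, 4, 1), dtype=np.float32)
alpha[1:3, 1:3] = 1.0                                   # mask of the rendered agent

# Standard over-compositing: foreground where the mask is 1, background elsewhere.
composite = alpha * rendered + (1.0 - alpha) * background
print(composite[2, 2, 0], composite[0, 0, 0])  # 255.0 inside the mask, 100.0 outside
```

Because the simulated agents carry exact masks and poses, every composited image comes with free ground-truth annotations, which is what makes the resulting data "ready for training and testing".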
Professional drone pilots need to consider that they will be generating huge amounts of data in the form of photos and video. High-quality images, along with 4K and even 5.4K video, take up an enormous amount of space, and if you don't plan for it right at the start, you'll quickly be swamped by it. I've been a pro-am photographer for years and know just how quickly gigabytes can fill up, but even that didn't prepare me for getting into drone photography and videography. There are two aspects to handling the photos and video once they have been captured onto high-quality microSD cards (I only use SanDisk Pro or Extreme Pro cards from reputable suppliers -- cheap cards can't handle the data speeds needed for 4K and 5.4K, and fake cards are hugely unreliable). The first is ingesting the data off the cards, and the second is storage.
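A quick back-of-envelope calculation shows why footage fills up storage so fast. The bitrates below are illustrative assumptions only; actual rates depend on the drone, codec, and recording profile.

```python
# Convert a video bitrate (megabits/second) into gigabytes per hour of footage:
# megabits/s * 3600 s/h, divided by 8 bits/byte, divided by 1000 MB/GB.

def gb_per_hour(bitrate_mbps):
    """Storage consumed per hour of recording at the given bitrate."""
    return bitrate_mbps * 3600 / 8 / 1000

# Example bitrates (assumed figures, not manufacturer specs).
for label, mbps in [("4K at ~100 Mbps", 100), ("5.4K at ~150 Mbps", 150)]:
    print(f"{label}: {gb_per_hour(mbps):.1f} GB per hour")
```

At an assumed 150 Mbps, a single hour of 5.4K footage consumes roughly 67 GB -- which is why both fast ingest and a storage plan matter from day one.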
Artificial intelligence (AI) is now on a mission to permeate every industry. From e-commerce and healthcare to travel and finance, AI has made its way into just about every type of industry. In fact, the adoption rate of AI has increased by more than 270%, according to Gartner, Inc. Moreover, 37% of businesses of all types are now using AI-driven technologies such as natural language processing, predictive analytics, machine learning and robotic process automation. Therefore, if you're still not using AI in your business, it's highly likely your competitors already are, and very soon you'll be left behind. That's why we have put together this article to help you build your own AI team, eliminate your existing bottlenecks and achieve your business goals.
Would you like to stay up to date with the latest robotics & AI research from top roboticists? The IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems) recently released its plenary and keynote talks on the IEEE RAS YouTube channel. Abstract: Computational modeling of cognitive development has the potential to uncover the underlying mechanism of human intelligence as well as to design intelligent robots. We have been investigating whether a unified theory accounts for cognitive development and what computational framework embodies such a theory. This talk introduces a neuroscientific theory called predictive coding and shows how robots as well as humans acquire cognitive abilities using predictive processing neural networks.
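At its simplest, predictive coding means a network predicts its own input and adapts to reduce the prediction error. The toy sketch below is a minimal single-layer illustration of that idea under simplified assumptions; it is not the speaker's actual model, and real predictive-processing networks are hierarchical.

```python
import numpy as np

# Minimal predictive-coding sketch: a latent estimate z and generative
# weights W predict the input x; the prediction error drives both
# inference (updating z) and learning (updating W).

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 2))   # generative weights: latent -> input
x = rng.normal(size=4)                   # observed input
z = np.zeros(2)                          # latent estimate

for _ in range(200):
    pred = W @ z                          # top-down prediction of the input
    err = x - pred                        # bottom-up prediction error
    z += 0.1 * (W.T @ err)                # inference: move latent to reduce error
    W += 0.01 * np.outer(err, z)          # learning: adjust weights to reduce error

print(float(np.linalg.norm(x - W @ z)))   # residual error after adaptation
```

Both updates descend the same prediction-error objective, which is the core claim of predictive coding: perception and learning are two timescales of the same error-minimisation process.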
Cassie, the ostrich-inspired bipedal robot, has crossed a new milestone by traversing a distance of 5 kilometres in an outdoor environment in under an hour, untethered and on a single battery charge. According to its inventors, including robotics professor Jonathan Hurst from Oregon State University (OSU) in the US, Cassie is the first two-legged robot to use machine learning to control a running gait on outdoor terrain. One of the biggest challenges in designing bipedal robots, the researchers explained, is that running requires dynamic balancing – the ability to maintain balance while switching positions or otherwise being in motion. In the case of Cassie, whose knees bend like an ostrich's, they said the robot taught itself to run using a machine learning algorithm that helped it make countless subtle adjustments to stay upright while moving. "The Dynamic Robotics Laboratory students in the OSU College of Engineering combined expertise from biomechanics and existing robot control approaches with new machine learning tools," Mr Hurst said in a statement.