Synthetic visual data can provide practically infinite diversity and rich labels, while avoiding ethical issues around privacy and bias. However, for many tasks, current models trained on synthetic data generalize poorly to real data. The task of 3D human pose estimation is a particularly interesting example of this sim2real problem, because learning-based approaches perform reasonably well given real training data, yet labeled 3D poses are extremely difficult to obtain in the wild, limiting scalability. In this paper, we show that standard neural-network approaches, which perform poorly when trained on synthetic RGB images, can perform well when the data is pre-processed to extract cues about the person's motion, notably optical flow and the motion of 2D keypoints. Our results therefore suggest that motion can be a simple way to bridge a sim2real gap when video is available.
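As an illustrative sketch only (not the paper's actual pipeline), the 2D-keypoint motion cue mentioned above can be computed as the per-joint displacement between consecutive frames; the function name `keypoint_motion` and the toy coordinates are assumptions for this example.

```python
import numpy as np

def keypoint_motion(kp_prev, kp_next):
    """Frame-to-frame displacement of 2D keypoints as a motion cue.

    kp_prev, kp_next: (J, 2) arrays of 2D joint pixel coordinates
    in two consecutive video frames.
    Returns a (J, 2) array of per-joint motion vectors.
    """
    return np.asarray(kp_next, dtype=float) - np.asarray(kp_prev, dtype=float)

# Toy example: two joints (e.g. shoulder, wrist) where only the wrist moves.
prev = np.array([[100.0, 200.0], [150.0, 180.0]])
nxt  = np.array([[100.0, 200.0], [155.0, 160.0]])
motion = keypoint_motion(prev, nxt)  # shoulder stays put; wrist moves (5, -20)
```

Such displacement vectors (or dense optical flow) can then replace raw RGB input to the network, which is the core of the sim2real transfer described above.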
Existing state-of-the-art estimation systems can detect 2D poses of multiple people in images quite reliably. In contrast, 3D pose estimation from a single image is ill-posed due to occlusion and depth ambiguities. Assuming access to multiple cameras, or given an active system able to position itself to observe the scene from multiple viewpoints, reconstructing 3D pose from 2D measurements becomes well-posed within the framework of standard multi-view geometry. It is less clear, however, which set of viewpoints is informative for accurate 3D reconstruction, particularly in complex scenes where people are occluded by others or by scene objects. In order to address the view selection problem in a principled way, we here introduce ACTOR, an active triangulation agent for 3D human pose reconstruction.
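To make the multi-view geometry concrete, here is a minimal sketch of linear (DLT) triangulation of a single 3D point from two calibrated views; this is standard textbook machinery, not ACTOR's specific method, and the function name and toy camera setup are assumptions for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: (3, 4) camera projection matrices.
    x1, x2: (2,) pixel observations of the same point in each view.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X; stack them and solve A X = 0 via SVD.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: identity intrinsics, second camera shifted 1 unit along x,
# observing a point at (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x1 = np.array([0.0, 0.0])    # projection of (0, 0, 5) in view 1
x2 = np.array([-0.2, 0.0])   # projection of (0, 0, 5) in view 2
X = triangulate(P1, P2, x1, x2)
```

With noisy 2D detections, reconstruction accuracy depends strongly on which viewpoints are observed, which is precisely the view selection problem the agent is designed to address.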
Sophia, billed as the world's most advanced human-like robot, participated in the Timberlane Middle School science fair and Family Fun Night to promote STEM education last Saturday. It was Sophia's first time attending a school science fair as a guest, and her creator, Hanson Robotics, expressed gratitude to Hopewell Valley for the kind invitation and for showing innovative thinking by including Sophia in this year's activities. Sophia was created by combining innovations in science, engineering, and artistry. She is a framework for robotics and artificial intelligence ("AI") research, and an agent for exploring the human-to-robot experience in service and entertainment applications. Sophia has also become a much sought-after media personality, helping to advocate for AI research and the role of robotics and AI in our lives.
Artificial intelligence researchers at IBM have introduced a major upgrade to the famed Watson computer, allowing it to understand idioms and colloquialisms for the first time. IBM says the update makes it the first commercial AI system capable of identifying, understanding and analysing some of the most challenging aspects of the English language. Phrases like "hardly helpful" and "hot under the collar" are tricky for algorithms to spot, which has left AI unable to debate complex topics or hold nuanced conversations with humans. "Language is a tool for expressing thought and opinion, as much as it is a tool for information," said Rob Thomas, a general manager at IBM Data and AI. "This is why we believe that advancing our ability to capture, analyse, and understand more from language with NLP will help transform how businesses utilise their intellectual capital that is codified in data."
Hardly a week goes by without a report announcing the end of work as we know it. In 2013, Oxford University academics Carl Frey and Michael Osborne were the first to capture this anxiety in a paper titled: "The Future of Employment: How susceptible are jobs to computerisation?". They concluded 47% of US jobs were threatened by automation. Since then, Frey has taken multiple opportunities to repeat his predictions of major labour market disruptions due to automation. In the face of threats to employment, some progressive thinkers advocate jettisoning our work ethic and building a world without work.
So I've been reading a lot about AI and the responsibility it involves. We hear a lot about the potential for AI to help people and society, but also a lot about its potential for harm. I think it's because we're starting to see some of the ramifications of these systems that we're putting out in the real world. Hollywood movies and science fiction novels show AI as human-like robots that take over the world, but the current evolution of AI technologies isn't that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry.
As artificial intelligence continues to mature, we are seeing a corresponding growth in the sophistication of humanoid robots and in applications for digital human beings across many aspects of modern-day life. To help you see the possibilities, we have pulled together some of the best examples of humanoid robots and where you might see digital humans in your everyday life today. Even though the earliest form of humanoid was created by Leonardo da Vinci in 1495 (a mechanical armored suit that could sit, stand and walk), today's humanoid robots are powered by artificial intelligence and can listen, talk, move and respond. They use sensors and actuators (motors that control movement) and have features that are modeled after human parts. Whether they are structurally similar to a male (called an android) or a female (a gynoid), it's a challenge to create realistic robots that replicate human capabilities.
VARANASI: The residents of Varanasi will have a brush with an amazing creation of science as they get a chance to meet the humanoid robot Sophia at IIT-BHU's annual techno-management fest, Technex, being organised from February 14 to 16. Dean of students Prof B N Rai told reporters on Wednesday, "A special guest talk by the only humanoid robot, Sophia, would be a big draw at the fest this year." Interestingly, Sophia, described as the first robot capable of expressing human-like emotions, a quality that distinguishes her from other humanoid robots, was activated on February 14, 2016, and will hence be celebrating her fourth birthday at Technex, in a way. Sophia, developed by Hong Kong-based Hanson Robotics, became the first robot to receive citizenship of any country when she became a Saudi Arabian citizen in October 2017. She can display more than 60 facial expressions.
When the Indian Space Research Organisation (ISRO) sends its first astronaut into space, it won't have to worry about building her a spacesuit. Vyommitra is a half-humanoid robot that ISRO plans to send to space this December as part of its bid to successfully land an unmanned spacecraft on the moon. In September, the space agency tried--and failed--to touch down on the lunar surface when its Vikram lander experienced a braking problem. If Vikram had landed safely, India would have been the fourth country to land on the moon, following Russia, the U.S., and China. This time around, as part of India's next space mission, Vyommitra will sit in the Gaganyaan spacecraft, which is equipped to fit up to three humans.
Neon is built upon the Core R3 platform. In general, a machine learning algorithm needs lots of training data to recognize an object or to perform any other intelligent task. The Core R3 platform is pre-trained on human behaviours such as looks, gestures, and movements. As a result, Core R3 is a powerful engine that can customize a Neon to a particular human with far less interaction and far less training data. It can also generate an original-looking Neon from that data.