"Much of our work in robotics is focused on self-supervised learning, in which systems learn directly from raw data so they can adapt to new tasks and new circumstances," a team of researchers from FAIR (Facebook AI Research) wrote in a blog post. "In robotics, we're advancing techniques such as model-based reinforcement learning (RL) to enable robots to teach themselves through trial and error using direct input from sensors." Specifically, the team has been trying to get a six-legged robot to teach itself to walk without any outside assistance. "Generally speaking, locomotion is a very difficult task in robotics and this is what it makes it very exciting from our perspective," Roberto Calandra, a FAIR researcher, told Engadget. "We have been able to design algorithms for AI and actually test them on a really challenging problem that we otherwise don't know how to solve."
We're big fans of keeping track of what is going on in the developer community. So, what does the technical world look like today? And more importantly, where is it going? SlashData's Developer Economics global survey reached more than 21,000 developers from around the world and focused on four major themes: AI, serverless, augmented and virtual reality, and programming languages. According to their research, machine learning and AI are poised to fuel a new wave of innovation.
It's difficult to renovate a bathroom. There are a thousand things to do, all of which a typical customer has never done before. This problem is compounded by the imagination gap. When a customer views a product, it can be difficult for them to picture how that product will look in their bathroom. Is there a good place for that product?
Google AI yesterday released its latest research result in speech-to-speech translation, the futuristic-sounding "Translatotron." Billed as the world's first end-to-end speech-to-speech translation model, Translatotron promises the potential for real-time cross-linguistic conversations with low latency and high accuracy. Humans have always dreamed of a voice-based device that could enable them to simply leap over language barriers. While advances in deep learning have contributed to highly improved accuracy in speech recognition and machine translation, smooth conversations between different language speakers have remained hampered by unnatural pauses during machine processing. Google's wireless earbuds, Pixel Buds, released in 2017, boasted real-time speech translation, but users found the practical experience less than satisfying.
Facebook Inc.'s chief artificial intelligence scientist said the company is years away from being able to use software to automatically screen live video for extreme violence. Yann LeCun's comments follow the March livestream of the Christchurch mosque shootings in New Zealand. "This problem is very far from being solved," LeCun said Friday during a talk at Facebook's AI Research Lab in Paris. Facebook was criticised for allowing the Christchurch attacker to broadcast the shootings live without adequate oversight that could have resulted in quicker take-downs of the video. It also struggled to prevent other users from re-posting the attacker's footage.
Amazon's cloud platform - Amazon Web Services, or AWS - has been operational for the past 13 years and now offers more than 165 services spanning compute, storage, databases, networking, analytics, robotics, machine learning (ML), artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid cloud, virtual and augmented reality (VR and AR), media, and application development, deployment, and management. YourStory: How are new technologies at AWS helping clients' businesses? Olivier Klein: When we talk about technologies, it is not a specific list; it is about technologies that help redefine customer experiences or improve overall operational efficiency. A big chunk of customer experience work goes into the data analytics, artificial intelligence (AI), and machine learning (ML) space. This in turn is used, for example, in understanding voice or speech better.
Voice commerce is transforming the way travellers search, browse, and buy online. Travel brands have been focusing on the utility of voice features and assistants, keenly evaluating which aspects of a trip are tedious and how voice can make the experience better. "We have witnessed great advancement in the manner in which one can communicate with voice assistants, their context being understood and being helped out (in various tasks)," said Rodrigo Sánchez Prandi, VP Product, dLocal, who added that the e-commerce sector is witnessing progress on the payments side, too. So, from checking a flight's status or its amenities such as Wi-Fi to checking in, one can also buy trip essentials by voice. Companies like Google acknowledge that designing conversations is quite tricky, as human conversations are complicated.
Not all tech billionaires are advocates of artificial intelligence (AI). Some are so worried about the effects AI is having on society that they are spending their billions trying to monitor it. This, in turn, has created a new frontier in philanthropy. For Pierre Omidyar, the founder of eBay, AI is such a concern that last year he set up Luminate, a London-based organization that advocates for civic empowerment, data and digital rights, financial transparency, and independent media.
Are we designing AI, or is AI designing us? What does AI have to do with art, which is innately a human quality? Harshit takes us on a journey through his experiments with art and AI, and the results are fascinating. Harshit Agrawal is fondly known as the first AI artist from India. He is also an HCI researcher, poet, and traveler who builds tools to study how technology can enhance human creative expression.
Large-scale search advertising systems face many challenges in the Natural Language Understanding and Computer Vision areas, such as query and ads understanding, semantic representation, fast ads retrieval, relevance modeling, product image understanding, and product detection. In his insightful talk, Bruce Zhang from Microsoft AI & Research will walk us through these various challenges and share how the Microsoft team has developed and deployed cutting-edge technologies, based on deep learning and ads domain data, in their Ads stack to improve ad quality and increase revenue per thousand searches (RPM). In addition, he will share deep learning techniques used in Bing Ads, such as query/ads semantic embedding models and a KNN search service, a query tagging model, generative models for query rewriting, a DNN-based query-keyword relevance model, visual product recognition models, and product detection and description generation models for Product Ads. Who is this talk for? If your work touches machine learning, this talk is for you.
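The embedding-plus-KNN retrieval pattern mentioned above is easy to sketch: embed the query and every ad into a shared vector space, then return the nearest ads by cosine similarity. The hand-picked 3-d vectors and ad names below are purely illustrative assumptions (a real system would use embeddings from a trained DNN and an approximate-nearest-neighbor index, not a linear scan):

```python
import numpy as np

# Toy "semantic embeddings" for a handful of ads.
ad_embeddings = {
    "running shoes sale":   np.array([0.9, 0.1, 0.0]),
    "trail sneakers":       np.array([0.8, 0.2, 0.1]),
    "laptop deals":         np.array([0.1, 0.9, 0.2]),
    "noise-cancel headset": np.array([0.0, 0.3, 0.9]),
}

def knn_ads(query_vec, k=2):
    """Return the k ads whose embeddings are most cosine-similar
    to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted(ad_embeddings.items(),
                    key=lambda kv: cos(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# A query embedded near the "footwear" region of the toy space.
results = knn_ads(np.array([0.85, 0.15, 0.05]))
print(results)  # the two footwear ads rank first
```

The design point this illustrates is why embedding retrieval matters for ads: the query never has to share literal keywords with an ad; proximity in the learned vector space stands in for semantic relevance.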