AAAI AI-Alert for Jul 5, 2022
When machine learning meets surrealist art meets Reddit, you get DALL-E mini
An image of babies doing parkour generated by DALL-E mini. DALL-E mini is the AI bringing to life all of the goofy "what if" questions you never asked: What if Voldemort was a member of Green Day? What if there was a McDonald's in Mordor? What if scientists sent a Roomba to the bottom of the Mariana Trench?
DeepMind's AI develops popular policy for distributing public money
Could artificial intelligence make better funding decisions than senators? A "democratic" AI system has learned how to develop the most popular policy for redistributing public money among people playing an online game. "Many of the problems that humans face are not merely technological, but require us to coordinate in society and in our economies for the greater good," says Raphael Koster at UK-based AI company DeepMind. "For AI to be able to help, it needs to learn directly about human values." The DeepMind team trained its artificial intelligence to learn from more than 4,000 people as well as from computer simulations in an online, four-player economic game.
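The mechanism-design idea behind the result, letting players effectively vote with their payoffs on competing redistribution rules, can be sketched in a few lines. The game parameters and the two mechanisms below are illustrative assumptions, not DeepMind's actual environment or learned policy.

```python
import random

# Toy four-player public investment game (illustrative assumptions only).
N_PLAYERS = 4
MULTIPLIER = 1.6  # pooled contributions are grown, then redistributed


def egalitarian(contributions, pot):
    # Everyone receives an equal share of the grown pot.
    return [pot / N_PLAYERS] * N_PLAYERS


def proportional(contributions, pot):
    # Payouts are proportional to what each player contributed.
    total = sum(contributions) or 1.0
    return [pot * c / total for c in contributions]


MECHANISMS = {"egalitarian": egalitarian, "proportional": proportional}


def play_round(endowments, mechanism):
    # Each player contributes a random fraction of their endowment.
    contributions = [e * random.random() for e in endowments]
    pot = MULTIPLIER * sum(contributions)
    payouts = mechanism(contributions, pot)
    # Final wealth = endowment kept back plus the redistributed share.
    return [e - c + p for e, c, p in zip(endowments, contributions, payouts)]


def vote(endowments, n_rounds=2000):
    # Tally which mechanism each simulated player ends up preferring.
    votes = {name: 0 for name in MECHANISMS}
    for _ in range(n_rounds):
        outcomes = {name: play_round(endowments, m) for name, m in MECHANISMS.items()}
        for i in range(N_PLAYERS):
            best = max(outcomes, key=lambda name: outcomes[name][i])
            votes[best] += 1
    return votes


if __name__ == "__main__":
    random.seed(0)
    # Unequal starting endowments are an illustrative choice.
    print(vote(endowments=[2.0, 4.0, 6.0, 8.0]))
```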
Using GPUs to Discover Human Brain Connectivity - Neuroscience News
Summary: Researchers developed a new GPU-based machine learning algorithm to help predict the connectivity of networks within the brain. A new GPU-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) can help scientists better understand and predict connectivity between different regions of the brain. The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation, or ReAl-LiFE, can rapidly analyse the enormous amounts of data generated from diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain. Using ReAl-LiFE, the team was able to evaluate dMRI data over 150 times faster than existing state-of-the-art algorithms. "Tasks that previously took hours to days can be completed within seconds to minutes," says Devarajan Sridharan, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study published in the journal Nature Computational Science.
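Fascicle evaluation of this kind is commonly framed as a large regularized non-negative least-squares problem: find non-negative weights for candidate fibers so that their combined predicted signal best explains the measured dMRI data. The snippet below is a minimal CPU sketch of that formulation using projected gradient descent; the penalty, step size, and synthetic data are illustrative assumptions, and the actual ReAl-LiFE implementation runs custom GPU kernels over vastly larger problems.

```python
import numpy as np

# Sketch of a LiFE-style fit: the measured dMRI signal y is modeled as M @ w,
# where column j of M is the predicted contribution of candidate fascicle j
# and w >= 0 are the fascicle weights. The L1 penalty here is an assumption.
def fit_fascicle_weights(M, y, l1=0.01, n_iters=5000):
    # Step size from the Lipschitz constant of the least-squares gradient.
    lr = 1.0 / (np.linalg.norm(M, 2) ** 2)
    w = np.zeros(M.shape[1])
    for _ in range(n_iters):
        grad = M.T @ (M @ w - y) + l1       # least-squares gradient plus L1 penalty
        w = np.maximum(w - lr * grad, 0.0)  # project back onto the non-negative orthant
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.random((200, 50))               # 200 measurements, 50 candidate fascicles
    true_w = np.zeros(50)
    true_w[[3, 17, 42]] = [1.0, 0.5, 2.0]   # only a few fascicles truly contribute
    y = M @ true_w + 0.01 * rng.standard_normal(200)
    w = fit_fascicle_weights(M, y)
    print("largest recovered weights at indices:", np.argsort(w)[-3:])
```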
Behind the scenes of Waymo's worst automated truck crash - TechCrunch - Channel969
The most serious crash so far involving a self-driving truck may have resulted in only moderate injuries, but it exposed how unprepared local authorities and law enforcement are to deal with the new technology. On May 5, a Class 8 Waymo Via truck operating in autonomous mode with a human safety operator behind the wheel was hauling a trailer northbound on Interstate 45 toward Dallas, Texas. At 3:11 p.m., just outside Ennis, the modified Peterbilt was traveling in the far right lane when a passing truck-and-trailer combo entered its lane. The driver of the Waymo Via truck told police that the other semi truck continued to move into the lane, forcing Waymo's truck and trailer off the roadway. She was later taken to a hospital for injuries that Waymo described in its report to the National Highway Traffic Safety Administration as "moderate."
Aurora's Autonomous Tests in Texas Keep Rolling
After lumbering through a gravel parking lot like a big blue bull, one of Aurora Innovation Inc.'s self-driving truck prototypes took a wide right turn onto a frontage road near Dallas. The steering wheel spun through the half-clasped hands of its human operator, whose touch may not be needed much longer. Fittingly for Texas, these Peterbilts are adorned with a sensor display above the windshield that looks much like a set of longhorns. This was the beginning of a 28-mile jaunt up and down Interstate 45 toward Houston in a truck with a computer for a brain, and cameras, radar and lidar sensors for eyes, capturing objects more than 437 yards out in all directions. The stakes for test drives like this one are incredibly high for the future of freight.
Brian Gerkey on the success of Open Robotics and ROS - Channel969
Welcome to Episode 84 of The Robot Report Podcast, which brings conversations with robotics innovators straight to you. Join us each week for discussions with leading roboticists, innovative robotics companies, and other key members of the robotics community. Our guest this week is Brian Gerkey, CEO and co-founder of Open Robotics and one of the creators of ROS. Brian tells us about the development and evolution of the Robot Operating System (ROS) and why open source software has played such a pivotal role in the growth of the robotics industry and in the acceleration of robotics research in university and corporate robot labs around the world. Now it's time to prepare for RoboBusiness and the Field Robotics Engineering Forum, which run October 19-20, 2022 in Santa Clara, Calif. If you would like to be a guest on an upcoming episode of the podcast, or if you have suggestions for future guests or segment ideas, contact Steve Crowe or Mike Oitzman.
EBay Uses Machine Learning to Refine Promoted Listings
Online marketplace eBay incorporated additional buying signals such as "Add to Watchlist," "Make Offer," and "Add to Cart" into its machine learning model to improve the relevance of recommended ad listings, based on the initial items searched for. Chen Xue goes into great detail in this recent article. EBay's Promoted Listings Standard (PLS) is a paid option for sellers. With one option, PLSIM, eBay's recommendation engines suggest sponsored items similar to something a potential buyer just clicked on. PLSIM is paid on a cost-per-acquisition (CPA) model, meaning the seller pays eBay only when a sale is made, which gives eBay a strong incentive to build the most effective model for promoting the best listings.
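Conceptually, adding signals like "Add to Watchlist" or "Add to Cart" means widening the feature set a relevance model sees for each candidate listing. The sketch below is a generic illustration of that idea with scikit-learn on made-up features and labels; it is not eBay's actual PLSIM model or feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic illustration of folding engagement signals into a relevance model.
# Feature names and data are invented; this is not eBay's production system.
rng = np.random.default_rng(0)
n = 5_000

title_similarity = rng.random(n)          # text similarity to the clicked seed item
price_ratio      = rng.random(n)          # candidate price relative to the seed item
added_to_watch   = rng.integers(0, 2, n)  # "Add to Watchlist" signal
made_offer       = rng.integers(0, 2, n)  # "Make Offer" signal
added_to_cart    = rng.integers(0, 2, n)  # "Add to Cart" signal

X = np.column_stack([title_similarity, price_ratio,
                     added_to_watch, made_offer, added_to_cart])

# Synthetic "was this listing purchased?" labels that reward the new signals.
logits = 2 * title_similarity - 1 + 0.8 * added_to_cart + 0.5 * made_offer + 0.3 * added_to_watch
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Rank candidate listings by predicted purchase probability.
scores = model.predict_proba(X)[:, 1]
print("top-ranked candidates:", np.argsort(scores)[::-1][:5])
```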
Sinequa adds a neural search function to boost its enterprise platform
Sinequa said its neural search function can answer natural language questions, thanks to four deep learning models it developed with Microsoft Azure and Nvidia teams. Enterprise search company Sinequa is adding a neural search option to its platform with the aim of giving customers improved accuracy and relevance. Sinequa said the new AI function is the first commercially available system to use four deep learning language models. Combined with the platform's natural language processing and semantic search abilities, Sinequa said this will lead to improved question-answering and search relevance. The Sinequa Search Cloud platform is designed to help employees find relevant information and insights from all enterprise sources in any language in the context of their work.
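Neural search generally means embedding documents and queries into the same vector space with a deep language model and retrieving by semantic similarity rather than keyword overlap. The sketch below shows that retrieval step with the open-source sentence-transformers library; Sinequa's own models and pipeline are proprietary and will certainly differ.

```python
from sentence_transformers import SentenceTransformer, util

# Generic dense-retrieval sketch: embed documents and a natural language
# question with a deep language model, then rank by cosine similarity.
# This illustrates the idea of neural search, not Sinequa's actual system.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "The quarterly sales report is stored on the finance team's shared drive.",
    "Employees can reset their VPN password through the IT self-service portal.",
    "The Berlin office closes at 6 p.m. on Fridays.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

question = "How do I change my VPN credentials?"
query_embedding = model.encode(question, convert_to_tensor=True)

# Cosine similarity between the question and every document.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(f"Best match (score {float(scores[best]):.2f}): {documents[best]}")
```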
AI's progress isn't the same as creating human intelligence in machines
Data-centric AI, on the other hand, began in earnest in the 1970s with the invention of methods for automatically constructing "decision trees" and has exploded in popularity over the last decade with the resounding success of neural networks (now dubbed "deep learning"). Data-centric artificial intelligence has also been called "narrow AI" or "weak AI," but the rapid progress over the last decade or so has demonstrated its power. Deep-learning methods, coupled with massive training data sets plus unprecedented computational power, have delivered success on a broad range of narrow tasks from speech recognition to game playing and more. The artificial-intelligence methods build predictive models that grow increasingly accurate through a compute-intensive iterative process. In previous years, the need for human-labeled data to train AI models was a major bottleneck in achieving success.
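Both model families the passage mentions can be fit in a few lines of scikit-learn, which also makes the "compute-intensive iterative process" concrete: the decision tree is built in a single greedy pass over the labeled data, while the small neural network improves gradually over many gradient-descent iterations. The dataset and hyperparameters below are arbitrary choices for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# A small human-labeled dataset: the kind of supervision the passage
# describes as a bottleneck for data-centric AI.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1970s-style data-centric AI: an automatically constructed decision tree.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Modern deep-learning-style model: a small neural network trained by
# iterative gradient descent, growing more accurate over many passes.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_train, y_train)

print("decision tree accuracy:", round(tree.score(X_test, y_test), 3))
print("neural network accuracy:", round(net.score(X_test, y_test), 3))
```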