Four years ago, mathematician Vlad Voroninski saw an opportunity to remove some of the bottlenecks in the development of autonomous vehicle technology thanks to breakthroughs in deep learning. Now, Helm.ai, the startup he co-founded in 2016 with Tudor Achim, is coming out of stealth with an announcement that it has raised $13 million in a seed round that includes investment from A.Capital Ventures, Amplo, Binnacle Partners, Sound Ventures, Fontinalis Partners and SV Angel. More than a dozen angel investors also participated, including Berggruen Holdings founder Nicolas Berggruen, Quora co-founders Charlie Cheever and Adam D'Angelo, professional NBA player Kevin Durant, Gen. David Petraeus, Matician co-founder and CEO Navneet Dalal, Quiet Capital managing partner Lee Linden and Robinhood co-founder Vladimir Tenev, among others. Helm.ai will put the $13 million in seed funding toward advanced engineering and R&D and hiring more employees, as well as locking in and fulfilling deals with customers. Helm.ai is focused solely on the software.
Self-driving cars, meet your nemesis: the London roundabout. This strange piece of geometry, with tentacles shooting off at odd angles and cars nudging into impossible spaces, is one of the many headaches that will plague computer brains as the city's autonomous vehicle (AV) trials accelerate. In the US, Waymo and others boast fleets of self-driving cars that have racked up millions of miles of public road trials across more than 25 cities. Billions of dollars of investment are flowing into the AV units run by Uber and General Motors. Tesla is making bold promises about "robotaxis", and Ford plans to start building AVs in 2021.
There's still no completely safe and surefire method for locating unexploded ordnance (UXO) after a war is over, but researchers at Ohio State University have found a way to harness image processing algorithms, powered by machine learning, to study satellite imagery and locate hot spots where UXO are likely to be found. The researchers focused their efforts on a 100-square-kilometre area near Kampong Trabaek, Cambodia, which was the target of carpet-bombing missions carried out by the United States Air Force during the Vietnam War. The team was given access to declassified military data revealing that 3,205 bombs had been dropped in the area between 1970 and 1973. Determining exactly how many of those bombs didn't explode has grown harder and harder as, decades later, nature has slowly reclaimed the country's hardest-hit areas, hiding and obscuring the bomb craters that are counted to produce accurate estimates. The OSU study used a two-step process to come up with a more accurate estimate of how many bombs were still left in the area.
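The article doesn't spell out the two steps, but the first stage of a crater-hunting pipeline can be sketched as simple blob detection on imagery. Everything below is invented for illustration — the synthetic "satellite tile", crater radii, and thresholds bear no relation to the OSU team's actual method or data:

```python
import numpy as np
from scipy import ndimage

# Synthetic 200x200 "satellite tile": bright terrain with three dark
# circular depressions standing in for bomb craters (all values invented)
rng = np.random.default_rng(1)
img = np.full((200, 200), 0.8) + rng.normal(0, 0.02, (200, 200))  # sensor noise
ys, xs = np.mgrid[0:200, 0:200]
for cy, cx, r in [(50, 60, 8), (120, 140, 6), (160, 40, 10)]:
    img[(ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2] = 0.2  # crater shadow

# Step 1: threshold dark pixels to get candidate regions
mask = img < 0.5

# Step 2: group candidates into connected blobs and keep crater-sized ones
labels, n_blobs = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n_blobs + 1))
craters = [i for i, s in enumerate(sizes, start=1) if 50 <= s <= 500]
```

A real system would replace the fixed threshold and size filter with a learned classifier over many spectral and texture features, which is where the machine learning the researchers describe comes in.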
A new study shows that an artificial intelligence (AI) method that fuses medically relevant information enables critical circulatory failure to be predicted in the intensive care unit (ICU) several hours before it occurs. Developed at the Swiss Federal Institute of Technology (ETH; Zurich, Switzerland) and Bern University Hospital (Inselspital; Switzerland), the early-warning platform integrates measurements from multiple systems using a high-resolution database that holds 240 patient-years of data. For the study, the researchers used anonymized data from 36,000 admissions to ICUs, and were able to show that just 20 of these variables, including blood pressure, pulse, various blood values, the patient's age, and the medications administered, were sufficient to make accurate predictions. In a trial run of the algorithms developed, they were able to predict 90% of circulatory-failure events, with 82% of them identified more than two hours in advance. On average, the system raised 0.05 alarms per patient per hour.
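As a rough illustration of the kind of model such a platform might use (the actual ETH/Inselspital system is not described in this article), here is a minimal logistic-regression sketch on synthetic vital-sign data. The variable ranges, the label rule, and every constant are invented; the point is only to show how recall and alarm rate, the two metrics quoted above, are computed:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Synthetic stand-ins for a few of the ~20 monitored ICU variables
map_bp  = rng.normal(75, 10, n)    # mean arterial pressure (mmHg), invented
lactate = rng.normal(1.5, 0.5, n)  # blood lactate (mmol/L), invented
age     = rng.normal(65, 12, n)    # age; acts as a noise feature here

# Hypothetical ground truth: failure when pressure is low and lactate high
risk = (75 - map_bp) / 10 + (lactate - 1.5) / 0.5
y = (risk > 1.0).astype(float)

feats = np.column_stack([map_bp, lactate, age])
feats = (feats - feats.mean(0)) / feats.std(0)   # standardize
X = np.column_stack([np.ones(n), feats])         # add intercept

# Plain logistic regression fit by gradient descent
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.5 * X.T @ (p - y) / n

pred = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30))) > 0.5
recall = (pred & (y == 1)).sum() / y.sum()  # share of failures caught
alarm_rate = pred.mean()                    # share of samples that alarm
```

The trade-off the study reports — catching 90% of events at only 0.05 alarms per patient per hour — corresponds to tuning the decision threshold along exactly this recall-versus-alarm-rate curve.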
Models and algorithms for analyzing complex networks are widely used in research and affect society at large through their applications in online social networks, search engines, and recommender systems. According to a new study, however, one widely used algorithmic approach for modeling these networks is fundamentally flawed, failing to capture important properties of real-world complex networks. "It's not that these techniques are giving you absolute garbage. They probably have some information in them, but not as much information as many people believe," said C. "Sesh" Seshadhri, associate professor of computer science and engineering in the Baskin School of Engineering at UC Santa Cruz. Seshadhri is first author of a paper on the new findings published in Proceedings of the National Academy of Sciences.
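The flaw at issue concerns what survives when a network is compressed into a low-dimensional representation. As a toy numpy illustration (this is an assumption about the paper's framing, not its actual analysis): a sparse graph made of disjoint triangles loses most of its triangles under a rank-2 spectral approximation, roughly what a 2-dimensional embedding can retain.

```python
import numpy as np

# 5 disjoint triangles: a sparse graph that is nonetheless triangle-rich
n_tri = 5
N = 3 * n_tri
A = np.zeros((N, N))
for t in range(n_tri):
    i = 3 * t
    for a, b in [(i, i + 1), (i + 1, i + 2), (i, i + 2)]:
        A[a, b] = A[b, a] = 1.0

def triangles(M):
    # Number of triangles via the trace of the cubed adjacency matrix
    return np.trace(M @ M @ M) / 6.0

exact = triangles(A)  # 5.0

# Rank-2 approximation: keep only the top two eigenpairs
vals, vecs = np.linalg.eigh(A)
top = np.argsort(vals)[-2:]
A2 = (vecs[:, top] * vals[top]) @ vecs[:, top].T
approx = triangles(A2)  # far fewer triangles survive
```

The approximation preserves the dominant spectral structure yet undercounts triangles badly, which is the flavor of "some information, but not as much as many people believe" that Seshadhri describes.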
The COVID-19 pandemic is accelerating an automated future that was already on its way. It serves as a wake-up call to all AI, robotics, and driverless-car startups: stop building eye-dazzling demos and talking about the future possibility of general-purpose AI. Instead, focus on deploying real-world solutions that can run 24 hours a day with minimal human intervention and deliver true value to users. Thousands of Americans have started working from home amid the current pandemic. Retailers have struggled with supply while nervous consumers hoard everything from toilet paper to hand soap.
Facebook announced that it is releasing DeepFovea, a new state-of-the-art AI-powered foveated rendering system. Engineers at Facebook Reality Labs have come up with an imagery assistant that creates a "plausible peripheral image" rather than rendering the actual peripheral imagery, which in reality is hazy and unfocused while the gaze is fixed elsewhere. The technique, called Foveated Reconstruction, compresses the pixels of an RGB (red, green, blue) video stream by a factor of 14 without compromising perceived quality, producing output that is realistic and gaze-contingent. DeepFovea is one of the first generative adversarial networks (GANs) able to produce natural video sequences, say the Facebook developers of the technology. "DeepFovea can decrease the amount of compute resources needed for rendering by as much as 10-14x while any image differences remain imperceptible to the human eye," according to Facebook.
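The core idea of foveated sampling can be sketched without any of Facebook's actual code: shade pixels densely near the gaze point and sparsely in the periphery, then let a reconstruction network (the GAN, in DeepFovea's case) fill in the missing pixels. The falloff curve and every constant below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256
gaze_y, gaze_x = 128, 128  # assumed gaze point at the frame center

ys, xs = np.mgrid[0:H, 0:W]
ecc = np.hypot(ys - gaze_y, xs - gaze_x)  # eccentricity: distance from gaze

# Sampling density: full at the fovea, falling off into the periphery
# (the 20-pixel falloff scale and 2% floor are invented parameters)
density = np.clip(1.0 / (1.0 + ecc / 20.0), 0.02, 1.0)
sampled = rng.random((H, W)) < density  # pixels the renderer actually shades

compression = sampled.size / sampled.sum()   # rendering-work reduction
foveal_rate = sampled[ecc < 15].mean()       # dense near the gaze
peripheral_rate = sampled[ecc > 100].mean()  # sparse in the periphery
```

Only the `sampled` pixels would be rendered at full cost; everything else is hallucinated by the reconstruction network, which is where the 10-14x compute saving Facebook quotes comes from.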
Nanostructured layers boast countless potential properties, but how can the most suitable one be identified without long-term experiments? A team from the Materials Discovery Department at Ruhr-Universität Bochum (RUB) has ventured a shortcut: using a machine learning algorithm, the researchers were able to reliably predict the properties of such a layer. Their report was published in the new journal Communications Materials on 26 March 2020. During the manufacture of thin films, numerous control variables determine the condition of the surface and, consequently, its properties. Relevant factors include the composition of the layer as well as process conditions during its formation, such as temperature.
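As a toy illustration of the general approach — mapping process variables to a film property with a fitted model — here is a least-squares regression on synthetic data. The variables, the property, and the underlying formula are all invented; the RUB team's actual model and dataset are not described in this article:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Invented process variables for a hypothetical sputtered thin film
cr_frac = rng.uniform(0.1, 0.9, n)  # metal fraction in the layer
temp = rng.uniform(300, 700, n)     # deposition temperature (K)

# Invented "measured" property with an interaction effect plus noise
hardness = (5 + 8 * cr_frac + 0.004 * temp
            + 6 * cr_frac * (temp / 700) + rng.normal(0, 0.2, n))

# Fit a linear model including the composition-temperature interaction
X = np.column_stack([np.ones(n), cr_frac, temp, cr_frac * temp])
coef, *_ = np.linalg.lstsq(X, hardness, rcond=None)

pred = X @ coef
r2 = 1 - ((hardness - pred) ** 2).sum() / \
         ((hardness - hardness.mean()) ** 2).sum()
```

Once such a surrogate model is trained on experimental samples, candidate process settings can be screened computationally instead of depositing and measuring every layer — the shortcut the RUB team describes.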
Roundup Let's get cracking with some machine-learning news. Starsky Robotics is no more: Self-driving truck startup Starsky Robotics has shut down after running out of money and failing to raise more funds. CEO Stefan Seltz-Axmacher bid a touching farewell to his startup, founded in 2016, in a Medium post this month. He was upfront and honest about why Starsky failed: "Supervised machine learning doesn't live up to the hype," he declared. Neural networks only learn to pick up on certain patterns after they are shown millions of training examples.
Miso Robotics, a startup developing robots that can perform basic cooking tasks in commercial kitchens, today announced that it has deployed new tools to its platform in CaliBurger restaurants as part of an expanded approach with CaliGroup intended to improve safety and health standards. The hope is to minimize the threat of infection for patrons and delivery workers during the COVID-19 pandemic, which has sickened hundreds of thousands of people worldwide. In the coming weeks, in partnership with payment provider PopID, Miso will install a thermal-based screening device in a CaliBurger location in Pasadena, California, that attaches to doors to measure the body temperatures of people attempting to enter the restaurant, along with Miso's Flippy robot in the kitchen, to address health concerns. Before entering, staff, delivery drivers, and guests will have to scan their faces, and if the device sensor detects that a person has a fever, they won't be allowed to enter the building. Miso says that store owners will be able to opt into text alerts notifying them when someone at the door has a temperature reading in line with health and safety standards, at which point employees will be able to open the door manually.