Emulators speed up simulations, such as this NASA aerosol model that shows soot from fires in Australia. Modeling immensely complex natural phenomena such as how subatomic particles interact or how atmospheric haze affects climate can take many hours on even the fastest supercomputers. Emulators, algorithms that quickly approximate these detailed simulations, offer a shortcut. Now, work posted online shows how artificial intelligence (AI) can easily produce accurate emulators that can accelerate simulations across all of science by billions of times. "This is a big deal," says Donald Lucas, who runs climate simulations at Lawrence Livermore National Laboratory and was not involved in the work.
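The idea behind an emulator can be sketched in a few lines: run the expensive simulation a modest number of times, fit a cheap surrogate to those runs, then query the surrogate instead. The "simulation" below is a toy stand-in, and the polynomial fit is a deliberately simple surrogate; the emulators described in the work are learned models such as neural networks.

```python
import numpy as np

def expensive_simulation(x):
    # Toy stand-in for a costly physics simulation: a smooth nonlinear response.
    return np.sin(3 * x) + 0.5 * x**2

# Run the "simulation" a modest number of times to collect training data.
train_x = np.linspace(-1, 1, 50)
train_y = expensive_simulation(train_x)

# Fit a cheap surrogate; a degree-10 polynomial stands in here for the
# learned models that real emulators use.
emulator = np.poly1d(np.polyfit(train_x, train_y, deg=10))

# The surrogate tracks the simulation closely on unseen inputs, while
# costing only a polynomial evaluation per query.
test_x = np.linspace(-1, 1, 1000)
max_err = np.max(np.abs(emulator(test_x) - expensive_simulation(test_x)))
```

The speedup in practice comes from replacing hours of supercomputer time per run with a single cheap model evaluation, at the cost of a bounded approximation error like `max_err` above.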
AI has been one of the biggest buzzwords in the technology industry over the past few years, given its immense potential to transform our world. As more tasks are performed with AI, enterprise adoption of this nascent technology is evolving rapidly. From business planning and forecasting to predictive maintenance and customer service, AI is now an intrinsic part of the enterprise ecosystem. The potential of AI is vast, but certain barriers are holding traditional large enterprises back from embracing it in a big way: the absence of a clear strategy, a lack of data, skills shortages, and functional silos within the organization.
Patients with an acute medical illness have an increased risk of venous thromboembolism (VTE) during hospitalization that persists following discharge.1, 2 Several randomized trials have demonstrated the efficacy of VTE prophylaxis with direct oral anticoagulants (DOACs) compared to low‐molecular‐weight heparin for 6 to 14 days.3-5 Based on the results of the APEX trial, the US Food and Drug Administration has licensed betrixaban for first‐line thromboprophylaxis in acute medically ill patients at high risk for VTE. The identification of these high‐risk patients may be determined clinically or by use of risk assessment models (RAMs) that rely on integer‐based scoring systems of known risk factors.6 These RAMs demonstrated modest performance in validation data sets.11-13 Machine learning algorithms are constructed to search for patterns in data that provide maximum predictive ability.14, 15 These learning methods have demonstrated superiority to traditional diagnostic and prognostic tools in various domains.16-19
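The contrast drawn here, unit-weighted integer scores versus models that learn weights from data, can be illustrated with a toy example. The risk factors, effect sizes, and thresholds below are illustrative inventions, not any validated RAM or the methods of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort of binary risk factors (illustrative only):
# e.g. prior VTE, immobility, active cancer, age > 75.
n = 5000
X = rng.integers(0, 2, size=(n, 4)).astype(float)
true_w = np.array([2.0, 1.2, 1.5, 0.8])        # made-up "true" effect sizes
p = 1.0 / (1.0 + np.exp(-(X @ true_w - 2.5)))
y = (rng.random(n) < p).astype(float)          # simulated VTE outcomes

# Integer-based RAM: unit weights summed, then thresholded.
ram_pred = X.sum(axis=1) >= 2

# Learned alternative: logistic regression fitted by gradient descent,
# which recovers unequal weights instead of assuming unit weights.
w, b = np.zeros(4), 0.0
for _ in range(2000):
    q = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (q - y)) / n
    b -= 0.5 * np.mean(q - y)
ml_pred = (X @ w + b) > 0

ram_acc = np.mean(ram_pred == (y == 1))
ml_acc = np.mean(ml_pred == (y == 1))
```

The learned model's advantage in this sketch comes entirely from weighting risk factors by their actual contribution rather than counting them equally, which is the pattern-finding flexibility the paragraph attributes to machine learning methods.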
How many times have you seen a video badly cropped when you watch it on a mobile device? It's frustrating and annoying, and most of the time there's not much you can do about it. To address this problem, Google's AI team has developed an open-source solution, AutoFlip, that reframes video to suit the target device or aspect ratio (landscape, square, portrait, etc.). AutoFlip works in three stages: shot (scene) detection, video content analysis, and reframing. The first stage is shot detection, in which the model looks for the point just before a cut or a jump from one scene to another.
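The shot-detection stage can be sketched with a simple frame-differencing heuristic: flag a cut wherever consecutive frames change drastically. This is a minimal stand-in, not AutoFlip's actual detector, which uses more robust signals.

```python
import numpy as np

def detect_cuts(frames, threshold=0.3):
    """Flag frame indices where the mean absolute pixel change between
    consecutive frames suggests a hard cut (normalized to 0..1)."""
    cuts = []
    for i in range(1, len(frames)):
        prev = frames[i - 1].astype(float)
        curr = frames[i].astype(float)
        diff = np.mean(np.abs(curr - prev)) / 255.0
        if diff > threshold:
            cuts.append(i)
    return cuts

# Toy "video": six dark frames, then an abrupt switch to bright frames,
# which the detector reports as a single cut at frame 6.
dark = np.zeros((4, 4), dtype=np.uint8)
bright = np.full((4, 4), 200, dtype=np.uint8)
frames = [dark] * 6 + [bright] * 6
cut_points = detect_cuts(frames)
```

Once cuts are known, each shot can be analyzed and reframed independently, which is why detection comes first in the pipeline.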
A research group from Politecnico di Milano has developed a new computing circuit that can execute advanced operations, typical of neural networks for artificial intelligence, in a single operation. The circuit's performance in terms of speed and energy consumption paves the way for a new generation of artificial intelligence computing accelerators that are more energy efficient and more sustainable on a global scale. The study was recently published in the prestigious journal Science Advances. Recognizing a face or an object, or correctly interpreting a word or a musical tune, are operations that are possible today on the most common electronic gadgets, such as smartphones and tablets, thanks to artificial intelligence. For this to happen, complicated neural networks need to be appropriately trained, which is so energy-intensive that, according to some studies, the carbon footprint of training a complex neural network can equal the lifetime emissions of five cars.
GPS and similar navigational systems rely on orbiting satellites to triangulate users' locations, a process that's inherently susceptible to inaccuracy due to the vast distances between satellites and moving users on the ground. But Apple thinks it can improve location accuracy by applying machine learning to Kalman estimation filters, a just-published patent application reveals. The basic concept is that while navigation systems generally rely on live location-determining pings from multiple satellites -- a process that can take precious time, during which the user may move -- a machine learning model can be trained to provide interim location estimates for the user based on previously gathered data from the environment. For instance, a given city block might have fairly constant satellite signal reflection characteristics that commonly introduce errors into user location readings, so machine learning could counterbalance the inaccuracies. While GPS is the best-known satellite location system, Apple's application goes beyond it to cover various global navigation satellite systems (GNSS), assuming in each case that both the raw satellite triangulation and its machine learning-corrected version will be handed off to a Kalman (linear quadratic estimation) filter.
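The hand-off described can be sketched with a textbook one-dimensional Kalman filter that fuses noisy position fixes into a smoothed estimate. The per-block bias correction standing in for the learned model is a made-up constant offset, not Apple's method.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """One-dimensional Kalman filter over a sequence of noisy position
    fixes; q is process noise, r is measurement noise variance."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                    # predict: uncertainty grows by process noise
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x += k * (z - x)          # update estimate with the residual
        p *= (1 - k)              # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# A learned model could subtract a per-block multipath bias before filtering;
# the fixed 2.0 m offset here is a hypothetical stand-in for that correction.
raw_fixes = [12.1, 11.8, 12.4, 12.0, 11.9]   # noisy positions, metres
corrected = [z - 2.0 for z in raw_fixes]
smoothed = kalman_1d(corrected)
```

Applying the learned correction before the filter, as the application suggests, means the Kalman stage only has to smooth zero-mean noise rather than a systematic, location-dependent bias.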
Heralded as an easy fix for health services under pressure, data technology is marching ahead unchecked. But is there a risk it could compound inequalities? When Adewole Adamson received a desperate call at his Texas surgery one afternoon in January 2018, he knew something was up. The call was not from a patient, but from someone in Maryland who wanted to speak to the dermatologist and assistant professor in internal medicine at Dell Medical School in the University of Texas about black people and skin cancer. Over the next few weeks, over a series of phone calls, Adamson would learn a lot about the caller.
The lab has designed an easily reconfigurable room, the size of a cramped studio, to be the staging ground for all 14 apartment variations. It has also re-created identical virtual replicas in Unity, a popular video-game engine, along with 75 other configurations, all of which have been open-sourced online. Together, these 89 configurations will offer realistic simulation environments for teams around the world to train and test their navigation algorithms. The environments also come pre-loaded with models of AI2's robots and mirror real-world physics, such as gravity and light reflections, as closely as possible.
The maritime and scientific communities have set themselves the ambitious target of 2030 to map Earth's entire ocean floor. You can argue about the numbers, but something in the region of 80% of the global seafloor is either completely unknown or has had no modern measurement applied to it. The international GEBCO 2030 project was set up to close the data gap and has announced a number of initiatives to get it done. What's clear, however, is that much of this work will have to leverage new technologies, or at the very least make the most of existing ones. Which makes the news from Ocean Infinity, that it's creating a fleet of ocean-going robots, all the more interesting.
Doctors have used a robot to perform extremely delicate surgical operations on breast cancer patients in the first human trial of the technology. Eight women had the robot-assisted procedure at Maastricht University Medical Center, in the Netherlands, to alleviate a common complication of breast cancer surgery. The robot helped a specialist surgeon divert thread-like lymphatic vessels, as narrow as 0.3mm, around scar tissue in the patients' armpits, and connect them to nearby blood vessels. The operation, which requires immense care and precision, is offered to some breast cancer patients to reduce swelling in the arms that builds up when the lymphatic system cannot drain properly. Because the vessels are so small, surgeons need exceptionally steady hands to perform the operation well.