Phantom Of The Operator: Self-Driving Tech's Slowing Timetable Creates Opening For This Monitoring And Guidance Startup

#artificialintelligence

A remote Phantom Auto operator monitors a Postmates delivery robot. The 2020s may yet be the decade of self-driving cars, but early predictions from automakers and tech developers including Tesla, Nissan, Nvidia and Ford that autonomous vehicles would be ready as soon as this year or next don't seem to be panning out. This week, auto supply giant Magna ended a tech alliance with Lyft on self-driving robo-taxis owing to a slower-than-anticipated timetable. But the billions of dollars poured into research and development of advanced sensors and computing over the past few years are being leveraged for nearer-term applications, including delivery robots and self-driving trucks, as well as autonomous warehouse, cleaning and security bots. And as those vehicles proliferate, there is a growing need to keep track of them, monitor their operations, provide remote guidance in some cases or even, in very limited circumstances, drive them remotely.


Unsupervised Distribution Learning for Lunar Surface Anomaly Detection

#artificialintelligence

In this work we show that modern data-driven machine learning techniques can be successfully applied to lunar surface remote sensing data to learn, in an unsupervised way, representations of the data distribution good enough to enable lunar technosignature and anomaly detection. In particular, we train an unsupervised distribution-learning neural network to find the Apollo 15 landing module in a test dataset, with no dataset-specific model or hyperparameter tuning. Sufficiently good unsupervised density estimation promises to enable myriad useful downstream tasks, including locating lunar resources for future space flight and colonization, finding new impact craters or lunar surface reshaping, and algorithmically deciding the importance of unlabeled samples to send back from power- and bandwidth-constrained missions. We show in this work that such unsupervised learning can be successfully done in the lunar remote sensing and space science contexts.
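The abstract doesn't specify the architecture, but one common way to instantiate unsupervised distribution learning for anomaly detection is a convolutional autoencoder whose reconstruction error approximates how far a patch lies from the learned data distribution. A minimal sketch along those lines (the patch size, layer widths and training details below are assumptions, not the paper's setup):

```python
# Sketch only: the paper trains an unsupervised distribution-learning model
# on lunar remote sensing imagery; this substitutes a convolutional
# autoencoder whose per-patch reconstruction error acts as an anomaly score.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

PATCH = 64  # assumed tile size for lunar surface patches

def build_autoencoder():
    inp = layers.Input(shape=(PATCH, PATCH, 1))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def anomaly_scores(train_patches, test_patches, epochs=10):
    """train_patches: ordinary terrain in [0, 1], shape (N, PATCH, PATCH, 1).
    Returns one score per test patch; high score = poorly modeled = anomalous."""
    ae = build_autoencoder()
    ae.fit(train_patches, train_patches, epochs=epochs, batch_size=128)
    recon = ae.predict(test_patches)
    # Per-patch mean squared reconstruction error as the anomaly score.
    return np.mean((test_patches - recon) ** 2, axis=(1, 2, 3))
```

Patches scoring above, say, the 99.9th percentile of scores on ordinary terrain would then be flagged as candidate anomalies or prioritized for downlink.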


Applying Deep Learning to Localization Microscopy

#artificialintelligence

Modern science requires modern technological solutions. As we prise the natural world apart in search of answers to ever more complex questions, we need to think in new ways about how we approach the problems we face. Several technologies developed over the past few years are pushing the boundaries of our scientific knowledge. As these technologies mature, scientists are looking for ways to use them in tandem, producing more accurate results and new approaches to modern scientific problems. Two such technologies that can be combined to produce a better understanding of biological systems are localization microscopy and deep learning.
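The piece doesn't detail how the two are combined; one published pattern (e.g., Deep-STORM) trains a fully convolutional network to map a single diffraction-limited fluorescence frame to an upsampled map whose peaks mark emitter positions. A minimal sketch under that assumption, with all sizes illustrative:

```python
# Sketch of a Deep-STORM-style localizer: a fully convolutional network maps
# one diffraction-limited fluorescence frame to an upsampled density map
# whose peaks mark single-emitter positions. All sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

UP = 4  # assumed upsampling factor: camera pixels -> super-resolved grid

def build_localizer(frame=32):
    inp = layers.Input(shape=(frame, frame, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(UP)(x)  # move features onto the finer grid
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # Non-negative output; emitter coordinates are read off its local maxima.
    out = layers.Conv2D(1, 3, padding="same", activation="relu")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Training pairs are typically simulated: random emitters rendered through
# the microscope's point-spread function (input) versus narrow Gaussian
# spikes at the true positions on the fine grid (target).
```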


Brain magnetic resonance imaging enhanced through artificial intelligence: Study

#artificialintelligence

Researchers have designed a method that uses artificial intelligence to improve brain images obtained through magnetic resonance imaging. The new model increases image quality from low resolution to high resolution without distorting the patient's brain structures, using a deep artificial neural network, a model loosely inspired by the functioning of the human brain, that "learns" the process. The study was published in the scientific journal Neurocomputing. "Deep learning is based on very large neural networks, and so is its capacity to learn, reaching the complexity and abstraction of a brain," explains Karl Thurnhofer, lead author of the study, who adds that thanks to this technique the identification can be performed on its own, without supervision, an identification effort the human eye would not be capable of. The study represents a scientific advance, since the algorithm developed at the University of Málaga (UMA) yields more accurate results in less time, with clear benefits for patients.
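The article doesn't describe the UMA architecture, so the sketch below substitutes a classic SRCNN-style network to illustrate the low-resolution-to-high-resolution mapping being learned; every layer size and training detail here is an assumption:

```python
# SRCNN-style sketch of single-image super-resolution for MRI slices. This
# is not the UMA group's published network; it only illustrates learning a
# low-resolution -> high-resolution mapping. Sizes and data are assumed.
import tensorflow as tf
from tensorflow.keras import layers

def build_sr_model(h=256, w=256):
    # Input: a low-resolution slice interpolated up to the target size.
    inp = layers.Input(shape=(h, w, 1))
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inp)  # patch features
    x = layers.Conv2D(32, 1, padding="same", activation="relu")(x)    # nonlinear mapping
    out = layers.Conv2D(1, 5, padding="same")(x)                      # reconstruction
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")  # pixel-wise error vs. true slice
    return model

# model.fit(upsampled_low_res, true_high_res, ...) on paired scans trains
# the "learned" upscaling the article describes.
```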


Christmas gift ideas 2019: 20 great tech gifts for the whole family

#artificialintelligence

Christmas is just around the corner, which means it's time to start planning your presents. Finding the perfect gift for your loved ones can be tricky, but don't worry: TechRadar is here to help you plan ahead. There's nothing like watching the people you care about erupt into smiles as they tear off the wrapping and find a gift they actually love. So if you want to leave a lasting impression, the latest tech gadget can do just that. Technology evolves so quickly that even if you settled on a gizmo last year, there's always something new to choose from this year.


Uber Creates Generative Teaching Networks to Better Train Deep Neural Networks

#artificialintelligence

A common analogy in artificial intelligence (AI) circles is that training data is the new oil for machine learning models. Like that precious commodity, training data is scarce and hard to obtain at scale. While these models are relatively easy to create compared to the alternatives, they depend heavily on training data, a dependency that is prohibitive for most organizations. The problem only grows with the scale of the models. Recently, Uber engineers published a paper proposing a new method called Generative Teaching Networks (GTNs), which learn to automatically generate synthetic training data that can be used to train other models.
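At a high level, a GTN is a generator that is meta-trained so that a fresh learner, after training briefly on the generator's synthetic data, performs well on real data. The sketch below compresses that idea to a single differentiable inner SGD step with a toy linear learner; all dimensions and names are illustrative assumptions, not Uber's implementation:

```python
# Conceptual GTN sketch: the generator's meta-gradient comes from
# differentiating a real-data loss through one learner-training step.
import tensorflow as tf

D_NOISE, D_IN, N_CLASSES = 8, 16, 4  # toy sizes, purely illustrative

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(D_NOISE + N_CLASSES,)),
    tf.keras.layers.Dense(D_IN),  # emits a synthetic input for a requested label
])
meta_opt = tf.keras.optimizers.Adam(1e-3)
INNER_LR = 0.1

def xent(onehot, logits):
    # Cross-entropy via log_softmax, which stays twice differentiable
    # (needed so the meta-gradient can flow through the inner step).
    return -tf.reduce_mean(tf.reduce_sum(onehot * tf.nn.log_softmax(logits), axis=1))

def meta_step(real_x, real_y, batch=32):
    """real_x: float32 (N, D_IN); real_y: int labels (N,)."""
    with tf.GradientTape() as outer:
        # 1. Generator proposes a labeled synthetic batch.
        y = tf.one_hot(tf.random.uniform([batch], 0, N_CLASSES, tf.int32), N_CLASSES)
        synth_x = generator(tf.concat([tf.random.normal([batch, D_NOISE]), y], 1))
        # 2. Inner loop: one explicit, differentiable SGD step of a fresh
        #    linear learner trained only on the synthetic data.
        w, b = tf.zeros([D_IN, N_CLASSES]), tf.zeros([N_CLASSES])
        with tf.GradientTape() as inner:
            inner.watch([w, b])
            inner_loss = xent(y, synth_x @ w + b)
        gw, gb = inner.gradient(inner_loss, [w, b])
        w, b = w - INNER_LR * gw, b - INNER_LR * gb
        # 3. Meta-objective: the updated learner's loss on *real* data.
        meta_loss = xent(tf.one_hot(real_y, N_CLASSES), real_x @ w + b)
    # 4. Backprop through the inner update to improve the generated data.
    grads = outer.gradient(meta_loss, generator.trainable_variables)
    meta_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return meta_loss
```

The key design point is that the learner's update is written out explicitly, so gradients of the real-data loss can flow back through that update into the generator's weights; the paper unrolls many such steps over a full neural learner.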


Debugging a Machine Learning model written in TensorFlow and Keras

#artificialintelligence

In this article, you get to look over my shoulder as I go about debugging a TensorFlow model. I did a lot of dumb things, so please don't judge. You can see the final (working) model on GitHub. I'm building a model to predict lightning 30 minutes into the future and plan to present it at the American Meteorological Society meeting. A model trained this way can be used to predict lightning 30 minutes ahead in real time, given the current infrared and GLM data. I wrote up a convnet model, borrowing liberally from the training loop of the ResNet model written for the TPU, and adapted the input function (to read my data, not JPEGs) and the model (a simple convolutional network, not ResNet).
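For readers who want the rough shape of such a model rather than the author's exact code (which is on GitHub), here is a minimal sketch of a two-channel convnet for the task described; the tile size, channel layout and layer widths are all assumptions:

```python
# Sketch of a small convnet for "will lightning occur here in 30 minutes?":
# two input channels (infrared brightness and gridded GLM lightning
# activity) and one sigmoid output. Sizes are illustrative, not the post's.
import tensorflow as tf
from tensorflow.keras import layers

def build_convnet(tile=64):
    inp = layers.Input(shape=(tile, tile, 2))  # ch 0: IR, ch 1: GLM
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # P(lightning in 30 min)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Labels would come from whether GLM observed lightning in the tile 30
# minutes after the input snapshot; training is a standard model.fit() loop.
```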