Machine Learning


Fighting Climate Change with Data from Space and AI

#artificialintelligence

Artificial intelligence (AI) has entered its Golden Age, and machine learning requires ever more data to provide compelling insights into how to optimize human activity. Landsat 9 will fill that gap, feeding invaluable information into the most powerful AI recommender, predictive, and classification systems ever built.


A Machine Learning Pipeline to Examine Political Bias with Congressional Speeches

#artificialintelligence

Machine learning, with advances in natural language processing and deep learning, has been actively used to study political bias on social media. A key challenge in modeling political bias, however, is the human effort required to label the seed social media posts used to train machine learning models. Although effective, this approach has drawbacks: the labeling process is time-consuming, and labeling data at scale is costly. The web offers invaluable data on political bias, from biased news media outlets publishing articles on socio-political issues to biased user discussions about a variety of topics across multiple social forums. In this work, we introduce a novel approach that labels political bias for social media posts directly from US congressional speeches, without any human intervention, for downstream machine learning models.
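To illustrate the general idea (a hedged sketch of our own, not the authors' pipeline), congressional speeches can serve as distant supervision: the speaker's party provides a label for free, and a classifier trained on those speeches can then tag social media posts automatically. The data, model choice, and variable names below are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Speeches are "labeled" by the speaker's party -- no human annotation needed.
speeches = ["... speech arguing for expanded public healthcare ...",
            "... speech arguing for lower taxes and deregulation ..."]
party_labels = ["D", "R"]  # placeholder labels taken from speaker metadata

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(speeches, party_labels)

# The trained classifier then labels social media posts for downstream models.
posts = ["... social media post about a socio-political issue ..."]
predicted_bias = clf.predict(posts)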


PDBench: Evaluating Computational Methods for Protein Sequence Design

#artificialintelligence

Proteins perform critical processes in all living systems: converting solar energy into chemical energy, replicating DNA, serving as the basis of high-performance materials, sensing, and much more. While an incredible range of functionality has been sampled in nature, it accounts for a tiny fraction of the possible protein universe. If we could tap into this pool of unexplored protein structures, we could search for novel proteins with useful properties to tackle the environmental and medical challenges facing humanity. This is the purpose of protein design. Sequence design is an important aspect of protein design, and many successful methods for it have been developed.


Deep Learning for Computer Vision with TensorFlow 2

#artificialintelligence

This course offers a concise review of the theory, with intuitive explanations of the algorithms, and lets you put your knowledge into practice through many hands-on examples using your own datasets.


Precision and Recall

#artificialintelligence

This blog introduces two important classifier metrics: precision and recall. The precision of a classifier is the accuracy of its positive predictions. Another metric, recall, also called sensitivity or the true positive rate (TPR), is the ratio of positive instances that are correctly detected by the classifier. To compare binary classifiers, it is convenient to use the F1 score, which is the harmonic mean of precision and recall. Whereas the regular mean treats all values equally, the harmonic mean gives much more weight to low values.
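As a quick illustration of these definitions (our own sketch, not from the blog), the metrics can be computed directly from confusion-matrix counts; the function name and example counts below are made up.

def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp)   # accuracy of the positive predictions
    recall = tp / (tp + fn)      # true positive rate / sensitivity
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Example: 80 true positives, 20 false positives, 40 false negatives
p, r, f1 = precision_recall_f1(80, 20, 40)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")  # precision=0.80 recall=0.67 F1=0.73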


Machine learning at the edge: A hardware and software ecosystem

#artificialintelligence

The idea of taking compute out of the data center and bringing it as close as possible to where data is generated is seeing lots of traction. Estimates put edge computing growth at around a 40% CAGR, in the $50 billion range. Increasingly, data generated at the edge are used to feed applications powered by machine learning models. TinyML is a fast-growing field of machine learning technologies and applications that enable machine learning to work at the edge. It includes hardware, algorithms, and software capable of performing on-device sensor data analytics at extremely low power, enabling a variety of always-on use cases.


Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Model Training

#artificialintelligence

Given suspicious neural network models, verifiers determine whether the models were trained using watermarked data following Eq. During signature inference, each image is converted from [0, 255] into [0, 1], then watermarked and normalized. Given clean (unwatermarked) images from users, the watermark signature can be recovered using one of the following two approaches. If the signature is well memorized by the classifier models, the loss reaches its minimum when the currently enumerated signature equals or closely approximates the signature used by the user. The signatures are generated by dividing the whole signature space into N = 2^τ intervals.
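To make the enumeration approach concrete, here is a hedged PyTorch sketch (ours, not the paper's code): each candidate signature is applied to the clean user images, and the candidate that minimizes the suspect model's loss is taken as the recovered signature. The apply_watermark function is a hypothetical stand-in for the paper's watermarking step, and dataset normalization is omitted.

import torch
import torch.nn.functional as F

def recover_signature(model, clean_images, labels, candidate_signatures, apply_watermark):
    """Enumerate candidate signatures; a well-memorized signature should give the lowest loss."""
    best_sig, best_loss = None, float("inf")
    model.eval()
    with torch.no_grad():
        for sig in candidate_signatures:
            x = apply_watermark(clean_images / 255.0, sig)  # scale to [0, 1], then watermark
            loss = F.cross_entropy(model(x), labels).item()
            if loss < best_loss:
                best_sig, best_loss = sig, loss
    return best_sig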


Mathematical Foundations of Machine Learning

#artificialintelligence

Mathematics forms the core of data science and machine learning. Thus, to be the best data scientist you can be, you must have a working understanding of the most relevant math. Getting started in data science is easy thanks to high-level libraries like Scikit-learn and Keras. But understanding the math behind the algorithms in these libraries opens up an infinite number of possibilities. From identifying modeling issues to inventing new and more powerful solutions, understanding the math behind it all can dramatically increase the impact you make over the course of your career.


Latest Research From NVIDIA AI and Mass General Brigham Explains The Importance of Federated Learning in Medical AI and Other Industries

#artificialintelligence

Federated learning is a new way to train artificial intelligence models with data from multiple sources while maintaining anonymity. This removes many barriers and opens up the possibility for even more sharing in machine learning research. The latest results published in Nature Medicine show promising new research in which federated learning builds powerful AI models that generalize across healthcare institutions. These findings currently target the healthcare industry, but they suggest that, further down the line, federated learning could play a significant role in energy, financial services, and manufacturing applications.
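As a rough illustration of the underlying idea (our own minimal sketch, not the code from the Nature Medicine study), federated training aggregates locally trained model weights instead of pooling raw data; the FedAvg-style weighting and the toy numbers below are assumptions.

import numpy as np

def federated_average(client_updates, client_sizes):
    """FedAvg-style aggregation: weight each institution's local model by its data size.
    Raw data never leaves the institution; only model weights are shared."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_updates, client_sizes))

# Toy round: three hospitals send back locally trained weight vectors
updates = [np.array([0.2, 0.1, 0.0, 0.3]),
           np.array([0.1, 0.2, 0.1, 0.2]),
           np.array([0.3, 0.0, 0.2, 0.1])]
sizes = [1000, 500, 1500]
global_weights = federated_average(updates, sizes)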


What Green AI Needs

#artificialintelligence

LONDON – Long before the real-world effects of climate change became so abundantly obvious, the data painted a bleak picture – in painful detail – of the scale of the problem. For decades, carefully collected data on weather patterns and sea temperatures were fed into models that analyzed, predicted, and explained the effects of human activities on our climate. And now that we know the alarming answer, one of the biggest questions we face in the next few decades is how data-driven approaches can be used to overcome the climate crisis. Data and technologies like artificial intelligence (AI) are expected to play a very large role. But that will happen only if we make major changes in data management.