The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change. A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen suggests that artificial intelligence systems might actually reach higher levels of performance if they are trained with sound files of human language rather than with numerical data labels. In a side-by-side comparison, the researchers found that a neural network whose "training labels" consisted of sound files reached higher levels of performance in identifying objects in images than another network that had been trained in the traditional manner, using simple binary inputs.
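The contrast between the two labeling schemes can be sketched in a few lines. This is a minimal illustration, not the study's actual setup: the random vectors below merely stand in for audio features of a spoken label word, and the `decode` helper is a hypothetical name.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 3

# Traditional training labels: one-hot vectors, one "bit" per class.
one_hot_labels = np.eye(n_classes)

# Hypothetical "spoken" labels: each class is a dense, structured vector
# (random vectors here stand in for audio features of the spoken word).
audio_targets = rng.normal(size=(n_classes, 16))

def decode(pred, targets):
    """Map a network's dense output to a class by nearest target
    under cosine similarity."""
    sims = targets @ pred / (
        np.linalg.norm(targets, axis=1) * np.linalg.norm(pred)
    )
    return int(np.argmax(sims))

# A noisy prediction near class 1's audio target still decodes correctly.
pred = audio_targets[1] + 0.1 * rng.normal(size=16)
assert decode(pred, audio_targets) == 1
```

The point of the sketch: a one-hot label carries no information beyond class identity, while a dense target gives the network a richer, structured signal to regress toward, which is the intuition behind the Columbia result.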
According to a new study in the journal Nature Materials, researchers from Stanford University have harnessed the power of machine learning technology to reverse long-held suppositions about the way lithium-ion batteries charge and discharge, providing engineers with a new list of criteria for making longer-lasting battery cells. This is the first time machine learning has been coupled with knowledge obtained from experiments and physics equations to uncover and describe how lithium-ion batteries degrade over their lifetime. Machine learning accelerates analyses by finding patterns in large amounts of data. In this instance, researchers taught the machine to study the physics of a battery failure mechanism to design superior and safer fast-charging battery packs. Fast charging can be stressful and harmful to lithium-ion batteries, and resolving this problem is vital to the fight against climate change.
If you've been a reader of Sprudge for any reasonable amount of time, you've no doubt by now read multiple articles about how coffee is potentially beneficial for some particular facet of your health. The stories generally go like this: "a study finds drinking coffee is associated with an X% decrease in [bad health outcome]," followed shortly by "the study is observational and does not prove causation." In a new study in the American Heart Association's journal Circulation: Heart Failure, researchers found a link between drinking three or more cups of coffee a day and a decreased risk of heart failure. This study used machine learning to reach its conclusion, and it may significantly alter the utility of this sort of study in the future. As reported by the New York Times, the new study isn't exactly new at all.
The past several years have made it clear that AI and machine learning are not a panacea when it comes to fair outcomes. Applying algorithmic solutions to social problems can magnify biases against marginalized peoples, and undersampling populations almost always results in worse predictive accuracy for them. But bias in AI doesn't arise from the datasets alone. Problem formulation, or the way researchers fit tasks to AI techniques, can contribute as well.
Before March 2020, in-person events vastly outnumbered virtual meetings, and the sudden reversal of those fortunes has yielded new information about best practices. In "the before times," skilled trainers, speakers and facilitators could look directly at participants, read their body language and see if the messages were resonating. In-person meetings encouraged participants to stay focused and engaged. Now presenters talk to their own face on a screen, and participants mute themselves and turn their cameras off. Two-way communication is critical in professional development experiences.
Transformer architectures have become the building blocks for many state-of-the-art natural language processing (NLP) models. While transformers are certainly powerful, researchers' understanding of how they actually work remains limited. This is problematic due to the lack of transparency and the possibility of biases being inherited via training data and algorithms, which could cause models to produce unfair or incorrect predictions. In the paper Transformer Visualization via Dictionary Learning: Contextualized Embedding as a Linear Superposition of Transformer Factors, a team led by Yann LeCun from Facebook AI Research, UC Berkeley and New York University leverages dictionary learning techniques to provide detailed visualizations of transformer representations and insights into the semantic structures -- such as word-level disambiguation, sentence-level pattern formation, and long-range dependencies -- that are captured by transformers. Previous attempts to visualize and analyze this "black box" issue in transformers include direct visualization and, more recently, "probing tasks" designed to interpret transformer models.
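The paper's core premise is that a contextualized embedding can be written as a sparse linear superposition of learned "transformer factors." A toy sketch of that idea, under stated assumptions: the dictionary here is random rather than learned from real transformer activations, and plain least squares stands in for the sparse-coding step used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary of "transformer factors": d atoms in a
# dim-dimensional embedding space (random stand-ins, not learned).
d, dim = 8, 32
dictionary = rng.normal(size=(d, dim))

# Premise: an embedding x is approximately a sparse combination of
# factors, x ≈ sum_i a_i * phi_i. Build one with two active factors.
true_coef = np.zeros(d)
true_coef[[1, 5]] = [0.7, 0.3]
embedding = true_coef @ dictionary

# Recover the superposition coefficients by least squares (a simple
# stand-in for sparse coding against the dictionary).
coef, *_ = np.linalg.lstsq(dictionary.T, embedding, rcond=None)
active = set(np.flatnonzero(np.abs(coef) > 0.05))
assert active == {1, 5}
```

Inspecting which factors are active for a given token, and with what weights, is what yields the word-sense and sentence-pattern visualizations the paper describes.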
AI is being embedded into an increasing number of technologies that are commonly found inside most chips, and initial results show dramatic improvements in both power and performance. Unlike high-profile AI implementations, such as self-driving cars or natural language processing, much of this work flies well under the radar for most people. It generally takes the path of least disruption, building on or improving technology that already exists. But in addition to having a significant impact, these developments provide design teams with a baseline for understanding what AI can and cannot do well, how it behaves over time and under different environmental and operating conditions, and how it interacts with other systems. Until recently, the bulk of AI/machine learning has been confined to the data center or specialized mil/aero applications. It has since begun migrating to the edge, which itself is just beginning to take form, driven by a rising volume of data and the need to process that data closer to the source.
Although intelligent and adept at image recognition and classification, deep neural networks can still be vulnerable to adversarial perturbations, i.e., small but subtle details in an image that cause errors in neural network output. Some of these perturbations are universal: they tend to interfere with the neural network when placed on any input. A research paper presented at the 35th AAAI Conference on Artificial Intelligence by researchers at Skoltech demonstrated that patterns that cause neural networks to make mistakes in image recognition are, in fact, similar to Turing patterns found throughout the natural world. This result can help design defenses for pattern recognition systems that are currently susceptible to attacks.
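Turing patterns are the spots and stripes that emerge when two diffusing chemicals react, as in animal coat markings. A minimal Gray-Scott reaction-diffusion sketch shows how such structure emerges from a small seed; the parameters are standard illustrative values, not anything from the Skoltech paper.

```python
import numpy as np

def laplacian(z):
    """Discrete Laplacian on a periodic grid."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

# Gray-Scott model: two coupled fields u and v on a 64x64 grid.
n = 64
u = np.ones((n, n))
v = np.zeros((n, n))
u[28:36, 28:36] = 0.50   # seed a small localized perturbation
v[28:36, 28:36] = 0.25

Du, Dv, f, k = 0.16, 0.08, 0.035, 0.060   # classic pattern-forming regime
for _ in range(2000):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + f * (1 - u)
    v += Dv * laplacian(v) + uvv - (f + k) * v

# Structure has emerged: the v field is no longer uniform.
assert v.std() > 0.01
```

The paper's observation is that universal adversarial perturbations visually resemble fields like `v` above, which suggests both a way to generate attacks cheaply and a signature defenders can look for.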
The rise of precision medicine is being augmented by greater use of deep learning technologies that provide predictive analytics for earlier diagnosis of a range of debilitating diseases. The latest example comes from researchers at Michigan-based Beaumont Health who used deep learning to analyze genomic DNA. The resulting simple blood test could be used to detect earlier onset of Alzheimer's disease. In a study published this week in the peer-reviewed scientific journal PLOS ONE, the researchers said their analysis discovered 152 "significant" genetic differences between Alzheimer's patients and healthy patients. Those biomarkers could be used to provide diagnoses before Alzheimer's symptoms develop and a patient's brain is irreversibly damaged.
The field of AI and machine learning is arguably built on the shoulders of a few hundred papers, many of which draw conclusions using data from a subset of public datasets. Large, labeled corpora have been critical to the success of AI in domains ranging from image classification to audio classification. That's because their annotations expose comprehensible patterns to machine learning algorithms, in effect telling machines what to look for in future datasets so they're able to make predictions. But while labeled data is usually equated with ground truth, datasets can -- and do -- contain errors. The processes used to construct corpora often involve some degree of automatic annotation or crowdsourcing techniques that are inherently error-prone.
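One common way to surface such annotation errors is to flag examples whose given label disagrees strongly with a model's prediction. A toy sketch of that idea on synthetic data, with a deliberately injected label error (the nearest-centroid "model" is a stand-in for whatever classifier would be used in practice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D clusters, 20 points each.
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
y[5] = 1   # inject one label error: a class-0 point labeled as class 1

# Nearest-centroid classifier stands in for a trained model.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)

# Flag examples that sit closer to the other class's centroid
# than to the centroid of their own given label.
idx = np.arange(len(y))
suspect = np.flatnonzero(dists[idx, y] > dists[idx, 1 - y])
assert list(suspect) == [5]
```

Real dataset-auditing tools refine this with cross-validated predicted probabilities rather than raw distances, but the principle, using model disagreement to rank likely mislabels, is the same.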