- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Health & Medicine > Therapeutic Area > Neurology (0.94)
- Health & Medicine > Diagnostic Medicine > Imaging (0.94)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.70)
- (2 more...)
Vertical Federated Alzheimer's Detection on Multimodal Data
In the era of rapidly advancing medical technologies, the segmentation of medical data has become inevitable, necessitating the development of privacy-preserving machine learning algorithms that can train on distributed data. Consolidating sensitive medical data is not always an option, particularly due to the stringent privacy regulations imposed by the Health Insurance Portability and Accountability Act (HIPAA). In this paper, we introduce a HIPAA-compliant framework that can train on distributed data. We then propose a multimodal vertical federated model for Alzheimer's Disease (AD) detection, a serious neurodegenerative condition that can cause dementia, severely impairing brain function and hindering simple tasks, especially without preventative care. This vertical federated model offers a distributed architecture that enables collaborative learning across diverse sources of medical data while respecting the privacy constraints imposed by HIPAA. It is also able to leverage multiple modalities of data, enhancing the robustness and accuracy of AD detection. Our proposed model not only contributes to the advancement of federated learning techniques but also holds promise for overcoming the hurdles posed by data segmentation in medical research. By using vertical federated learning, this research strives to provide a framework that enables healthcare institutions to harness the collective intelligence embedded in their distributed datasets without compromising patient privacy.
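The vertical federated setup the abstract describes can be sketched in a few lines. This is an illustrative toy only: the feature splits, dimensions, and the simple linear/logistic head are assumptions, not details from the paper. The key property being demonstrated is that each site transforms its own raw features locally and shares only low-dimensional embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two institutions hold DIFFERENT feature sets
# (modalities) for the SAME patients, aligned by a shared patient ID.
n_patients = 8
imaging_features = rng.normal(size=(n_patients, 5))   # held by site A
clinical_features = rng.normal(size=(n_patients, 3))  # held by site B

# Each site applies its own local model to its raw features and shares
# only the resulting embeddings; raw patient data never leaves the site.
W_a = rng.normal(size=(5, 2))
W_b = rng.normal(size=(3, 2))
embedding_a = np.tanh(imaging_features @ W_a)
embedding_b = np.tanh(clinical_features @ W_b)

# A coordinating party concatenates the embeddings and computes the
# joint prediction (here, a toy logistic head for AD vs. control).
W_head = rng.normal(size=(4, 1))
logits = np.concatenate([embedding_a, embedding_b], axis=1) @ W_head
probs = 1.0 / (1.0 + np.exp(-logits))
print(probs.shape)  # one AD probability per patient: (8, 1)
```

In a real deployment the gradients flowing back to each site would also be protected (e.g., via secure aggregation), which this sketch omits.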
- North America > United States > Texas > Travis County > Austin (0.14)
- North America > United States > New York > New York County > New York City (0.04)
Common Assumptions on Machine Learning Malfunctions Could be Wrong
Deep neural networks are one of the most fundamental aspects of artificial intelligence (AI), as they are used to process images and data through mathematical modeling. They are responsible for some of the greatest advancements in the field, but they also malfunction in various ways. The impact of these malfunctions can range from small or non-existent, such as a simple misidentification, to dramatic and deadly, such as a self-driving malfunction. New research coming out of the University of Houston suggests that our common assumptions about these malfunctions may be wrong, which could help evaluate the reliability of the networks in the future. The paper was published in Nature Machine Intelligence in November.
Misinformation or artifact: A new way to think about machine learning: A researcher considers when - and if - we should consider artificial intelligence a failure
They are capable of seemingly sophisticated results, but they can also be fooled in ways that range from relatively harmless -- misidentifying one animal as another -- to potentially deadly if the network guiding a self-driving car misinterprets a stop sign as one indicating it is safe to proceed. A philosopher with the University of Houston suggests in a paper published in Nature Machine Intelligence that common assumptions about the cause behind these supposed malfunctions may be mistaken, information that is crucial for evaluating the reliability of these networks.

As machine learning and other forms of artificial intelligence become more embedded in society, used in everything from automated teller machines to cybersecurity systems, Cameron Buckner, associate professor of philosophy at UH, said it is critical to understand the source of apparent failures caused by what researchers call "adversarial examples," when a deep neural network system misjudges images or other data when confronted with information outside the training inputs used to build the network.

They're rare and are called "adversarial" because they are often created or discovered by another machine learning network -- a sort of brinksmanship in the machine learning world between more sophisticated methods to create adversarial examples and more sophisticated methods to detect and avoid them. "Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are," Buckner said.
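To make "adversarial example" concrete: a standard construction (the fast gradient sign method, which is not specific to Buckner's paper) perturbs an input in the direction that most changes the model's output. A minimal sketch with a toy linear model, all values hypothetical:

```python
import numpy as np

# Toy linear "network": score = w . x. An adversarial example nudges x
# along the sign of the gradient of the score with respect to the input.
rng = np.random.default_rng(1)
w = rng.normal(size=16)   # model weights
x = rng.normal(size=16)   # original input

epsilon = 0.25            # perturbation budget (imperceptibly small in images)
gradient = w              # d(score)/dx for a linear model is just w
x_adv = x + epsilon * np.sign(gradient)

# The perturbed input is close to the original, yet the score moves by
# epsilon * sum(|w|), enough to flip a classification near the boundary.
print(w @ x, w @ x_adv)
```

Real attacks apply the same idea to deep networks by backpropagating to the input, which is why adversarial examples are often "discovered by another machine learning network."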
- Media > News (0.40)
- Information Technology > Security & Privacy (0.37)
Distributed Weight Consolidation: A Brain Segmentation Case Study
McClure, Patrick, Zheng, Charles Y., Kaczmarzyk, Jakub, Rogers-Lee, John, Ghosh, Satra, Nielson, Dylan, Bandettini, Peter A., Pereira, Francisco
Collecting the large datasets needed to train deep neural networks can be very difficult, particularly for the many applications for which sharing and pooling data is complicated by practical, ethical, or legal concerns. However, it may be the case that derivative datasets or predictive models developed within individual sites can be shared and combined with fewer restrictions. Training on distributed data and combining the resulting networks is often viewed as continual learning, but these methods require networks to be trained sequentially. In this paper, we introduce distributed weight consolidation (DWC), a continual learning method to consolidate the weights of separate neural networks, each trained on an independent dataset. We evaluated DWC with a brain segmentation case study, where we consolidated dilated convolutional neural networks trained on independent structural magnetic resonance imaging (sMRI) datasets from different sites. We found that DWC led to increased performance on test sets from the different sites, while maintaining generalization performance for a very large and completely independent multi-site dataset, compared to an ensemble baseline.
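The consolidation idea above can be illustrated with a deliberately simplified sketch. DWC in the paper operates on variational posteriors of network weights; as a hedged approximation only (not the paper's exact update), merging independently trained networks can be viewed as precision-weighted averaging of per-weight Gaussian posteriors, so that sites that are more certain about a weight contribute more to the consolidated value:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: three sites trained the same architecture independently
# and each reports a Gaussian posterior (mean, variance) per weight.
means = [rng.normal(size=4) for _ in range(3)]
variances = [rng.uniform(0.5, 2.0, size=4) for _ in range(3)]

# Precision-weighted averaging: each site's mean is weighted by its
# precision (1/variance), i.e., by how confident that site is.
precisions = [1.0 / v for v in variances]
total_precision = np.sum(precisions, axis=0)
consolidated_mean = (
    np.sum([p * m for p, m in zip(precisions, means)], axis=0) / total_precision
)
consolidated_variance = 1.0 / total_precision  # tighter than any single site
```

Note that this requires no raw data to be exchanged, only per-weight statistics, which is what allows sharing "with fewer restrictions" than pooling the datasets themselves.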
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
The Human Brain Is a Time Traveler
Randy Buckner was a graduate student at Washington University in St. Louis in 1991 when he stumbled across one of the most important discoveries of modern brain science. For Buckner -- as for many of his peers during the early '90s -- the discovery was so counterintuitive that it took years to recognize its significance.

Buckner's lab, run by the neuroscientists Marcus Raichle and Steven Petersen, was exploring what the new technology of PET scanning could show about the connection between language and memory in the human brain. The promise of the PET machine lay in how it measured blood flow to different parts of the brain, allowing researchers for the first time to see detailed neural activity, not just anatomy. In Buckner's study, the subjects were asked to recall words from a memorized list; by tracking where the brain was consuming the most energy during the task, Buckner and his colleagues hoped to understand which parts of the brain were engaged in that kind of memory.

But there was a catch. Different regions of the brain vary widely in how much energy they consume no matter what the brain is doing; if you ask someone to do mental math while scanning her brain in a PET machine, you won't learn anything from that scan on its own, because the subtle changes that reflect the mental math task will be drowned out by the broader patterns of blood flow throughout the brain. To see the specific regions activated by a specific task, researchers needed a baseline comparison, a control. At first, this seemed simple enough: Put the subjects in the PET scanner, ask them to sit there and do nothing -- what the researchers sometimes called a resting state -- and then ask them to perform the task under study.
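The baseline-subtraction logic described above is simple arithmetic: measure regional blood flow during the task, measure it again at rest, and attribute only the difference to the task. A toy illustration with made-up region names and numbers (illustrative values, not real PET data):

```python
import numpy as np

# Hypothetical regional blood-flow measurements, in arbitrary units.
regions = ["visual", "motor", "hippocampus", "prefrontal"]
rest = np.array([5.0, 4.0, 3.0, 6.0])  # resting-state baseline
task = np.array([5.1, 4.0, 4.5, 6.2])  # during the word-recall task

# Task-minus-rest contrast: raw flow levels differ hugely across
# regions, but the difference isolates what the task itself recruits.
contrast = task - rest
for name, delta in zip(regions, contrast):
    print(f"{name}: {delta:+.1f}")
# in these toy numbers, the hippocampus stands out once the baseline
# is removed, even though its raw flow was the lowest of the four
```

The article's catch is visible here: the prefrontal region has the highest raw activity in both scans, yet contributes little to the contrast, which is exactly why a scan without a baseline "won't learn anything... on its own."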
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- North America > United States > Pennsylvania (0.04)
- North America > United States > Iowa (0.04)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Health Care Technology (0.88)
Artificial intelligence helps reveal how people process abstract thought: Study of deep neural networks suggests knowledge comes via sensory experience
"As we rely more and more on these systems, it is important to know how they work and why," said Cameron Buckner, assistant professor of philosophy and author of a paper exploring the topic published in the journal Synthese. Better understanding how the systems work, in turn, led him to insights into the nature of human learning. Philosophers have debated the origins of human knowledge since the days of Plato -- is it innate, based on logic, or does knowledge come from sensory experience in the world? Deep Convolutional Neural Networks, or DCNNs, suggest human knowledge stems from experience, a school of thought known as empiricism, Buckner concluded. These neural networks -- multi-layered artificial neural networks, with nodes replicating how neurons process and pass along information in the brain -- demonstrate how abstract knowledge is acquired, he said, making the networks a useful tool for fields including neuroscience and psychology.
Artificial intelligence helps reveal how people process abstract thought
As artificial intelligence becomes more sophisticated, much of the public attention has focused on how successfully these technologies can compete against humans at chess and other strategy games. A philosopher from the University of Houston has taken a different approach, deconstructing the complex neural networks used in machine learning to shed light on how humans process abstract learning. "As we rely more and more on these systems, it is important to know how they work and why," said Cameron Buckner, assistant professor of philosophy and author of a paper exploring the topic published in the journal Synthese. Better understanding how the systems work, in turn, led him to insights into the nature of human learning. Philosophers have debated the origins of human knowledge since the days of Plato - is it innate, based on logic, or does knowledge come from sensory experience in the world?