Collaborating Authors


Machine learning makes its mark on medical imaging and therapy – Physics World


Artificial intelligence has the potential to improve many essential tasks across medicine and biomedicine – from handling the massive amounts of data generated by medical imaging, to understanding how cancer evolves in the body, to helping design and optimize patient treatments. At last week's APS March Meeting, a dedicated focus session examined some of the latest medical applications of artificial intelligence and machine learning. Opening the session, Alison Deatsch from the University of Wisconsin, Madison, discussed the use of deep learning for diagnosing and monitoring brain disease. "Brain disorders and neurodegenerative disease are some of the most costly diseases, both in terms of human suffering and economic costs," she explained. The reason is that most of these conditions – which include Alzheimer's and Parkinson's disease, autism spectrum disorder and mild cognitive impairment (MCI), among others – lack reliable tools for diagnosis and progression monitoring and, as such, are often misdiagnosed.

Machine Learning with PyTorch and Scikit-Learn


My name is Sebastian, and I am a machine learning and AI researcher with a strong passion for education. As a Lead AI Educator, I am excited about making AI and deep learning more accessible and teaching people how to use them at scale. I am also an Assistant Professor of Statistics at the University of Wisconsin-Madison and author of the bestselling book Python Machine Learning.

🇺🇸 Machine learning job: Senior Machine Learning Software Engineer at ColdQuanta (Madison, Wisconsin, United States)


Senior Machine Learning Software Engineer at ColdQuanta, United States › Wisconsin › Madison (posted Feb 18 2022). Salary: 130k-193k. Job description: ColdQuanta is developing a quantum computing platform using a novel approach based on neutral cold atoms. Our quantum computer, Hilbert, arranges individual atoms and generates complex electromagnetic fields to control their quantum state in order to run quantum circuits that our customers will use to discover drugs, optimize the power grid, and develop novel applications for quantum computing. We would like to apply techniques from advanced statistics and machine learning to a variety of problems, including analyzing images of our atom array, running black-box optimizations to keep our atoms cooled to a few μK, and maintaining gate fidelity by tuning the hundreds of thousands of variables that are used to generate laser and microwave pulses. Our software efforts are largely greenfield, and applicants should be comfortable delivering innovative solutions to novel and challenging problems. An ideal applicant will be able to work independently in exploring huge datasets to identify ways to improve our quantum computer and the software that makes it tick.
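The black-box optimization the posting mentions can be made concrete with a toy sketch: a (1+1) evolution strategy minimizing a noisy objective without derivatives, the kind of derivative-free tuning loop such a role might involve. The objective, parameters, and step-size rule below are all invented for illustration and have nothing to do with ColdQuanta's actual tooling.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy black-box objective: "temperature" as an unknown function of a few
# control parameters, observed with measurement noise. Purely illustrative.
target = np.array([0.3, -1.2, 0.7])

def temperature(params):
    return ((params - target) ** 2).sum() + rng.normal(0.0, 1e-3)

# (1+1) evolution strategy: propose a random perturbation, keep it if it
# improves the best observed value, and adapt the step size as we go.
x = np.zeros(3)
best = temperature(x)
step = 0.5
for _ in range(3000):
    cand = x + rng.normal(0.0, step, 3)
    val = temperature(cand)
    if val < best:
        x, best = cand, val
        step *= 1.1      # expand the search radius on success
    else:
        step *= 0.995    # shrink it slowly on failure

print(f"best observed objective: {best:.4f}")
```

The success/failure step adaptation keeps the acceptance rate roughly constant, which is what makes this simple scheme converge without any gradient information.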

Artificial-intelligence tools supported


Zhou Zhang, an assistant professor of biological systems engineering at the University of Wisconsin-Madison, recently was awarded three grants for her work to develop machine-learning models and artificial-intelligence tools to increase agricultural productivity and sustainability. The U.S. Department of Agriculture's National Institute of Food and Agriculture awarded the grants. Project description: The research team comprises Zhang and Matthew Digman, both assistant professors of biological systems engineering, and Paul Mitchell, a professor of agricultural and applied economics, all at UW-Madison.

New machine learning and data science option offers ECE undergrads in-demand skills - College of Engineering - University of Wisconsin-Madison


In the last couple of decades, technology has become very efficient at collecting information from the physical world – wearable medical sensors, radar systems integrated into automobiles, satellites monitoring Earth's climate – as well as from humans, by monitoring the decisions they make. But that massive trove of data is mostly useless on its own; sophisticated computer algorithms are needed to find patterns, extract meaning and make predictions from it. That's why the University of Wisconsin-Madison Department of Electrical and Computer Engineering launched the machine learning and data science option for both undergraduate electrical engineering and computer engineering majors. The option requires 18 elective credits within the 120-credit-hour bachelor's degree, consisting of courses focusing on machine learning and data science in engineering. Courses in the option cover coding for data manipulation, analysis and visualization, along with machine learning topics ranging from applied linear algebra and probability through artificial neural networks and deep learning. When students graduate, the option is noted on their transcript, giving them a valuable credential in future employment searches.

Introduction to Deep Learning


I am an Assistant Professor of Statistics at the University of Wisconsin-Madison, focusing on deep learning and machine learning research. I am also a contributor to open-source software and author of the bestselling book Python Machine Learning.

Large sample spectral analysis of graph-based multi-manifold clustering Machine Learning

In this work we study statistical properties of graph-based algorithms for multi-manifold clustering (MMC). In MMC the goal is to retrieve the multi-manifold structure underlying a given Euclidean data set when the data are assumed to be sampled from a distribution on a union of manifolds $\mathcal{M} = \mathcal{M}_1 \cup\dots \cup \mathcal{M}_N$ that may intersect with each other and that may have different dimensions. We investigate sufficient conditions that similarity graphs on data sets must satisfy in order for their corresponding graph Laplacians to capture the right geometric information to solve the MMC problem. More precisely, we provide high-probability error bounds for the spectral approximation of a tensorized Laplacian on $\mathcal{M}$ with a suitable graph Laplacian built from the observations; the recovered tensorized Laplacian contains all geometric information of all the individual underlying manifolds. We provide an example of a family of similarity graphs, which we call annular proximity graphs with angle constraints, satisfying these sufficient conditions. We contrast our family of graphs with other constructions in the literature based on the alignment of tangent planes. Extensive numerical experiments expand the insights that our theory provides on the MMC problem.
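As a minimal illustration of the graph-Laplacian machinery this abstract builds on – not the authors' annular proximity construction – here is standard spectral clustering in NumPy on two concentric circles, a simple union of two 1-D manifolds that happen not to intersect. The Gaussian similarity graph, bandwidth, and sample sizes are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def circle(radius, n):
    """Sample n points uniformly from a circle of the given radius."""
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.c_[radius * np.cos(t), radius * np.sin(t)]

# Two concentric circles: a union of two 1-D manifolds in R^2.
X = np.vstack([circle(1.0, 60), circle(4.0, 60)])
true = np.r_[np.zeros(60, dtype=int), np.ones(60, dtype=int)]

# Gaussian similarity graph (the bandwidth sigma is a free parameter).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
sigma = 0.5
W = np.exp(-D2 / (2.0 * sigma**2))
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = deg - W.
L = np.diag(W.sum(axis=1)) - W

# The eigenvector of the second-smallest eigenvalue (the Fiedler vector)
# is nearly constant on each manifold; its sign recovers the split.
_, vecs = np.linalg.eigh(L)
pred = (vecs[:, 1] > 0).astype(int)

accuracy = max((pred == true).mean(), (pred != true).mean())
print(f"clustering accuracy: {accuracy:.2f}")
```

When the manifolds intersect, a plain proximity graph like this one links points across different manifolds near the intersection, which is exactly the failure mode the paper's angle-constrained graphs are designed to avoid.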

Artificial intelligence can accelerate clinical diagnosis of fragile X syndrome


An analysis of electronic health records for 1.7 million Wisconsin patients revealed a variety of health problems newly associated with fragile X syndrome, the most common inherited cause of intellectual disability and autism, and may help identify cases years in advance of the typical clinical diagnosis. Researchers from the Waisman Center at the University of Wisconsin–Madison found that people with fragile X are more likely than the general population to also have diagnoses for a variety of circulatory, digestive, metabolic, respiratory, and genital and urinary disorders. Their study, published recently in the journal Genetics in Medicine, the official journal of the American College of Medical Genetics and Genomics, shows that machine learning algorithms may help identify undiagnosed cases of fragile X syndrome based on diagnoses of other physical and mental impairments. "Machine learning is providing new opportunities to look at huge amounts of data," says lead author Arezoo Movaghar, a postdoctoral fellow at the Waisman Center. "There's no way that we can look at 2 million records and just go through them one by one. We need those tools to help us to learn from what is in the data."
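The study's actual models are not reproduced in this excerpt, but the underlying idea, predicting a rare diagnosis from binary indicators of co-occurring diagnoses, can be sketched with a plain logistic regression on synthetic data. Every rate and dimension below is invented for illustration; none comes from the Wisconsin records.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic EHR-style data: each row is a patient, each column a binary
# indicator for a co-occurring diagnosis (circulatory, digestive, ...).
n, d = 2000, 6
base_rate = np.full(d, 0.10)                                # general population
fx_rate = np.array([0.30, 0.25, 0.25, 0.20, 0.20, 0.15])    # with the condition

y = (rng.uniform(size=n) < 0.05).astype(int)                # ~5% positive cases
rates = np.where(y[:, None] == 1, fx_rate, base_rate)
X = (rng.uniform(size=(n, d)) < rates).astype(float)

# Logistic regression fit by plain full-batch gradient descent.
Xb = np.c_[X, np.ones(n)]                                   # add an intercept
w = np.zeros(d + 1)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n

# Rank patients by predicted risk; the highest-risk patients would be
# candidates for genetic screening ahead of a clinical diagnosis.
risk = 1.0 / (1.0 + np.exp(-Xb @ w))
top = np.argsort(risk)[::-1][:100]
print("case prevalence among top-100 risk scores:", y[top].mean())
```

The point of the sketch is the enrichment: patients flagged by the model carry the rare diagnosis far more often than the base rate, which is what makes screening the flagged group worthwhile.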

Deep-learning model enables rapid lymphoma detection in PET/CT images


From left to right: Timothy Perk, Alison Roth, Peter Ferjančič, Robert Jeraj, Daniel Huff, Brayden Schott, Ali Deatsch, Victor Santoro Fernandes, Amy Weisman, Vince Streif.

Whole-body positron emission tomography combined with computed tomography (PET/CT) is a cornerstone in the management of lymphoma (cancer of the lymphatic system). PET/CT scans are used to diagnose disease and then to monitor how well patients respond to therapy. However, accurately classifying every single lymph node in a scan as healthy or cancerous is a complex and time-consuming process. Because of this, detailed quantitative treatment monitoring is often not feasible in day-to-day clinical practice. Researchers at the University of Wisconsin-Madison have recently developed a deep-learning model that can perform this task automatically.
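The group's published model is not described in this excerpt; as a toy sketch of the node-level classification task only, the following trains a tiny one-hidden-layer network by backpropagation on synthetic per-node features (uptake- and volume-like numbers). The features, distributions, and architecture are assumptions for illustration, not the Wisconsin model, which operates on the images themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for per-node features extracted from PET/CT (e.g. peak
# uptake, node volume). These numbers are illustrative, not clinical data.
def make_nodes(n):
    y = rng.integers(0, 2, n)                      # 0 healthy, 1 cancerous
    suv = np.where(y == 1, rng.normal(8, 2, n), rng.normal(2, 1, n))
    vol = np.where(y == 1, rng.normal(4, 1, n), rng.normal(1.5, 0.8, n))
    X = np.c_[suv, vol]
    return (X - X.mean(0)) / X.std(0), y           # standardize features

X, y = make_nodes(400)

# One-hidden-layer network trained with backprop (logistic output).
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr = 0.2
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # per-node probability
    g = (p - y) / len(y)                           # dLoss/dlogit
    gh = np.outer(g, W2) * (1.0 - h**2)            # backprop through tanh
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Classifying every node in a whole-body scan amounts to running such a per-node decision at scale, which is why automating it makes quantitative treatment monitoring feasible in routine practice.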

3D hand-sensing wristband uses a Raspberry Pi for machine learning


Researchers from Cornell and the University of Wisconsin, Madison, have designed a wrist-mounted device that tracks the entire human hand in 3D. The device (pictured) uses the contours of the wearer's wrist to create an abstraction of 20 finger joint positions. The FingerTrak bracelet uses low-resolution thermal cameras that read the wrist contours, along with a tethered Raspberry Pi 4 and machine learning, to teach itself what the hand is doing based on these readings. "The most novel technical finding in this work is discovering that the contours of the wrist are enough to accurately predict the entire hand posture," said Cheng Zhang, assistant professor of information science and director of Cornell's new SciFi Lab, where FingerTrak was developed. "This finding allows the reposition of the sensing system to the wrist, which is more practical for usability."
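The learning step described above, mapping wrist-contour readings to 20 joint positions, can be sketched as a multi-output regression. The sketch below substitutes a synthetic linear ground truth and least squares for FingerTrak's actual pipeline, which learns from thermal-camera images; the dimensions and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in: low-dimensional wrist-contour features predicting 20
# finger-joint positions. The "true" mapping A here is synthetic.
n, contour_dim, joints = 500, 12, 20
A = rng.normal(size=(contour_dim, joints))          # hidden ground truth
contours = rng.normal(size=(n, contour_dim))
poses = contours @ A + rng.normal(0, 0.05, (n, joints))  # noisy training data

# Fit a multi-output linear regressor by least squares.
A_hat, *_ = np.linalg.lstsq(contours, poses, rcond=None)

# Predict the hand pose for a new wrist-contour reading and compare it
# against the noiseless ground truth.
new_contour = rng.normal(size=(1, contour_dim))
pred = new_contour @ A_hat
err = np.abs(pred - new_contour @ A).mean()
print(f"mean joint-position error: {err:.3f}")
```

Even this linear toy shows the core claim in miniature: if the contour features carry enough information about the pose, a learned mapping can recover all 20 joint positions from the wrist alone.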