
WEBEYETRACK: Scalable Eye-Tracking for the Browser via On-Device Few-Shot Personalization

Davalos, Eduardo, Zhang, Yike, Srivastava, Namrata, Thatigotla, Yashvitha, Salas, Jorge A., McFadden, Sara, Cho, Sun-Joo, Goodwin, Amanda, TS, Ashwin, Biswas, Gautam

arXiv.org Artificial Intelligence

With advancements in AI, new gaze estimation methods are exceeding state-of-the-art (SOTA) benchmarks, but their real-world application reveals a gap with commercial eye-tracking solutions. Factors like model size, inference time, and privacy often go unaddressed. Meanwhile, webcam-based eye-tracking methods lack sufficient accuracy, in particular due to head movement. To tackle these issues, we introduce WebEyeTrack, a framework that integrates lightweight SOTA gaze estimation models directly in the browser. It incorporates model-based head pose estimation and on-device few-shot learning with as few as nine calibration samples (k ≤ 9). WebEyeTrack adapts to new users, achieving SOTA performance with an error margin of 2.32 cm on GazeCapture and real-time inference speeds of 2.4 milliseconds on an iPhone 14. Our open-source code is available at https://github.com/RedForestAi/WebEyeTrack.
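The abstract's few-shot personalization can be illustrated with a toy sketch. WebEyeTrack itself fine-tunes a neural gaze model in the browser; the snippet below only shows the general calibration idea, using a hypothetical closed-form per-axis linear correction fit to nine (prediction, target) screen-coordinate pairs. All names and numbers here are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of few-shot gaze personalization: a base model's
# screen-coordinate predictions are corrected with a per-axis linear map
# fit by least squares to k <= 9 calibration samples. (The real system
# adapts a neural model on-device; this is only the general idea.)

def fit_axis(preds, targets):
    """Least-squares fit of target = a * pred + b for one axis."""
    n = len(preds)
    mean_p = sum(preds) / n
    mean_t = sum(targets) / n
    cov = sum((p - mean_p) * (t - mean_t) for p, t in zip(preds, targets))
    var = sum((p - mean_p) ** 2 for p in preds)
    a = cov / var if var else 1.0
    b = mean_t - a * mean_p
    return a, b

def calibrate(samples):
    """samples: list of ((pred_x, pred_y), (true_x, true_y)) pairs in cm."""
    preds, targets = zip(*samples)
    ax, bx = fit_axis([p[0] for p in preds], [t[0] for t in targets])
    ay, by = fit_axis([p[1] for p in preds], [t[1] for t in targets])
    return lambda x, y: (ax * x + bx, ay * y + by)

# Nine calibration points where the base model is biased 1 cm to the right:
samples = [((x + 1.0, y), (x, y)) for x in (0, 10, 20) for y in (0, 10, 20)]
correct = calibrate(samples)
print(correct(11.0, 5.0))  # prints (10.0, 5.0): the 1 cm bias is removed
```

The closed form keeps the "adaptation" step cheap enough to run per-user at calibration time, which is the same motivation the paper gives for doing few-shot learning on-device.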


All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning

Cui, Can, Deng, Ruining, Liu, Quan, Yao, Tianyuan, Bao, Shunxing, Remedios, Lucas W., Tang, Yucheng, Huo, Yuankai

arXiv.org Artificial Intelligence

The Segment Anything Model (SAM) is a recently proposed prompt-based segmentation model with a generic zero-shot segmentation capability. With this zero-shot capacity, SAM achieves impressive flexibility and precision on various segmentation tasks. However, the current pipeline requires manual prompts during the inference stage, which is still resource-intensive for biomedical image segmentation. In this paper, instead of using prompts during the inference stage, we introduce a pipeline, called all-in-SAM, that utilizes SAM through the entire AI development workflow (from annotation generation to model finetuning) without requiring manual prompts at inference. Specifically, SAM is first employed to generate pixel-level annotations from weak prompts (e.g., points, bounding boxes). Then, the pixel-level annotations are used to finetune the SAM segmentation model rather than training it from scratch. Our experimental results reveal two key findings: 1) the proposed pipeline surpasses state-of-the-art (SOTA) methods on a nuclei segmentation task on the public MoNuSeg dataset, and 2) using weak and few annotations for SAM finetuning achieves performance competitive with using strong pixel-wise annotations.
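The flow of the pipeline can be sketched end to end. The segmenter below is a hypothetical stand-in (a simple intensity threshold), not the real SAM API; it only shows the orchestration the abstract describes: weak prompts become pixel-level pseudo-masks, which then serve as finetuning labels.

```python
# Minimal sketch of the all-in-SAM idea with a hypothetical stub segmenter:
# weak prompts (points / boxes) -> pixel-level pseudo-masks -> training pairs.

def mask_from_point(image, point, thresh=0.5):
    """Stand-in for point-prompted segmentation: keep pixels whose
    intensity is close to the clicked pixel's."""
    px, py = point
    seed = image[py][px]
    return [[1 if abs(v - seed) <= thresh else 0 for v in row] for row in image]

def mask_from_box(image, box, thresh=0.5):
    """Stand-in for box-prompted segmentation: threshold inside the box."""
    x0, y0, x1, y1 = box
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 and image[y][x] > thresh else 0
             for x in range(len(image[0]))] for y in range(len(image))]

def build_training_set(image, weak_prompts):
    """Convert weak annotations into (image, pseudo-mask) pairs for finetuning."""
    pairs = []
    for kind, prompt in weak_prompts:
        fn = mask_from_point if kind == "point" else mask_from_box
        pairs.append((image, fn(image, prompt)))
    return pairs

# Toy 4x4 "image" with one bright nucleus in the upper-left corner:
image = [[0.9, 0.9, 0.1, 0.1],
         [0.9, 0.9, 0.1, 0.1],
         [0.1, 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, 0.1]]
pairs = build_training_set(image, [("point", (0, 0)), ("box", (0, 0, 1, 1))])
```

In the actual paper the stand-in functions are SAM itself, prompted once per nucleus at annotation time; the point is that no prompts are needed at inference after finetuning on the pseudo-masks.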


Global Big Data Conference

#artificialintelligence

When Tony Stark needs to travel to space in the original Iron Man movie, he asks his artificially intelligent (AI) assistant J.A.R.V.I.S. to make a suit that can survive harsh conditions. As AI specialist Kamal Choudhary explains: "The way I see it, what J.A.R.V.I.S. did is, it had a database of materials, scanned the database, found a suitable material, tested it, then synthesized an alloy that could survive space conditions. That's what we want our system to do, and that's why we called it JARVIS." Choudhary, a researcher at the National Institute of Standards and Technology (NIST), is the founder and developer of JARVIS (Joint Automated Repository for Various Integrated Simulations)--an open dataset designed to automate materials discovery and optimization. Writing in npj Computational Materials in December 2021, Choudhary and Brian DeCost (NIST) described the latest enhancements to JARVIS that apply AI to speed discovery. Combining graph neural networks with chemical and structural knowledge about materials, their Atomistic Line Graph Neural Network (ALIGNN) outperforms previously reported models on atomistic prediction tasks with very high accuracy and better or comparable model training speed. "ALIGNN can predict characteristics in seconds instead of months," Choudhary said. Beyond the inspiration from Iron Man, there was the Materials Genome Initiative. Originated in 2011 under President Obama, the initiative is a multi-agency federal effort to discover, manufacture, and deploy advanced materials twice as fast and at a fraction of the cost of traditional methods. NIST's original contribution to the initiative was the creation of a database of materials and their characteristics, obtained rigorously, using standardized, cutting-edge computing methods.
Several such databases have been established, but "what's particular about the JARVIS database is that it contains modules for various kinds of computational approaches," according to David Vanderbilt, professor of physics at Rutgers University, member of the National Academy of Sciences, and a contributor to the project. "There are many different theoretical levels on which you can approach the field."
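The "line graph" in ALIGNN's name refers to a standard graph construction: the atomistic graph has atoms as nodes and bonds as edges, and its line graph has bonds as nodes, with two bonds linked when they share an atom, which is how bond angles enter the message passing. The pure-Python construction below is only an illustration of that idea; ALIGNN itself operates on learned edge features, not this toy adjacency.

```python
# Line-graph construction: nodes of the line graph are the edges (bonds)
# of the original graph; two are adjacent iff they share an endpoint (atom).

def line_graph(edges):
    """Given an undirected graph as a list of (u, v) edges, return its
    line graph as an adjacency dict keyed by sorted edge tuples."""
    nodes = [tuple(sorted(e)) for e in edges]
    adj = {n: set() for n in nodes}
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if set(a) & set(b):  # shared atom => this bond pair forms an angle
                adj[a].add(b)
                adj[b].add(a)
    return adj

# Water-like molecule: atom 0 (oxygen) bonded to atoms 1 and 2 (hydrogens).
bonds = [(0, 1), (0, 2)]
lg = line_graph(bonds)
# The two O-H bonds share the oxygen, so the line graph links them:
# that single line-graph edge corresponds to the H-O-H bond angle.
```

Passing messages on both graphs at once lets the model see two-body (bond length) and three-body (bond angle) geometry, which is the structural knowledge the article credits for ALIGNN's accuracy.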


Is It Really Too Late to Learn New Skills?

The New Yorker

Among the things I have not missed since entering middle age is the sensation of being an absolute beginner. It has been decades since I've sat in a classroom in a gathering cloud of incomprehension (Algebra 2, tenth grade) or sincerely tried, lesson after lesson, to acquire a skill that was clearly not destined to play a large role in my life (modern dance, twelfth grade). Learning to ride a bicycle in my early thirties was an exception--a little mortifying when my husband had to run alongside the bike, as you would with a child--but ultimately rewarding. Less so was the time when a group of Japanese schoolchildren tried to teach me origami at a public event where I was the guest of honor--I'll never forget their sombre puzzlement as my clumsy fingers mutilated yet another paper crane. Like Tom Vanderbilt, a journalist and the author of "Beginners: The Joy and Transformative Power of Lifelong Learning" (Knopf), I learn new facts all the time but new skills seldom.


Discovery of aggressive cancer cell types made possible with machine learning techniques

#artificialintelligence

By applying unsupervised and automated machine learning techniques to the analysis of millions of cancer cells, Rebecca Ihrie and Jonathan Irish, both associate professors of cell and developmental biology, have identified new cancer cell types in brain tumors. Machine learning is a series of computer algorithms that can identify patterns within enormous quantities of data and get 'smarter' with more experience. This finding holds the promise of enabling researchers to better understand and target these cell types for research and therapeutics for glioblastoma--an aggressive brain tumor with high mortality--as well as the broader applicability of machine learning to cancer research. With their collaborators, Ihrie and Irish developed Risk Assessment Population IDentification (RAPID), an open-source machine learning algorithm that revealed coordinated patterns of protein expression and modification associated with survival outcomes. The article, "Unsupervised machine learning reveals risk stratifying glioblastoma tumor cells," was published online in the journal eLife on June 23.
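RAPID's actual procedure is more sophisticated, but the core idea the article describes, unsupervised grouping of cells by their protein measurements, can be sketched with a tiny two-cluster k-means in pure Python. The marker values and the deterministic initialization below are illustrative assumptions, not details from the study.

```python
# Toy two-cluster k-means over (marker1, marker2) expression values,
# as a minimal sketch of unsupervised cell-population discovery.

def kmeans2(points, iters=20):
    """Two-cluster k-means with deterministic init (first and last point)."""
    centers = [points[0], points[-1]]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # Assign each cell to the nearer center (squared distance).
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # Recompute each center as its cluster's centroid.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two synthetic cell populations with distinct marker levels:
cells = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
         (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
centers, clusters = kmeans2(cells)
# The low-expression and high-expression cells land in separate clusters.
```

In the study, the analogous clusters are then tested against patient survival data, which is what turns unsupervised structure into risk stratification.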


Artificial Intelligence Thinks like People with Autism

#artificialintelligence

There's a computer on the third floor of Vanderbilt's Featheringill Hall that scans geometric patterns, deciding which missing shapes would be most likely to fit in. It fills in those blanks about as well as a human 17-year-old would, and it's only getting smarter, thanks to a study of the way certain people on the autism spectrum see the world. Inspired by the writings of perhaps the most famous person on the spectrum, Temple Grandin, Assistant Professor of Computer Science Maithilee Kunda figured out how to write code that emulates the kinds of image-based thinking that Grandin used to design complicated farm equipment. The result is a form of artificial intelligence that allows researchers to study a model of human cognition, determine how it problem-solves and then tweak it to perform better. "Most of us think in a combination of lots of different things. We think in words, we think in pictures, we think in smells and feelings," Kunda said.


Leveraging AI Teaching in the Cloud for AI Teaching on Campus

AI Magazine

The Educational Advances in Artificial Intelligence column discusses and shares innovative educational approaches that teach or leverage AI and its many subfields at all levels of education (K-12, undergraduate, and graduate levels). In this column I describe my experience adapting the content and infrastructure from massive open online courses (MOOCs) to enhance my courses in the Department of Electrical Engineering and Computer Science at Vanderbilt University. I begin with my informal, early use of MOOC content and then move to two deliberately designed strategies for adapting MOOCs to campus (that is, wrappers and small private online classes [SPOCs]). I describe student reactions and touch on selected policy and institutional considerations. In the never-ending search for increasing student bang-for-the-buck, I was motivated to increase the bang, rather than reduce the buck, the latter being well above my pay grade.


Leveraging AI Teaching in the Cloud for AI Teaching on Campus

Fisher, Douglas H. (Vanderbilt University)

AI Magazine

The Educational Advances in Artificial Intelligence column discusses and shares innovative educational approaches that teach or leverage AI and its many subfields at all levels of education (K-12, undergraduate, and graduate levels). I credit these positive changes to the active in-class learning and a new enthusiasm for teaching, as well as the first-rate lectures by Stanford professors Jennifer Widom and Andrew Ng. I was pleased when students, enrolled in the Introduction to Artificial Intelligence MOOC CS188x at the University of California, Berkeley, came to my channel for remediation, taking word back to the MOOC's discussion forum. I required students in my graduate ... showed that students liked this SPOC format, although there were suggestions for better in-class and MOOC-content coordination. Had I tweaked my course and continued along this path, I might have achieved phenomenal success, but sadly I left the SPOC format behind.