Collaborating Authors

Heinrich


PrIINeR: Towards Prior-Informed Implicit Neural Representations for Accelerated MRI

Hemidi, Ziad Al-Haj, Kats, Eytan, Heinrich, Mattias P.

arXiv.org Artificial Intelligence

Accelerating Magnetic Resonance Imaging (MRI) reduces scan time but often degrades image quality. While Implicit Neural Representations (INRs) show promise for MRI reconstruction, they struggle at high acceleration factors due to weak prior constraints, leading to structural loss and aliasing artefacts. To address this, we propose PrIINeR, an INR-based MRI reconstruction method that integrates prior knowledge from pre-trained deep learning models into the INR framework. By combining population-level knowledge with instance-based optimization and enforcing dual data consistency, PrIINeR aligns both with the acquired k-space data and with the prior-informed reconstruction. Evaluated on the NYU fastMRI dataset, our method not only outperforms state-of-the-art INR-based approaches but also surpasses several state-of-the-art learning-based methods, significantly improving structural preservation and fidelity while effectively removing aliasing artefacts. PrIINeR bridges deep learning and INR-based techniques, offering a more reliable solution for high-quality, accelerated MRI reconstruction. The code is publicly available on https://github.com/multimodallearning/PrIINeR.
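The dual data consistency described in the abstract can be illustrated with a minimal sketch (this is not the authors' code; the function name, the simple FFT forward model, and the weight `lam` are all assumptions): the reconstructed image is penalized both for disagreeing with the acquired k-space samples and for drifting from the prior-informed reconstruction.

```python
import numpy as np

def dual_consistency_loss(image, kspace_meas, mask, prior_image, lam=0.1):
    """Illustrative sketch of a dual data-consistency objective:
    the reconstruction should agree with the acquired, undersampled
    k-space data AND stay close to a prior-informed reconstruction
    from a pre-trained model. `lam` weighting is an assumption."""
    kspace_pred = np.fft.fft2(image)  # forward model: 2D Fourier transform
    # consistency with the measured k-space at sampled locations only
    k_term = np.sum(np.abs(mask * (kspace_pred - kspace_meas)) ** 2)
    # consistency with the prior-informed image
    prior_term = np.sum(np.abs(image - prior_image) ** 2)
    return k_term + lam * prior_term
```

In an INR setting, `image` would be rendered by the coordinate network at each optimization step and this loss minimized with respect to the network's weights.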


The Best Tool to Protect Your Home From Disaster Might Be in Your Pocket

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Chris Heinrich will never forget the winter day he and his family evacuated their home in Altadena, California, as a vertical wall of flame was slowly bearing down on their neighborhood from the mountains. "It was dark," he told Slate. "There was no internet, my daughter was crying, the wind was blowing." Even as the fires approached, he said, he didn't really believe that their house would burn.


Communicating Likelihoods with Normalising Flows

Araz, Jack Y., Beck, Anja, Reboud, Méril, Spannowsky, Michael, van Dyk, Danny

arXiv.org Artificial Intelligence

We present a machine-learning-based workflow to model an unbinned likelihood from its samples. A key advancement over existing approaches is the validation of the learned likelihood using rigorous statistical tests of the joint distribution, such as the Kolmogorov-Smirnov test. Our method enables the reliable communication of experimental and phenomenological likelihoods for subsequent analyses. We demonstrate its effectiveness through three case studies in high-energy physics. To support broader adoption, we provide an open-source reference implementation, nabu.
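The kind of validation the abstract mentions can be sketched with a two-sample Kolmogorov-Smirnov statistic, comparing samples drawn from the learned flow against reference samples (a simplified one-dimensional illustration only; the paper tests the joint distribution, and this is not the nabu implementation):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two empirical CDFs, evaluated on the pooled sample.
    A 1D illustration of the test the workflow relies on."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

A statistic near 0 indicates the flow's samples are distributionally close to the reference; in practice the statistic is compared against its null distribution to obtain a p-value.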


AI for everybody: GOP, Dems unite behind public AI research center to 'democratize' the tech

FOX News

Fox News correspondent Gillian Turner has the latest on the president's focus amid calls for an impeachment inquiry on 'Special Report.' Republicans and Democrats in the Artificial Intelligence Caucus are proposing the creation of a public research center that will give people and organizations access to the tools they need to create their own AI systems, even if they don't have access to billions of dollars in research funding. Lawmakers proposed the "Creating Resources for Every American To Experiment with Artificial Intelligence Act," or the CREATE AI Act, a bill that would establish the National Artificial Intelligence Research Resource (NAIRR). In January, a federal task force called for the creation of this body and estimated it would need about $440 million per year to get off the ground. The CREATE AI Act doesn't authorize that specific level of funding, but the bill signals that both parties are interested in establishing the NAIRR in order to ensure that the billion- and trillion-dollar AI developers aren't the only ones developing this new technology.


Responsibility assignment won't solve the moral issues of artificial intelligence

#artificialintelligence

Overview: The multitude of AI ethics guidelines published over the last 10 years take for granted certain buzzwords, such as 'responsibility', without due inspection into their philosophical foundations and whether they are actually fit for purpose. This paper challenges the notion that 'responsibility' is suitable and sufficient for AI ethics work. We have all seen the AI ethics buzzwords by now: 'explainability', 'transparency', and the big one, 'responsibility'. But what do these buzzwords offer in practice? This paper challenges the notion that AI ethicists can gain anything meaningful from employing the catch-all term 'responsibility'. Responsibility is disassembled into differentiated parts, 'accountability', 'liability', and 'praise and blameworthiness', which each offer unique insights into the ethical challenges AI poses.


Heinrich

AAAI Conferences

Self-play Monte Carlo Tree Search (MCTS) has been successful in many perfect-information two-player games. Although these methods have been extended to imperfect-information games, so far they have not achieved the same level of practical success or theoretical convergence guarantees as competing methods. In this paper we introduce Smooth UCT, a variant of the established Upper Confidence Bounds Applied to Trees (UCT) algorithm. Smooth UCT agents mix in their average policy during self-play and the resulting planning process resembles game-theoretic fictitious play. When applied to Kuhn and Leduc poker, Smooth UCT approached a Nash equilibrium, whereas UCT diverged. In addition, Smooth UCT outperformed UCT in Limit Texas Hold'em and won 3 silver medals in the 2014 Annual Computer Poker Competition.
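The mixing step described in the abstract, selecting by the UCB rule with some probability and otherwise sampling from the empirical average policy, can be sketched as follows (an illustrative sketch, not the authors' implementation; the function names, the fixed `eta`, and the exploration constant `c` are assumptions, and the paper uses a decaying mixing schedule):

```python
import math
import random

def smooth_uct_select(actions, visits, values, total_visits,
                      eta=0.9, c=1.0, rng=random):
    """With probability eta act by the UCB1 rule; otherwise sample
    from the average policy given by normalized visit counts, which
    makes the planning process resemble fictitious play."""
    if rng.random() < eta or total_visits == 0:
        # standard UCT: mean value estimate plus exploration bonus
        def ucb(a):
            if visits[a] == 0:
                return float("inf")  # try unvisited actions first
            return (values[a] / visits[a]
                    + c * math.sqrt(math.log(total_visits) / visits[a]))
        return max(actions, key=ucb)
    # average policy: sample actions in proportion to their visit counts
    weights = [visits[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]
```

With `eta = 1.0` this reduces to plain UCT; lowering `eta` over the course of self-play shifts the agent toward its own average strategy.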


Congress Seeks Creation of National Research Cloud for Artificial Intelligence

#artificialintelligence

A bipartisan cadre of tech-focused legislators in the House and Senate have introduced legislation that would direct the federal government to develop a national cloud computing infrastructure for artificial intelligence research. Introduced by Sens. Rob Portman, R-Ohio, and Martin Heinrich, D-N.M., Thursday, the National Cloud Computing Task Force Act would convene a mix of technical experts from academia, industry and government. The group would develop a nuanced roadmap for how the nation should build, deploy, govern and sustain a national research cloud for AI. "With China focused on toppling the United States' leadership in AI, we need to redouble our efforts with a sustained commitment to the best and brightest by developing a national research cloud to ensure our technical researchers get the tools they need to succeed," Portman said in a statement. "By democratizing access to computing power we ensure that any American with computer science talent can pursue their good ideas."


Government Should Address Potential Bias in Artificial Intelligence, Lawmakers Say

#artificialintelligence

Bias in artificial intelligence could critically impact the deployment, adoption and evolution of the technology, Democratic lawmakers said in Washington Wednesday. They also detailed their plans to combat the issue and help America maintain its position as a global leader in AI. "We have a real concern about bias in data--is that data bias in some historical way or in some intentional way?" Rep. Jerry McNerney, D-Calif., told Politico's Steven Overly at an event held by the publication and the technology corporation Intel. "And we want to make sure the data doesn't harm groups of people or sectors of the country." McNerney, who holds a doctorate in mathematics, elaborated on how algorithm developers implicitly assume that whatever they produce will be logical. Often, it isn't until they look back at the results that they realize they've inserted bias through actions like failing to make important considerations during the process or using data that excluded certain groups of people.


State leaders discuss artificial intelligence developments

#artificialintelligence

ALBUQUERQUE, N.M. (KRQE) - New Mexico has a long legacy of high-tech projects coming out of our national labs. Now, tech experts from around the state are brainstorming ways to keep New Mexico at the forefront of developing the next wave of technological advances. Leaders say New Mexico has the potential to lead the charge in artificial intelligence--from military defense to health care and agriculture. "AI is going to touch every portion of our lives. It's going to affect the kinds of foods we buy, what we put into our body, how medicine works, how we find information, how we educate our children," says Mark Johnson.


U.S. Senators propose legislation to fund national AI strategy

#artificialintelligence

U.S. Senators Rob Portman (R-OH), Martin Heinrich (D-NM), and Brian Schatz (D-HI) today proposed the Artificial Intelligence Initiative Act, legislation to pump $2.2 billion into federal research and development and create a national AI strategy. The $2.2 billion would be doled out over the course of the next 5 years to federal agencies like the Department of Energy, the Department of Commerce's National Institute of Standards and Technology (NIST), and others. The legislation would establish a National AI Coordination Office to lead federal AI efforts, require the National Science Foundation (NSF) to study the effects of AI on society and education, and allocate $40 million a year to NIST to create AI evaluation standards. The bill would also provide $20 million a year from 2020 to 2024 to fund the creation of 5 multidisciplinary AI research centers, with one focused solely on K-12 education. The bill's plans to open national AI centers closely resemble the 20-year AI research program proposed by the Computing Consortium.