
Use of AI in Gastroenterology Can Move Beyond "Cool Tools" to Improve Practice Efficiency

#artificialintelligence

As artificial intelligence (AI) technology in the gastrointestinal field continues to advance, speakers at Digestive Disease Week 2022 discussed …


MIT, Harvard scientists find AI can recognize race from X-rays -- and nobody knows how - The Boston Globe

#artificialintelligence

A doctor can't tell if somebody is Black, Asian, or white just by looking at their X-rays. The study found that an artificial intelligence program trained to read X-rays and CT scans could predict a person's race with 90 percent accuracy. But the scientists who conducted the study say they have no idea how the computer figures it out. "When my graduate students showed me some of the results that were in this paper, I actually thought it must be a mistake," said Marzyeh Ghassemi, an MIT assistant professor of electrical engineering and computer science and coauthor of the paper, which was published Wednesday in the medical journal The Lancet Digital Health. "I honestly thought my students were crazy when they told me."


AI transformer models touted to help design new drugs

#artificialintelligence

Special report: AI can study chemical molecules in ways scientists can't comprehend, automatically predicting complex protein structures and designing new drugs, despite having no real understanding of science. The power to design new drugs at scale is no longer limited to Big Pharma. Startups armed with the right algorithms, data, and compute can invent tens of thousands of molecules in just a few hours. New machine learning architectures, including transformers, are automating parts of the design process, helping scientists develop new drugs for difficult diseases like Alzheimer's, cancer, or rare genetic conditions. In 2017, researchers at Google came up with a method to build increasingly bigger and more powerful neural networks.
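The 2017 Google method referred to here is the transformer, whose core operation is scaled dot-product attention. Below is a minimal NumPy sketch of that one operation; the toy token count and embedding size are illustrative assumptions, not details from the article:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of values

# Toy self-attention: 3 "tokens" (e.g. atoms in a molecule), embedding dim 4
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```

Stacking this operation with feed-forward layers is what lets such models scale, since every token attends to every other token in parallel.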


La veille de la cybersécurité

#artificialintelligence

MIT and Mass General Brigham researchers and physicians connect in person to bring AI into mainstream health care. Even as rapid improvements in artificial intelligence have led to speculation over significant changes in the health care landscape, the adoption of AI in health care has been minimal. A 2020 survey by Brookings, for example, found that less than 1 percent of job postings in health care required AI-related skills. The Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), a research center within the MIT Schwarzman College of Computing, recently hosted the MITxMGB AI Cures Conference in an effort to accelerate the adoption of clinical AI tools by creating new opportunities for collaboration between researchers and physicians focused on improving care for diverse patient populations. Once virtual, the AI Cures Conference returned to in-person attendance at MIT's Samberg Conference Center on the morning of April 25, welcoming over 300 attendees primarily made up of researchers and physicians from MIT and Mass General Brigham (MGB).


Traditional vs Deep Learning Algorithms in the Telecom Industry -- Cloud Architecture and Algorithm Categorization

#artificialintelligence

The unprecedented growth of mobile devices, applications, and services has placed extreme demand on mobile and wireless networking infrastructure. Rapid research and development of 5G systems has found ways to support mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Moreover, inference over heterogeneous mobile data from distributed devices faces challenges due to computational and battery power limitations. ML models deployed at edge servers are therefore constrained to be lightweight, trading off model complexity against accuracy. Model compression, pruning, and quantization are also widely used.
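The pruning and quantization mentioned above can be sketched in a few lines. The sparsity level and 8-bit width below are illustrative assumptions, not figures from the article:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights):
    """Uniform 8-bit quantization: floats -> int8 codes plus one scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale  # dequantize with q * scale

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4)).astype(np.float32)   # toy weight matrix
pruned = magnitude_prune(W, sparsity=0.5)
q, scale = quantize_int8(pruned)
print((pruned == 0).mean())  # fraction of weights removed
```

The point for edge deployment is that the int8 codes take a quarter of the memory of float32 weights, and the zeroed entries can be skipped entirely, at some cost in accuracy.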


Introduction to Artificial Intelligence for Beginners - Analytics Vidhya

#artificialintelligence

We have come a long way in the field of machine learning and deep learning, and we are now very much interested in AI (Artificial Intelligence); in this article we are going to introduce you to AI. A short, precise answer to "what is Artificial Intelligence?" depends on the person you are explaining it to. A person with little understanding of the technology will relate it to "robots": they will say that AI is a Terminator-like object that can react and think on its own. If you ask the same question of an AI expert, they will say that "it is a set of patterns and algorithms that can generate solutions to everything without being explicitly instructed to do that work".


Apocalypse now? What quantum computing can learn from AI

#artificialintelligence

A few years ago, many people imagined a world run by robots. The promises and challenges associated with artificial intelligence (AI) were widely discussed as this technology moved out of the labs and into the mainstream. Many of these predictions seemed contradictory. Robots were mooted to steal our jobs, but also to create millions of new ones. As more applications were rolled out, AI hit the headlines for all the right (and wrong) reasons, promising everything from revolutionizing the healthcare sector to making light work of the weight of data now created in our digitized world.


Resolution of the Burrows-Wheeler Transform Conjecture

Communications of the ACM

The Burrows-Wheeler Transform (BWT) is an invertible text transformation that permutes the symbols of a text according to the lexicographical order of its suffixes. BWT is the main component of popular lossless compression programs (such as bzip2) as well as recent powerful compressed indexes (such as the r-index [7]), central in modern bioinformatics. The compressibility of BWT is quantified by the number r of equal-letter runs in the output. Despite the practical significance of BWT, no nontrivial upper bound on r is known. By contrast, the sizes of nearly all other known compression methods have been shown to be either always within a polylog n factor (where n is the length of the text) of z, the size of the Lempel-Ziv (LZ77) parsing of the text, or much larger in the worst case (by an n^ε factor for some ε > 0). In this paper, we show that r = O(z log² n) holds for every text. This result has numerous implications for text indexing and data compression; in particular: (1) it proves that many results related to BWT automatically apply to methods based on LZ77, for example, it is possible to obtain the functionality of the suffix tree in O(z polylog n) space; (2) it shows that many text processing tasks can be solved in the optimal time assuming the text is compressible using LZ77 by a sufficiently large polylog n factor; and (3) it implies the first nontrivial relation between the number of runs in the BWT of a text and of its reverse. In addition, we provide an O(z polylog n)-time algorithm converting the LZ77 parsing into the run-length compressed BWT. To achieve this, we develop several new data structures and techniques of independent interest. In particular, we define compressed string synchronizing sets (generalizing the recently introduced powerful technique of string synchronizing sets [11]) and show how to efficiently construct them.
Next, we propose a new variant of wavelet trees for sequences of long strings, establish a nontrivial bound on their size, and describe efficient construction algorithms. Finally, we develop new indexes that can be constructed directly from the LZ77 parsing and efficiently support pattern matching queries on text substrings. Lossless data compression aims to exploit redundancy in the input data to represent it in a small space.
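For intuition, the quantities r and z can both be computed naively on small strings. The sketch below is quadratic-time toy code, not the efficient constructions described in the paper, and the "\0" sentinel is an assumption standing in for the usual smallest end-of-text symbol:

```python
def bwt_runs(text):
    """Number r of equal-letter runs in the Burrows-Wheeler Transform of text."""
    s = text + "\0"  # unique sentinel, lexicographically smallest symbol
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    last = "".join(rot[-1] for rot in rotations)  # BWT is the last column
    return 1 + sum(last[i] != last[i - 1] for i in range(1, len(last)))

def lz77_phrases(text):
    """Number z of phrases in a greedy LZ77 parsing (naive O(n^2) scan)."""
    z, i, n = 0, 0, len(text)
    while i < n:
        length = 0
        for j in range(i):  # longest earlier (possibly overlapping) match
            l = 0
            while i + l < n and text[j + l] == text[i + l]:
                l += 1
            length = max(length, l)
        i += max(length, 1)  # a copied phrase, or a single fresh letter
        z += 1
    return z

print(bwt_runs("banana"), lz77_phrases("banana"))
```

On highly repetitive inputs both r and z stay small while n grows, which is exactly the regime where the r = O(z log² n) bound is informative.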


SoundWatch

Communications of the ACM

We present SoundWatch, a smartwatch-based deep learning application to sense, classify, and provide feedback about sounds occurring in the environment.


Challenges, Experiments, and Computational Solutions in Peer Review

Communications of the ACM

While researchers are trained to do research, there is little training for peer review. Several initiatives and experiments have looked to address this challenge. Recently, the ICML 2020 conference adopted a method to select and then mentor junior reviewers, who would not have been asked to review otherwise, with a motivation of expanding the reviewer pool to address the large volume of submissions [43]. An analysis of their reviews revealed that the junior reviewers were more engaged through the various stages of the process as compared to conventional reviewers. Moreover, the conference asked meta reviewers to rate all reviews, and 30% of reviews written by junior reviewers received the highest rating from meta reviewers, in contrast to 14% for the main pool. Training reviewers at the beginning of their careers is a good start but may not be enough. There is some evidence [8] that the quality of an individual's reviews falls over time, at a slow but steady rate, possibly because of increasing time constraints or in reaction to poor-quality reviews they themselves receive.