A federal rule that requires health care providers to offer patients free, convenient and secure electronic access to their personal medical records went into effect earlier this year. However, providing patients with access to clinician notes, test results, progress documentation and other records doesn't automatically equip them to understand those records or make appropriate health decisions based on what they read. "Medicalese" can trip up even the most highly educated layperson, and studies have shown that low health literacy is associated with poor health outcomes. University of Notre Dame researcher John Lalor, an assistant professor of information technology, analytics and operations at the Mendoza College of Business, is part of a team working on a web-based natural language processing system that could increase the health literacy of patients who access their records through a patient portal. NoteAid, a project based at the University of Massachusetts Amherst, conveniently translates medical jargon for health care consumers.
Every company may want to put artificial intelligence to work, but most companies aren't blessed with the ability to hire battalions of data scientists, nor is that necessarily the right approach. As Gartner analyst Svetlana Sicular once argued, often the best possible data scientist is the person you already employ who knows your data and simply needs help figuring out how to unlock it. For many business line owners, this kind of approach may make the most sense as they seek to be smarter with the data they already have. One company working to enable this vision is Cambridge, Massachusetts-based machine learning startup Akkio, which pairs AI with low-code tooling in an attempt to democratize AI. I caught up with company co-founder and COO Jon Reilly to learn more.
During the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, polymerase chain reaction (PCR) tests were generally reported only as binary positive or negative outcomes. However, these test results contain a great deal more information than that: as viral load declines exponentially, the PCR cycle threshold (Ct) increases linearly. Hay et al. developed an approach for extracting epidemiological information from the Ct values obtained from PCR tests used in surveillance in a variety of settings (see the Perspective by Lopman and McQuade). Although there are challenges to relying on single Ct values for individual-level decision-making, even a limited aggregation of data from a population can inform on the trajectory of the pandemic. Therefore, across a population, an increase in aggregated Ct values indicates that a decline in cases is occurring.

Science, abh0635, this issue p. eabh0635; see also the Perspective, abj4185.

### INTRODUCTION

Current approaches to epidemic monitoring rely on case counts, test positivity rates, and reported deaths or hospitalizations. These metrics, however, provide a limited and often biased picture because of testing constraints, unrepresentative sampling, and reporting delays. Random cross-sectional virologic surveys can overcome some of these biases by providing snapshots of infection prevalence but currently offer little information on the epidemic trajectory without sampling across multiple time points.

### RATIONALE

We develop a new method that uses information inherent in cycle threshold (Ct) values from reverse transcription quantitative polymerase chain reaction (RT-qPCR) tests to robustly estimate the epidemic trajectory from multiple, or even a single, cross section of positive samples. Ct values are related to viral loads, which depend on the time since infection; Ct values are generally lower when the time between infection and sample collection is short.
Despite variation across individuals, samples, and testing platforms, Ct values provide a probabilistic measure of time since infection. We find that the distribution of Ct values across positive specimens at a single time point reflects the epidemic trajectory: a growing epidemic will necessarily have a high proportion of recently infected individuals with high viral loads, whereas a declining epidemic will have more individuals with older infections and thus lower viral loads. Because of these changing proportions, the epidemic trajectory or growth rate should be inferable from the distribution of Ct values collected in a single cross section, and multiple successive cross sections should enable identification of the longer-term incidence curve. Moreover, understanding the relationship between sample viral loads and epidemic dynamics provides additional insight into why viral loads from surveillance testing may appear higher for emerging viruses or variants and lower for outbreaks that are slowing, even absent changes in individual-level viral kinetics.

### RESULTS

Using a mathematical model for population-level viral load distributions calibrated to known features of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) viral load kinetics, we show that the median and skewness of Ct values in a random sample change over the course of an epidemic. By formalizing this relationship, we demonstrate that Ct values from a single random cross section of virologic testing can estimate the time-varying reproductive number of the virus in a population, which we validate using data collected from comprehensive SARS-CoV-2 testing in long-term care facilities. Using a more flexible approach to modeling infection incidence, we also develop a method that can reliably estimate the epidemic trajectory in more complex populations, where interventions may be implemented and relaxed over time.
This method performed well in estimating the epidemic trajectory in the state of Massachusetts using routine hospital admissions RT-qPCR testing data, accurately replicating estimates from other sources for the entire state.

### CONCLUSION

This work provides a new method for estimating the epidemic growth rate and a framework for robust epidemic monitoring using RT-qPCR Ct values that are often simply discarded. By deploying single or repeated (but small) random surveillance samples and making the best use of the semiquantitative testing data, we can estimate epidemic trajectories in real time and avoid biases arising from nonrandom samples or changes in testing practices over time. Understanding the relationship between population-level viral loads and the state of an epidemic reveals important implications and opportunities for interpreting virologic surveillance data. It also highlights the need for such surveillance, as these results show how to use it most informatively.

[Figure: Ct values reflect the epidemic trajectory and can be used to estimate incidence. (A and B) Whether an epidemic has rising or falling incidence will be reflected in the distribution of times since infection (A), which in turn affects the distribution of Ct values in a surveillance sample (B). (C) These values can be used to assess whether the epidemic is rising or falling and to estimate the incidence curve.]

Estimating an epidemic's trajectory is crucial for developing public health responses to infectious diseases, but case data used for such estimation are confounded by variable testing practices. We show that the population distribution of viral loads observed under random or symptom-based surveillance, in the form of cycle threshold (Ct) values obtained from reverse transcription quantitative polymerase chain reaction testing, changes during an epidemic. Thus, Ct values from even limited numbers of random samples can provide improved estimates of an epidemic's trajectory.
Combining data from multiple such samples improves the precision and robustness of this estimation. We apply our methods to Ct values from surveillance conducted during the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic in a variety of settings and offer alternative approaches for real-time estimates of epidemic trajectories for outbreak management and response.
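The population-level intuition behind this result (a growing epidemic skews toward recent infections, and therefore toward lower Ct values) can be sketched in a toy simulation. This is not the authors' calibrated model: the triangular viral-kinetics curve, the Ct intercept of 40, the slope of 3.3 cycles per log10 of viral load, and the exponential weighting of infection ages are all illustrative assumptions chosen for simplicity.

```python
import math
import random

def ct_given_age(age_days):
    """Map time since infection (days) to an expected Ct value.

    Assumed kinetics: log10 viral load rises linearly to a peak of 7
    at day 5, then decays by 0.5 log10 per day. Ct is linear in log10
    viral load (roughly 3.3 cycles per tenfold change in load).
    """
    peak_day, peak_log10 = 5.0, 7.0
    if age_days <= peak_day:
        log10_v = peak_log10 * age_days / peak_day
    else:
        log10_v = max(0.0, peak_log10 - 0.5 * (age_days - peak_day))
    return 40.0 - 3.3 * log10_v

def sample_ct(growth_rate, n=20000, horizon=30, rng=random.Random(0)):
    """Draw Ct values from a cross section of detectable infections.

    If incidence grows at `growth_rate` per day, the share of current
    infections that are `a` days old is proportional to exp(-growth_rate * a):
    growth concentrates mass on recent (high-load) infections.
    """
    weights = [math.exp(-growth_rate * a) for a in range(horizon)]
    ages = rng.choices(range(horizon), weights=weights, k=n)
    cts = [ct_given_age(a) for a in ages]
    return [c for c in cts if c < 40.0]  # keep only PCR-detectable samples

def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]

growing = sample_ct(growth_rate=0.1)    # rising incidence
declining = sample_ct(growth_rate=-0.1) # falling incidence
print(f"median Ct, growing epidemic:   {median(growing):.1f}")
print(f"median Ct, declining epidemic: {median(declining):.1f}")
```

Under these assumptions the growing epidemic yields a markedly lower median Ct than the declining one, which is the signal the paper exploits: the cross-sectional Ct distribution alone carries information about the sign of the growth rate.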
Can artificial intelligence be employed to understand the context of computer code and write its own? Impressive strides are being made in this direction, promising to make the work of developers, and of non-developers working with low-code/no-code platforms, more productive and more focused on the business at hand. Last year, Intel, in conjunction with the Massachusetts Institute of Technology and the Georgia Institute of Technology, announced the creation of an automated engine designed to learn what a piece of software intends to do by studying the structure of the code and analyzing syntactic differences of other code with similar behavior. The goal of the effort "is to democratize the creation of software," said Justin Gottschlich, principal scientist at Intel. "When fully realized, machine programming will enable everyone to create software by expressing their intention in whatever fashion that's best for them, whether that's code, natural language or something else." OpenAI's GPT-3 (Generative Pre-trained Transformer 3) can also be employed to automatically generate computer code.
It has been said that robotics is the most challenging area of machine learning: even simple things, such as moving a robotic arm a small distance, pose an incredibly complex engineering challenge. You can imagine, then, that it is a big feat to apply machine learning to make a robotic arm help a human put on a jacket. Researchers at the Massachusetts Institute of Technology on Monday published the details of a study in which they demonstrated a robot arm helping a human dress, and explained why they claim the procedure is provably safe for people. In the demonstration, a robotic arm grips a vest, with the human's right arm through the armhole, and slowly tugs it upward to the shoulder. A video of the demo posted on YouTube compares how much faster the arm is than a traditionally engineered approach.
Investments in housing, transit and job training emerged as top priorities for the Bay State as it recovers from the pandemic, according to a future of work report released by the Baker administration. "The changing ways of working may shift what we think of as the 'center of gravity' here in Massachusetts away from the urban core and toward the rest of the state," Baker said at a Tuesday morning press conference at the Tufts Launchpad location for BioLabs in Boston, a recipient of a Baker administration Workforce Training Fund Program grant. The report estimates that Massachusetts will need to produce 125,000 to 200,000 housing units by 2030, a $1 billion investment, with a focus on aiding homeownership among communities of color. Baker also announced $240 million in funding for workforce training programs. The report says up to 400,000 people may need to change occupations over the next decade to keep up with workplace trends.
The resurgence of artificial intelligence (AI) is largely due to advances in pattern recognition enabled by deep learning, a form of machine learning that does not require explicit hard-coding of rules. The architecture of deep neural networks is loosely inspired by the biological brain and neuroscience. As with the biological brain, exactly why deep networks work is largely unexplained, and there is no single unifying theory. Recently, researchers at the Massachusetts Institute of Technology (MIT) revealed new insights about how deep learning networks work, helping to further demystify the black box of AI machine learning. The MIT research trio of Tomaso Poggio, Andrzej Banburski, and Qianli Liao at the Center for Brains, Minds, and Machines developed a new theory of why deep networks work and published their study on June 9, 2020 in PNAS (Proceedings of the National Academy of Sciences of the United States of America).
As rescue teams continue to search for survivors amid the rubble of the collapsed condo in Surfside, Florida, they have increasingly high-tech tools at their disposal. Massachusetts-based robotics company Teledyne Flir sent the Miami-Dade Fire Department the Flir FirstLook, a rugged but lightweight drone that "investigates dangerous and hazardous material while keeping its operator out of harm's way." Unlike human responders, FirstLook doesn't have to worry about smoke inhalation, can reach into cramped areas, and won't risk destabilizing the structure further. "In a collapse situation like this, the pile is structurally unsound and constantly vulnerable to shifting," Teledyne Flir vice president Tom Frost told The Washington Post. "It's much safer to have a robot crawl deeper into a void than to have a person crawling into that void," Frost said. About the size of a brick, FirstLook can even be thrown from a distance; if it lands upside down, it can right itself.
The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms. The archive was created as an ftp archive in 1987 by David Aha and fellow graduate students at UC Irvine. Since that time, it has been widely used by students, educators, and researchers all over the world as a primary source of machine learning data sets. As an indication of the impact of the archive, it has been cited over 1000 times, making it one of the top 100 most cited "papers" in all of computer science. The current version of the website was designed in 2007 by Arthur Asuncion and David Newman, in collaboration with Rexa.info at the University of Massachusetts Amherst.
We all have a craving for chocolate now and again, but not usually when we first wake up. However, a new study claims that eating the sugary snack for breakfast could actually have 'unexpected benefits' by helping your body burn fat. Researchers in Boston, Massachusetts gave 100 grams of milk chocolate to 19 post-menopausal women within one hour after waking up and within one hour before bedtime. Starting the day with chocolate could actually help the body burn fat, scientists at Brigham and Women's Hospital in Boston say. That is about the equivalent of two standard-sized Mars bars (58g), although the researchers used standard milk chocolate containing 18.1g of cocoa. Remarkably, the team found that neither morning nor nighttime milk chocolate intake led to weight gain, likely because it acted as an appetite suppressant.