MIT researchers have concluded that the well-known ImageNet data set has "systematic annotation issues" and is misaligned with ground truth or direct observation when used as a benchmark data set. "Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for," the researchers write in a paper titled "From ImageNet to Image Classification: Contextualizing Progress on Benchmarks." "We believe that developing annotation pipelines that better capture the ground truth while remaining scalable is an important avenue for future research." When the Stanford University Vision Lab introduced ImageNet at the Conference on Computer Vision and Pattern Recognition (CVPR) in 2009, it was much larger than many previously existing image data sets. The ImageNet data set contains millions of photos and was assembled over the span of more than two years. ImageNet uses the WordNet hierarchy for data labels and is widely used as a benchmark for object recognition models.
Farmland bird species are declining over most of Europe. Birds breeding on the ground are particularly vulnerable because they are exposed to mechanical operations, like plowing and sowing, which take place in spring and often accidentally destroy nests. Researchers flew a drone carrying a thermal camera over agricultural fields to record images. These were then fed to an artificial intelligence algorithm capable of accurately identifying nests, a first step toward protecting them. Researchers tested the system in Southern Finland near the University of Helsinki's Lammi Biological Station, using wild nests with eggs of the lapwing (Vanellus vanellus).
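The detection step described above can be illustrated with a minimal sketch: incubated nests show up in thermal imagery as warm blobs against cooler soil, so a toy detector can threshold a temperature grid and group adjacent hot pixels into candidate nests. The grid format, threshold value, and flood-fill approach are illustrative assumptions, not the researchers' actual algorithm, which uses a learned model.

```python
def find_warm_blobs(frame, threshold=30.0):
    """Group adjacent above-threshold pixels in a 2D temperature
    grid (degrees C) into candidate nest detections."""
    rows, cols = len(frame), len(frame[0])
    seen = set()
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill one connected warm region.
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

# A toy 4x5 thermal frame: one warm two-pixel "nest" amid cool soil.
frame = [
    [12.0, 12.5, 13.0, 12.0, 12.0],
    [12.0, 31.0, 32.5, 12.0, 12.0],
    [12.0, 12.0, 12.0, 12.0, 12.0],
    [12.0, 12.0, 12.0, 12.0, 12.0],
]
print(len(find_warm_blobs(frame)))  # one candidate nest
```

In the real system a trained network replaces the fixed threshold, since vegetation and sun-warmed soil can also produce hot pixels.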
Machine learning was found to be superior to logistic risk scores in predicting intrahospital all-cause mortality after transcatheter aortic valve implantation (TAVI), according to study results published in Clinical Research in Cardiology. Current strategies for identifying patients eligible for TAVI rely on risk assessment tools such as the Society of Thoracic Surgeons Risk Score (STS score). The predictive power of these tools is poor, and improved options for risk stratification of TAVI patients are needed. In this retrospective analysis of data from 451 patients, investigators aimed to evaluate whether machine learning models could be used to predict clinical outcomes for patients after TAVI. A total of 83 features, including patient demographics, comorbidities, laboratory data, electro- and echocardiogram findings, and computed tomography (CT) results, were used to train and test the predictive models.
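As a hypothetical sketch of the general approach, the snippet below trains a logistic-regression classifier by gradient descent on synthetic two-feature tabular data standing in for patient features and binary mortality labels. The data, features, and model choice are illustrative assumptions only; the study's actual models and 83-feature dataset are not reproduced here.

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit logistic regression by batch gradient descent."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - yi
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, xi):
    """Return the predicted probability of the positive outcome."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic toy data: two scaled features, with higher combined
# values loosely associated with the positive (mortality) label.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] + x[1] > 1.0 else 0 for x in X]
w, b = train_logistic(X, y)
acc = sum((predict(w, b, xi) > 0.5) == yi for xi, yi in zip(X, y)) / len(X)
print(round(acc, 2))
```

In practice, model families such as gradient-boosted trees or random forests are common choices for this kind of tabular clinical data, and performance is judged on held-out patients rather than training accuracy.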
Two people looking at the exact same scene before them may perceive it differently as a result of a so-called 'fingerprint of misperception'. Researchers at the University of California, Berkeley found natural variation in the inherent visual ability to pinpoint the exact location and size of objects. A series of experiments on nine individuals found 'dramatic differences' in the ability to resolve fine details as well as discrepancies in judging location and size. The differences are due to how the brain processes visual stimuli, the academics believe, but the exact neural network responsible for the variation remains unknown. 'We assume our perception is a perfect reflection of the physical world around us, but this study shows that each of us has a unique visual fingerprint,' study lead author Miss Zixuan Wang, a UC Berkeley doctoral student in psychology, told Berkeley News.
AI techniques are being applied by researchers aiming to extend the life and monitor the health of the batteries that will power the next generation of electric vehicles and consumer electronics. Researchers at Cambridge and Newcastle Universities have designed a machine learning method that can predict battery health with ten times the accuracy of the current industry standard, according to an account in ScienceDaily. The promise is to develop safer and more reliable batteries. In a new way to monitor batteries, the researchers sent electrical pulses into them and monitored the response. The measurements were then processed by a machine learning algorithm to enable a prediction of the battery's health and useful life.
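The pulse-and-response idea can be sketched in miniature: extract simple features from a simulated voltage-response trace (peak amplitude and a decay rate) and fit a one-variable linear model mapping a feature to a synthetic "health" score. The simulated signals, the feature choice, and the assumption that degraded cells decay faster are all illustrative stand-ins, not the Cambridge/Newcastle method.

```python
import math

def response_features(samples, dt):
    """Extract peak amplitude and a crude exponential decay rate
    from a pulse-response voltage trace sampled every dt seconds."""
    peak = max(samples)
    i_peak = samples.index(peak)
    tail = samples[-1]
    t_span = (len(samples) - 1 - i_peak) * dt
    rate = math.log(peak / tail) / t_span  # assumes a decaying tail
    return peak, rate

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Simulate pulse responses for cells of varying health; for
# illustration, less healthy cells are assumed to decay faster.
dt = 0.01
healths = [1.0, 0.9, 0.8, 0.7, 0.6]
rates = []
for h in healths:
    trace = [math.exp(-(2.0 / h) * k * dt) for k in range(100)]
    _, rate = response_features(trace, dt)
    rates.append(rate)

a, c = fit_line(rates, healths)
pred = a * rates[2] + c  # predict health of the middle cell
print(round(pred, 2))
```

The real method uses far richer response measurements and a learned model rather than a hand-picked feature and a straight line, but the pipeline shape (probe, measure, featurize, predict) is the same.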
Akbar Solo

Researchers in Moscow and America have discovered how to use machine learning to grow artificial organs, especially to tackle blindness. Researchers from the Moscow Institute of Physics and Technology, the Ivannikov Institute for System Programming, and the Harvard Medical School-affiliated Schepens Eye Research Institute have developed a neural network capable of recognizing retinal tissues during the process of their differentiation in a dish. Unlike humans, the algorithm achieves this without the need to modify cells, making the method suitable for growing retinal tissue for developing cell replacement therapies to treat blindness and for conducting research into new drugs. The study was published in Frontiers in Cellular Neuroscience. How would this enable easier organ growth? It would expand the applications of the technology to multiple fields, including drug discovery and the development of cell replacement therapies to treat blindness. In multicellular organisms, the cells making up different organs and tissues are not the same.
The term 'covidiot' is coronavirus-era slang for someone who ignores recommendations to limit the spread of the deadly disease, and a new study reveals what makes these people dismiss the warnings. Researchers found that whether or not an individual decides to follow social distancing depends on how much information their working memory can store, a capacity that underpins mental abilities such as intelligence. Following a survey of 850 Americans, the team discovered that those with more working memory capacity were more likely to comply with recommendations during the early stage of the outbreak. The findings suggest that policy makers need to promote compliance behaviors, such as wearing a mask, in ways suited to individuals' general cognitive abilities, so that compliance requires less effortful decision-making. The coronavirus began to spread across the US earlier this year, and when it gained more traction, the Centers for Disease Control and Prevention (CDC) released a list of recommendations aimed at limiting the spread of the virus.
Testing for pathogens is a critical component of maintaining public health and safety. Having a method to rapidly and reliably test for harmful germs is essential for diagnosing diseases, maintaining clean drinking water, regulating food safety, conducting scientific research, and other important functions of modern society. In recent research, scientists from the University of California, Los Angeles (UCLA) have demonstrated that artificial intelligence (AI) can detect harmful bacteria in a water sample up to 12 hours faster than the current gold-standard Environmental Protection Agency (EPA) methods. In a new study published yesterday in Light: Science and Applications, the researchers created a time-lapse imaging platform that uses two separate deep neural networks (DNNs) for the detection and classification of bacteria. The team tested the high-throughput bacterial colony growth detection and classification system using water suspensions with added coliform bacteria, E. coli (including chlorine-stressed E. coli), K. pneumoniae and K. aerogenes, grown on chromogenic agar as the culture medium.
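The two-network pipeline described above can be sketched as two stages: a detector flags plate locations whose signal grows across time-lapse frames (colony growth), and a classifier assigns a label to each detection. The fixed growth threshold and the toy rate-based "classifier" below are illustrative stand-ins for the paper's two DNNs, meant only to show the pipeline's structure.

```python
def detect_growing_spots(frames, growth_threshold=0.5):
    """Stage 1: flag pixels whose intensity increases by more than
    growth_threshold between the first and last time-lapse frame."""
    first, last = frames[0], frames[-1]
    spots = []
    for r in range(len(first)):
        for c in range(len(first[0])):
            if last[r][c] - first[r][c] > growth_threshold:
                spots.append((r, c))
    return spots

def classify_spot(frames, spot):
    """Stage 2: toy classifier standing in for the species DNN;
    labels a detection by its per-frame growth rate."""
    r, c = spot
    rate = (frames[-1][r][c] - frames[0][r][c]) / (len(frames) - 1)
    return "fast-growing" if rate > 0.3 else "slow-growing"

# Three toy time-lapse frames of a 3x3 agar plate; one spot grows.
frames = [
    [[0.1, 0.1, 0.1], [0.1, 0.2, 0.1], [0.1, 0.1, 0.1]],
    [[0.1, 0.1, 0.1], [0.1, 0.6, 0.1], [0.1, 0.1, 0.1]],
    [[0.1, 0.1, 0.1], [0.1, 1.0, 0.1], [0.1, 0.1, 0.1]],
]
detections = detect_growing_spots(frames)
labels = [classify_spot(frames, s) for s in detections]
print(detections, labels)
```

Splitting detection from classification lets the first stage run cheaply on every frame while the second, heavier stage examines only the few candidate colonies, which is why early detection can beat culture-based methods on speed.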
A new study published in the Proceedings of the National Academy of Sciences claims compliance in America with social distancing during the early stages of the coronavirus pandemic is linked to working memory. The study, "Working memory capacity predicts individual differences in social-distancing compliance during the COVID-19 pandemic in the United States," assessed the working memory, personality, mood and fluid intelligence of test subjects; the researchers surveyed 850 U.S. residents between March 13 and March 25. The study found a link between working memory and social distancing: subjects who perceived more benefits than costs, and who had higher levels of fluid intelligence, fairness and agreeableness, were more likely to follow the new social distancing rules. "The decision of whether or not to follow social distancing guidelines is a difficult one, especially when there is a conflict between the societal benefits (e.g., prevent straining public health resources) and personal costs (e.g., loss in social connection and financial challenges). This decision critically relies on our mental capacity in retaining multiple pieces of potentially conflicting information in our head, which is referred to as working memory capacity," study author Weizhen (Zane) Xie told PsyPost.
MIT has designed a robot that is capable of disinfecting the floor of a 4,000-square-foot warehouse in only half an hour, and it could one day be used to clean your local grocery store or school. The university's Computer Science and Artificial Intelligence Laboratory (CSAIL) worked with Ava Robotics -- a company that focuses on creating telepresence robots -- and the Greater Boston Food Bank (GBFB) to develop a robot that uses a custom UV-C light to disinfect surfaces and neutralize aerosolized forms of the coronavirus. Development on this project began in early April, and one of the researchers said that it came in direct response to the pandemic. The results have been encouraging enough that the researchers say autonomous UV disinfection could be done in other environments such as supermarkets, factories and restaurants. Covid-19 mainly spreads via airborne transmission, and it is capable of remaining on surfaces for several days.