An interdisciplinary team of researchers from the University of Missouri, Children's Mercy Kansas City, and Texas Children's Hospital has used a new data-driven approach to learn more about persons with Type 1 diabetes, who account for about 5-10% of all diabetes diagnoses. The team gathered its information through health informatics and applied artificial intelligence (AI) to better understand the disease. In the study, the team analyzed publicly available, real-world data from about 16,000 participants enrolled in the T1D Exchange Clinic Registry. By applying a contrast pattern mining algorithm developed at the MU College of Engineering, the team was able to identify major differences in health outcomes among people living with Type 1 diabetes who do or do not have an immediate family history of the disease. Chi-Ren Shyu, the director of the MU Institute for Data Science and Informatics (MUIDSI), led the AI approach used in the study and said the technique is exploratory.
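The study's own implementation isn't reproduced here, but the core idea of contrast pattern mining is simple: enumerate combinations of traits and keep those whose frequency in one group (say, patients with a family history) sharply exceeds their frequency in the other. A minimal sketch on toy data -- all trait names, thresholds, and records below are hypothetical illustrations, not the registry's actual features:

```python
from itertools import combinations

def support(pattern, records):
    """Fraction of records containing every item in the pattern."""
    return sum(1 for r in records if pattern <= r) / len(records)

def contrast_patterns(group_a, group_b, min_support=0.3, min_growth=2.0, max_len=2):
    """Return patterns frequent in group_a whose support is at least
    min_growth times their support in group_b (so-called emerging patterns)."""
    items = sorted(set().union(*group_a, *group_b))
    results = []
    for k in range(1, max_len + 1):
        for combo in combinations(items, k):
            p = frozenset(combo)
            sa, sb = support(p, group_a), support(p, group_b)
            growth = sa / sb if sb > 0 else float("inf")
            if sa >= min_support and growth >= min_growth:
                results.append((p, sa, sb, growth))
    return results

# Hypothetical patient records as sets of clinical traits
with_history = [{"dka", "pump"}, {"dka"}, {"dka", "cgm"}, {"cgm"}]
without_history = [{"dka"}, {"cgm"}, {"pump"}, {"cgm", "pump"}]
patterns = contrast_patterns(with_history, without_history)
```

Real contrast mining algorithms prune this exponential search rather than enumerating exhaustively, but the selection criterion -- support in one group divided by support in the other -- is the same.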
St. Louis, Mo. (Ivanhoe Newswire) -- Artificial intelligence, or AI, allows machines to work more efficiently and solve problems faster. AI is all the buzz in the healthcare industry right now. And now, AI may also help to prevent some diseases. The same technology used in self-driving cars, smart assistants and disease mapping may also help to solve one of health care's biggest problems--how to stave off dementia. "What we're trying to do is intervene at that point when it starts to sharply decline to bring those skills back up," shared Adam Woods, PhD, University of Florida Center for Cognitive Aging and Memory.
For cartographers and cartophiles, Harold Fisk's 1944 maps of the lower Mississippi River are a seminal work. The centerpiece of his report was 15 maps showing the meandering Mississippi and its historical floodplains stretching from Missouri to southern Louisiana. More than seven decades later, Daniel Coe, a cartographer for the Washington Geological Survey, wanted to re-create Fisk's maps with greater accuracy and a new aesthetic. Coe had the advantage of hyperprecise U.S. Geological Survey (USGS) data collected using lidar, a system of laser pulses sent from aircraft to measure topography. The lasers detect the river's shape along with everything around it--every house, tree, and road.
In public proceedings, the Legal Board of Appeal of the EPO confirmed that under the European Patent Convention (EPC), an inventor designated in a patent application must be a human being. This was the judgement in combined cases J 8/20 and J 9/20, in which the board dismissed the applicant's appeal. Both applications were filed by Missouri physicist Stephen Thaler, whose AI system DABUS had made the inventions. DABUS, short for Device for the Autonomous Bootstrapping of Unified Sentience, is a computer system programmed to invent by itself. It is, basically, a swarm of disconnected neural nets that continuously generate thought processes and even memories that can, over time, yield new and inventive outputs independently.
What your face looks like is determined almost entirely by the DNA you inherit. This has led to the claim that the millions of anonymised genomes shared for medical research could be linked to specific individuals via photos shared on social media – but the risk is very low, according to Rajagopal Venkatesaramani at Washington University in St Louis, Missouri, and his colleagues. The researchers studied the genomic data and online photos of 126 individuals, then tried to match faces to genomes. They worked backwards from the faces, using AI to analyse the photos and predict gene variants, then looking for genomes with those predicted variants. Given a subset of just 10 individuals, the team was able to identify a quarter of them.
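The matching step the researchers describe -- scoring each candidate genome against the variants predicted from a photo and picking the best agreement -- can be sketched in a few lines. The SNP identifiers and alleles below are hypothetical placeholders, not the study's actual features:

```python
def match_score(predicted, genome):
    """Number of predicted variant calls that agree with a genome's calls."""
    return sum(1 for snp, allele in predicted.items() if genome.get(snp) == allele)

def best_match(predicted, genomes):
    """Return the ID of the genome agreeing with the most predicted variants."""
    return max(genomes, key=lambda gid: match_score(predicted, genomes[gid]))

# Hypothetical variants predicted by a face-analysis model
predicted = {"rs1": "A", "rs2": "G", "rs3": "T"}
# Hypothetical pool of anonymised genomes
genomes = {
    "g1": {"rs1": "A", "rs2": "G", "rs3": "C"},
    "g2": {"rs1": "A", "rs2": "C", "rs3": "C"},
}
```

The study's privacy finding follows from this setup: when the face-to-variant predictions are noisy, the top-scoring genome is usually the wrong one, so re-identification mostly fails even in small pools.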
The Tom Hanks vehicle "Finch" is "Cast Away" revisited. Instead of a young(ish) Hanks stranded on a desert island, we have an old Hanks stranded in a post-apocalyptic world, where a solar flare has destroyed the ozone layer. The UV rays are deadly. The temperature in direct sunlight is 150 degrees, and the empty, dust-covered and windswept streets are littered with desiccated corpses. Hanks plays the eponymous Finch Weinberg, a tech genius who lives in St. Louis, Mo., in a warehouse with his real dog Goodyear and his R2-D2-like robo-dog Dewey. Finch can only go outside if he wears a spacesuit-like outfit with a space helmet and cooling device attached.
Patent offices and courts around the world are being asked to tackle a similar question: can an artificial intelligence system qualify as an inventor for a patent? A test case making its way through several countries--from Saudi Arabia to Australia to Brazil--has spurred debate about advancements in artificial intelligence technology and questions about whether patent laws need to be revised to recognize machines as inventors. A judge in the U.S. District Court for the Eastern District of Virginia recently ruled that, under current U.S. law, AI can't be listed as an inventor on a patent. The ruling was in line with what U.S., British, and EU patent officials have concluded. The push to recognize AI as an inventor comes from Ryan Abbott, a University of Surrey law professor, and Stephen Thaler, a computer scientist from Missouri.
"This is the first study to address the most common intracranial tumors and to directly determine the tumor class or the absence of tumor from a 3D MRI volume," said Satrajit Chakrabarty, M.S., a doctoral student under the direction of Aristeidis Sotiras, Ph.D., and Daniel Marcus, Ph.D., in Mallinckrodt Institute of Radiology's Computational Imaging Lab at Washington University School of Medicine in St. Louis, Missouri. The six most common intracranial tumor types are high-grade glioma, low-grade glioma, brain metastases, meningioma, pituitary adenoma and acoustic neuroma. Each was documented through histopathology, which requires surgically removing tissue from the site of a suspected cancer and examining it under a microscope. "Non-invasive MRI may be used as a complement, or in some cases, as an alternative to histopathologic examination," he said. To build their machine learning model, called a convolutional neural network, Chakrabarty and researchers from Mallinckrodt Institute of Radiology developed a large, multi-institutional dataset of intracranial 3D MRI scans from four publicly available sources.
Figure: Coarse attention maps generated using GradCAM for correctly classified high-grade glioma (HGG), low-grade glioma (LGG), brain metastases (METS), meningioma (MEN), acoustic neuroma (AN), and pituitary adenoma (PA). For each pair, the postcontrast T1-weighted scan and the GradCAM attention map (overlaid on the scan) are shown; warmer and colder colors represent high and low pixel contributions toward a correct prediction, respectively.
A team of researchers at Washington University School of Medicine has developed a deep learning model capable of classifying a brain tumor as one of six common types from a single 3D MRI scan, according to a study published in Radiology: Artificial Intelligence.
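The GradCAM maps in the figure come from a standard computation: feature maps from a convolutional layer are weighted by their spatially averaged gradients, summed, and passed through a ReLU, so only regions that push the score toward the predicted class light up. A minimal NumPy sketch of that step for 2D maps (the study's actual network, layers, and tensor shapes are not specified here):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """GradCAM heatmap from conv feature maps and the gradients of the
    class score with respect to them; both arrays have shape (C, H, W)."""
    # alpha_c: global-average-pooled gradient for each channel
    weights = gradients.mean(axis=(1, 2))               # shape (C,)
    # weighted sum of feature maps across channels
    cam = np.tensordot(weights, feature_maps, axes=1)   # shape (H, W)
    cam = np.maximum(cam, 0)                            # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                                # normalise to [0, 1]
    return cam
```

In practice the resulting low-resolution map is upsampled to the input scan's size and overlaid as the warm/cold coloring shown in the figure.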