IBM details research on AI to measure Parkinson's disease progression


IBM says it has made progress toward developing ways to estimate the severity of Parkinson's symptoms by analyzing physical activity as motor impairment increases. In a paper published in the Nature journal Scientific Reports, scientists at IBM Research, Pfizer, the Spivack Center for Clinical and Translational Neuroscience, and Tufts created statistical representations of patients' movement that could be evaluated using AI either in-clinic or in a more natural setting, such as a patient's home. And at the 2020 Machine Learning for Healthcare Conference (MLHC), IBM and the Michael J. Fox Foundation intend to detail a disease progression model that pinpoints how far a person's Parkinson's has advanced. The human motor system relies on a series of discrete movements, like arm swinging while walking, running, or jogging, to perform tasks. These movements and the transitions linking them create patterns of activity that can be measured and analyzed for signs of Parkinson's, a disease that's anticipated to affect nearly 1 million people in the U.S. this year alone.

Securing Amazon Comprehend API calls with AWS PrivateLink


Amazon Comprehend now supports Amazon Virtual Private Cloud (Amazon VPC) endpoints via AWS PrivateLink so you can securely initiate API calls to Amazon Comprehend from within your VPC and avoid using the public internet. Amazon Comprehend is a fully managed natural language processing (NLP) service that uses machine learning (ML) to find meaning and insights in text. You can use Amazon Comprehend to analyze text documents and identify insights such as sentiment, people, brands, places, and topics in text. Using AWS PrivateLink, you can access Amazon Comprehend easily and securely by keeping your network traffic within the AWS network, while significantly simplifying your internal network architecture. It enables you to privately access Amazon Comprehend APIs from your VPC in a scalable manner by using interface VPC endpoints.
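Setting this up amounts to creating an interface VPC endpoint for the Comprehend service in your VPC. A minimal sketch using the AWS CLI is shown below; the VPC, subnet, and security-group IDs are placeholders you would replace with your own, and the region in the service name should match yours:

```shell
# Create an interface VPC endpoint for Amazon Comprehend via AWS PrivateLink.
# All resource IDs below are placeholders; us-east-1 is assumed as the region.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.comprehend \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
```

With private DNS enabled, SDK and CLI calls to the standard Comprehend endpoint resolve to the endpoint's private IP addresses inside the VPC, so application code does not need to change.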

Classifying galaxies with artificial intelligence


A research group, consisting of astronomers mainly from the National Astronomical Observatory of Japan (NAOJ), applied a deep-learning technique, a type of AI, to classify galaxies in a large dataset of images obtained with the Subaru Telescope. Thanks to the telescope's high sensitivity, as many as 560,000 galaxies have been detected in the images. It would be extremely difficult to visually process this large number of galaxies one by one for morphological classification. The AI enabled the team to perform the processing without human intervention. Automated techniques for extracting and judging features with deep-learning algorithms have developed rapidly since 2012.

Mass General using AI to analyze lung damage from COVID-19


Radiologists at Mass General are using AI to analyze lung damage data from COVID-19 and predict the best treatment for patients. Nvidia DGX A100 accelerators are supporting the task, which combines X-ray images of lungs with radiology data and other clinical insights to predict outcomes for COVID-19 patients, according to an Nvidia blog. Mass General Brigham used its own data to build the models. Once validated, they could be deployed in a hospital setting to track patient progress and offer treatment insights. Matthew D. Li, a radiology resident at Mass General and a member of the Martinos Center QTIM Lab, said there is information in radiologic images that is not available to doctors as they make treatment plans.

Artificial intelligence hype currently exceeding capability in medicine


Artificial intelligence in medicine is currently in the infancy stage of development, but in 10 to 20 years, the capability of the technology will catch up to the hype, a speaker said at Octane's virtual Ophthalmology Technology Summit. "In the future, ophthalmologists will have to learn about AI, or you'll be vulnerable to ophthalmologists who actually know AI," keynote speaker Anthony Chang, MD, MBA, MPH, MS, chief intelligence and innovation officer at Children's Hospital of Orange County, said at the meeting. The essence of AI in medicine is moving away from evidence-based medicine to achieve precision medicine and population health. A huge information and knowledge gap must be made up by intelligence-based medicine rather than evidence-based medicine, Chang said. "This is important when we think about precision medicine, when we have so many layers of information and data that need to be gathered to make the best decision for each individual patient," he said.

Google updates remote learning tools on Meet and Classroom


It's back-to-school season, and because of the coronavirus pandemic, many students will be hitting the books virtually this year. Consequently, Google for Education has announced a robust set of updates that will enhance Google Meet, Google Classroom and other aspects of the service. The updates were unveiled at Google's The Anywhere School event -- but if you missed the product keynote, here's what you need to know about Google's new tools to facilitate learning in 2020. Google Meet has already seen several updates in recent months, and updates that will make the app more accessible to teachers and students are still to come. Soon, meetings will not be able to start without a teacher present.

Dimensionality Reduction in Machine Learning


Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data.
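Principal component analysis (PCA) is the classic example of such a transformation: it projects centered data onto the directions of greatest variance. A minimal sketch using NumPy's SVD (the function name and data here are illustrative, not from any particular library):

```python
import numpy as np

def pca(X, k):
    # Center the data, then project onto the top-k right singular
    # vectors, which span the directions of maximum variance.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # 100 samples in a 10-dimensional space
Z = pca(X, 2)                    # low-dimensional representation
print(Z.shape)                   # (100, 2)
```

The 10-dimensional points are reduced to 2 coordinates each, while the projection preserves as much of the data's variance as any 2-dimensional linear map can.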

Machine Learning and Signal Processing


Signal processing has given us a bag of tools that have been refined and put to very good use over the last fifty years. There is autocorrelation, convolution, Fourier and wavelet transforms, adaptive filtering via Least Mean Squares (LMS) or Recursive Least Squares (RLS), linear estimators, compressed sensing and gradient descent, to name a few. Different tools are used to solve different problems, and sometimes we use a combination of these tools to build a system to process signals. Machine learning, particularly deep neural networks, is much simpler to get started with because the underlying mathematics is fairly straightforward regardless of the network architecture. The complexity and the mystery of neural networks lie in the amount of data they process to get the fascinating results we currently have.
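Two of those tools, convolution and the Fourier transform, are tied together by the convolution theorem: convolution in the time domain equals pointwise multiplication in the frequency domain. A small sketch with NumPy (the signals here are made up for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # input signal
h = np.array([0.5, 0.5])             # simple moving-average filter

# Direct time-domain convolution.
direct = np.convolve(x, h)

# Same result via the frequency domain: zero-pad to the full output
# length, multiply the spectra, and transform back.
n = len(x) + len(h) - 1
fft_conv = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.allclose(direct, fft_conv))  # True
```

For long signals and filters, the FFT route is dramatically faster than direct convolution, which is why it underpins so many practical filtering systems.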

Hyun Kim, CEO and Co-Founder, Superb AI – Interview Series


Hyun Kim is the CEO and Co-Founder of Superb AI, a company that provides a new-generation machine learning data platform to AI teams so that they can build better AI in less time. The Superb AI Suite is an enterprise SaaS platform built to help ML engineers, product teams, researchers and data annotators create efficient training data workflows. What initially attracted you to the field of AI, Data Science and Robotics? As an undergraduate majoring in Biomedical Engineering at Duke, I was passionate about genetics and how we can engineer our DNA to cure diseases or create genetically engineered organisms. I distinctly remember one wet-lab experiment that kept failing for about six months straight. The most frustrating part was that there was a lot of repetitive manual work, and in hindsight that was probably the root of many potential errors.

Primates have evolved larger voice boxes than other mammals to help with social interactions

Daily Mail - Science & tech

Humans and other primates have evolved 'significantly larger' voice boxes than other mammals to help with social interactions, a new study shows. Compared with other mammals such as cats, the voice box, or larynx, of primates such as gorillas and chimpanzees is more than a third larger in relation to their body size. The researchers also found that primates' voice boxes undergo faster rates of evolution, are diverse in function and are more variable in size. Researchers made CT scans of specimens from 55 different species, including primates and other mammals, and produced 3D computer models of their larynges. The research claims to be the first large-scale study into the evolution of the larynx, where tissue vibrations produce sounds for vocal communication.