According to a new study by Ben-Gurion University of the Negev (BGU) researchers published in Scientific Reports, a one-time, hour-long session with a plush, seal-like social robot reduced pain and oxytocin levels and increased happiness. The Japanese social robot, PARO, emits seal-like sounds and moves its head and flippers in response to being spoken to and touched. Previous studies have found that human-to-human contact bolsters mood and reduces pain. Dr. Shelly Levy-Tzedek of the BGU Department of Physical Therapy and her team investigated whether a furry social robot could induce similar effects when normal human-to-human contact is not available, and found that a single 60-minute interaction with PARO improved mood and reduced both mild and severe pain.
There is nothing quite like a hug or a touch to brighten one's mood. In fact, human-to-human contact has been shown to increase positive thinking, build trust, reduce social anxiety and stress, and much more. But human-to-human contact is scarce during the coronavirus pandemic, leaving many people sad and lonely. The PARO robot is an advanced robot developed by AIST, a leading Japanese industrial automation pioneer. It allows the documented benefits of animal therapy to be administered to patients in environments such as hospitals and extended care facilities where live animals present treatment or logistical difficulties.
A breakthrough that led to the creation of new neurons in mice could be used to transplant brain cells in Parkinson's patients and cure them of the disease. University of California San Diego School of Medicine researchers created neurons in mice using a new, much simpler method that involved rewriting genes. Parkinson's disease is characterised by a loss of dopaminergic neurons in a region of the brain responsible for reward and movement - replacing those cells could help to reduce or even reverse the symptoms of the degenerative disease. A small study involving mice with Parkinson's saw those given the 'new neuron treatment' return to normal within three months and stay disease-free for life. The researchers said it could one day be used to 'cure' any disease caused by the loss of neurons but warned this was a long way off and hadn't been tested. Pictured: mouse cells (green) before reprogramming (left) and neurons (red) induced from those cells after reprogramming (right).
Kaia Health, a digital therapeutics startup which uses computer vision technology for real-time posture tracking via the smartphone camera to deliver human-hands-free physiotherapy, has closed a $26 million Series B funding round. The funding was led by Optum Ventures, Idinvest and capital300 with participation from existing investors Balderton Capital and Heartcore Capital, in addition to Symphony Ventures -- the latter in an "investment partnership" with world-famous golfer Rory McIlroy, who knows a thing or two about chronic pain. Back in January 2019, when Kaia announced a $10M Series A, its business was split 80:20 between Europe and the US. Now, says co-founder and CEO Konstantin Mehl -- speaking to TechCrunch by Zoom chat from New York, where he's recently relocated -- it's flipped the other way. Part of the new funding will thus go on building out its commercial team in the US -- now its main market.
Amazon today announced the general availability of Multi-Capability Skills for Alexa, a way to combine smart home and custom Alexa apps into single, unified voice apps. Starting this week, developers can publish and maintain an Alexa app that enables both internet of things and third-party features for their devices, extending built-in smart home commands with custom voice interaction models to support nearly any feature without forcing customers to enable and invoke two separate apps. Before the advent of Multi-Capability Skills, Alexa developers had to publish and maintain multiple apps to enable custom features: a smart home app to leverage built-in smart home capabilities and a custom app to support capabilities not included in the Alexa smart home API. Now, they don't -- and customers don't have to remember two different app names. In this way, Multi-Capability Skills make it easier for developers to create better Alexa experiences.
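Under the skill manifest format used by the Alexa Skills Kit, combining both capability types amounts to declaring two API sections in one skill. The sketch below expresses this as a Python dict; the overall shape follows the public skill manifest, but the specific field values and helper function are illustrative assumptions, not a verified schema.

```python
# Hypothetical sketch of a combined (multi-capability) skill manifest,
# expressed as a Python dict. Endpoint ARNs are placeholders.
multi_capability_manifest = {
    "manifest": {
        "publishingInformation": {
            "locales": {"en-US": {"name": "Example Smart Light"}}
        },
        "apis": {
            # Built-in smart home capabilities (on/off, brightness, etc.)
            "smartHome": {
                "endpoint": {"uri": "arn:aws:lambda:example:smart-home-handler"}
            },
            # Custom interaction model for features the smart home API lacks
            "custom": {
                "endpoint": {"uri": "arn:aws:lambda:example:custom-handler"}
            },
        },
    }
}

def is_multi_capability(skill: dict) -> bool:
    """A skill is 'multi-capability' if it declares both API types."""
    apis = skill["manifest"]["apis"]
    return "smartHome" in apis and "custom" in apis
```

Previously, the two `apis` entries above would have lived in two separately published skills, each with its own invocation name; merging them is what spares customers from remembering two names.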
To develop an automated model for staging knee osteoarthritis severity from radiographs and to compare its performance to that of musculoskeletal radiologists. Radiographs from the Osteoarthritis Initiative staged by a radiologist committee using the Kellgren-Lawrence (KL) system were used. Before using the images as input to a convolutional neural network model, they were standardized and augmented automatically. The model was trained with 32 116 images, tuned with 4074 images, evaluated with a 4090-image test set, and compared to two individual radiologists using a 50-image test subset. Saliency maps were generated to reveal features used by the model to determine KL grades.
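The abstract states that images were standardized and augmented automatically before being used as CNN input, but does not give the exact steps. A minimal NumPy sketch of what such preprocessing might look like follows; the zero-mean/unit-variance standardization and horizontal-flip augmentation are common-practice assumptions, not the paper's documented pipeline.

```python
import numpy as np

def standardize(img: np.ndarray) -> np.ndarray:
    """Rescale an image to zero mean and unit variance, a common
    normalization applied before feeding images to a CNN."""
    return (img - img.mean()) / (img.std() + 1e-8)

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Toy augmentation: random horizontal flip. Knee radiographs are
    roughly mirror-symmetric between left and right knees, so flips
    are a plausible (assumed) augmentation choice."""
    return img[:, ::-1].copy() if rng.random() < 0.5 else img
```

In a real pipeline these functions would be applied on the fly during training over the 32 116-image training set.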
A new research collaboration between researchers at the University of Alberta and the University of Glasgow is exploring whether interaction with an AI-enhanced, socially intelligent robot can effectively distract children during painful clinical procedures, reducing their pain and distress. "Pain is much more than just a physical response; we also want to manage a child's stress, anxiety and distress," said U of A medical researcher and pediatric emergency physician Samina Ali. "We want to know if integrating a robot into the clinical setting can create a more positive, meaningful and less traumatic experience for children and their families." The three-year project builds on a series of smaller studies, supported by funding from the Stollery Children's Hospital Foundation, that used programmable humanoid robots named MEDi to deliver cognitive behavioural therapy-based interventions to children as they went through procedures involving needles. In those studies, the MEDi robot was remotely operated and followed a limited script.
The severity of knee osteoarthritis is graded using the 5-point Kellgren-Lawrence (KL) scale where healthy knees are assigned grade 0, and the subsequent grades 1-4 represent increasing severity of the affliction. Although several methods have been proposed in recent years to develop models that can automatically predict the KL grade from a given radiograph, most models have been developed and evaluated on datasets not sourced from India. These models fail to perform well on the radiographs of Indian patients. In this paper, we propose a novel method using convolutional neural networks to automatically grade knee radiographs on the KL scale. Our method works in two connected stages: in the first stage, an object detection model segments individual knees from the rest of the image; in the second stage, a regression model automatically grades each knee separately on the KL scale. We train our model using the publicly available Osteoarthritis Initiative (OAI) dataset and demonstrate that fine-tuning the model before evaluating it on a dataset from a private hospital significantly improves the mean absolute error from 1.09 (95% CI: 1.03-1.15) to 0.28 (95% CI: 0.25-0.32). Additionally, we compare classification and regression models built for the same task and demonstrate that regression outperforms classification.
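Since the second stage is a regression model but KL grades are discrete (0-4), the continuous output must at some point be mapped to a grade, and the reported metric is mean absolute error. The helpers below sketch both steps; the names and the round-and-clip mapping are our illustration, not necessarily the paper's exact procedure.

```python
def to_kl_grade(pred: float) -> int:
    """Map a continuous regression output to a discrete KL grade in {0,...,4}
    by rounding and clipping (an assumed, common post-processing step)."""
    return min(4, max(0, round(pred)))

def mean_absolute_error(preds, grades) -> float:
    """MAE between predicted and reference KL grades, the metric the
    paper reports (1.09 before fine-tuning vs. 0.28 after)."""
    return sum(abs(p - g) for p, g in zip(preds, grades)) / len(preds)
```

On this scale, the reported improvement from 1.09 to 0.28 means predictions moved from being off by roughly one full KL grade on average to under a third of a grade.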
Individually, the Internet of Things (IoT) and Artificial Intelligence (AI) are powerful technologies. When you combine AI and IoT, you get AIoT: the Artificial Intelligence of Things. You can think of Internet of Things devices as the digital nervous system, while artificial intelligence is the brain of the system. To fully understand AIoT, you must start with the Internet of Things. When "things" such as wearable devices, refrigerators, digital assistants, sensors and other equipment are connected to the internet, can be recognized by other devices, and can collect and process data, you have the Internet of Things. Artificial intelligence is when a system can complete a set of tasks or learn from data in a way that seems intelligent.
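The nervous-system/brain split can be made concrete with a toy sketch: a simulated IoT sensor streams readings, and a minimal "AI" layer flags anomalies in them. Everything here (the sensor class, the z-score rule) is an illustrative assumption, not a reference AIoT architecture.

```python
from statistics import mean, stdev

class ThermostatSensor:
    """Toy IoT 'thing': yields temperature readings (simulated here;
    a real device would report over the network)."""
    def __init__(self, readings):
        self.readings = readings

    def stream(self):
        yield from self.readings

def detect_anomalies(values, z: float = 2.0):
    """Minimal 'brain' layer: flag readings more than z sample standard
    deviations from the mean of the batch."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > z * sigma]
```

For example, feeding the detector a batch of room temperatures with one spike isolates the spike:

```python
readings = list(ThermostatSensor([20, 21, 20, 19, 20, 21, 35]).stream())
detect_anomalies(readings)  # -> [35]
```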
In most real-world applications, it is seldom the case that a given observable evolves independently of its environment. In social networks, users' behavior results from the people they interact with, news in their feed, or trending topics. In natural language, the meaning of phrases emerges from the combination of words. In general medicine, a diagnosis is established on the basis of the interaction of symptoms. Here, we propose a new model, the Interactive Mixed Membership Stochastic Block Model (IMMSBM), which investigates the role of interactions between entities (hashtags, words, memes, etc.) and quantifies their importance within the aforementioned corpora. We find that interactions play an important role in those corpora. In inference tasks, taking them into account leads to average relative changes with respect to non-interactive models of up to 150% in the probability of an outcome. Furthermore, their role greatly improves the predictive power of the model. Our findings suggest that neglecting interactions when modeling real-world phenomena might lead to incorrect conclusions being drawn.
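In broad strokes, a mixed membership block model with pairwise interactions assigns each entity $i$ a membership vector $\theta_i$ over $K$ latent blocks and models the probability of an outcome $o$ for an interacting pair $(i, j)$ by summing over the blocks the pair may occupy. The following is a generic sketch of that form, with our notation, not necessarily the paper's exact formulation:

$$
P(o \mid i, j) \;=\; \sum_{k=1}^{K} \sum_{l=1}^{K} \theta_{i,k}\,\theta_{j,l}\,B_{k,l}(o)
$$

where $B_{k,l}(o)$ is the probability of outcome $o$ given that the pair belongs to blocks $k$ and $l$. A non-interactive baseline would instead model each entity's contribution separately, which is what makes the up-to-150% relative change in outcome probabilities attributable to the interaction terms.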