Artificial intelligence (AI) is responsible for driving autonomous vehicles, powering intelligent assistants such as Alexa and Siri, and placing annoying advertisements on web pages. AI has also improved many aspects of pediatric medicine, and played an important role in the COVID-19 pandemic. Voice recognition/dictation software is an example of AI that is currently used in pediatric practice. Today, Dragon Medical One from Nuance Communications, the most widely used voice recognition medical software, boasts a vocabulary of 300,000 words and integrates vocabularies for 90 medical specialties. By integrating deep learning (DL), the software adapts to the nuances of the user's speech patterns and improves over time, achieving 99% accuracy.1
When my daughter Ella was little, I'd often see various tech products on my Facebook feed purporting to calm parents who were anxious about their baby's sleep. Next to feeding, there's likely no more anxiety-inducing part of the day than a child's bedtime – the fear they're not on a schedule, or that once they get on one, it's the wrong one, or that once they're actually asleep, they might never wake up. I dismissed these products out of hand – we were sharing a room with Ella, and I was aware of her every snort and snuffle, though in retrospect, it seems obvious that we could have had her sleeping through the night a little earlier … if we'd only settled on a sleep training method. Whichever one you pick neatly slots you into a parental taxonomy. Are you an adherent of Dr Richard Ferber's method, popularized in the 1980s, which encourages you to let your child "cry it out" until you're popping Xanax like popcorn?
Apple has paused plans to scan devices for child abuse and exploitation material after the tool prompted concern among users and privacy groups. Announced last month, the new safety features were intended for inclusion in iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. The first was a feature for monitoring the Messages application, using client-side machine learning to scan images and alert when sexually explicit material is sent, asking the user whether or not they want to view it. "As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it," the company explained. The second batch of changes affected Siri and Search, with updates included to provide additional information for parents and children to warn them when they stumbled into "unsafe" situations, as well as to "intervene" if a search for Child Sexual Abuse Material (CSAM) was performed by a user.
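The Messages flow Apple described can be pictured as a simple client-side decision: an on-device classifier scores an image, a high score triggers a blur-and-warn step, and for child accounts viewing anyway may notify a parent. The sketch below is a minimal illustration of that decision logic only; the classifier score, threshold, field names, and `Account` type are stand-in assumptions, not Apple's actual API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical account flags standing in for Apple's Family Sharing settings.
    is_child: bool
    parent_notifications_enabled: bool

def handle_incoming_image(score: float, account: Account, threshold: float = 0.9) -> dict:
    """Sketch of the described client-side flow: a stand-in classifier score
    above the threshold blurs the image and warns the user; for child
    accounts, choosing to view it anyway can trigger a parent notification."""
    if score < threshold:
        return {"action": "show"}
    result = {"action": "blur_and_warn"}
    if account.is_child and account.parent_notifications_enabled:
        result["on_view"] = "notify_parent"
    return result
```

The key design point reflected here is that everything happens on the device: no image or score leaves the phone, and the only outward effect is the optional parent notification the user is warned about in advance.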
Researchers at the University of South Australia have designed a computer vision system that can automatically detect a tiny baby's face in a hospital bed and remotely monitor its vital signs from a digital camera with the same accuracy as an electrocardiogram machine. Using artificial intelligence-based software to detect human faces is now common with adults, but this is the first time that researchers have developed software to reliably detect a premature baby's face and skin when covered in tubes, clothing, and undergoing phototherapy. Engineering researchers and a neonatal critical care specialist from UniSA remotely monitored heart and respiratory rates of seven infants in the Neonatal Intensive Care Unit (NICU) at Flinders Medical Centre in Adelaide, using a digital camera. One of the lead researchers, UniSA Professor Javaan Chahl, stated that babies in neonatal intensive care can be extra difficult for computers to recognise because their faces and bodies are obscured by tubes and other medical equipment. Many premature babies are being treated with phototherapy for jaundice, so they are under bright blue lights, which also makes it challenging for computer vision systems.
Apple has indefinitely delayed the introduction of its new anti-child abuse features, following widespread outcry from privacy and security campaigners. The company had said that the two new tools – which attempt to detect when children are being sent inappropriate photos, and when people have child sexual abuse material on their devices – were necessary as a way to stop the grooming and exploitation of children. But campaigners argued that they increased the privacy risks for other users of the phone. Critics said that the tools could be used to scan for other kinds of material, and that they undermined Apple's public commitment to privacy as a human right. Now Apple said that it will indefinitely delay those features, with a view to improving them before they are released. "Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of child sexual abuse material," Apple said.
Apple said it will delay the release of child safety features which included scanning phones in the U.S. for images of child abuse. In a statement emailed to USA TODAY Friday, Apple said it would take time to consider improvements to the features, which had been criticized for potentially harming users' privacy. "Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features," said the company. Introduced in August, the child safety features were aimed at limiting the spread of Child Sexual Abuse Material (CSAM). The features included tools in the Messages app warning kids and parents when they send or receive sexually explicit photos.
"Babies in neonatal intensive care can be extra difficult for computers to recognise because their faces and bodies are obscured by tubes and other medical equipment," says UniSA Professor Javaan Chahl, one of the lead researchers. "Many premature babies are being treated with phototherapy for jaundice, so they are under bright blue lights, which also makes it challenging for computer vision systems." The 'baby detector' was developed using a dataset of videos of babies in the NICU to reliably detect their skin tone and faces. Vital sign readings matched those of an electrocardiogram (ECG) and in some cases appeared to outperform the conventional electrodes, endorsing the value of non-contact monitoring of pre-term babies in intensive care.
The cameras automatically detected the faces of infants under normal light and under blue light. Researchers at the University of South Australia (UniSA) have developed an AI-based system that can be embedded into digital cameras to help doctors automatically detect a premature baby's face and skin, as well as remotely monitor their vital signs while in intensive care. The research, published in the Journal of Imaging, is part of an ongoing project at UniSA that aims to replace contact-based electrical sensors, which require adhesive pads that can cause skin tearing and potential infections, with non-contact video cameras to monitor premature babies. As part of the research, UniSA researchers used high-resolution cameras to remotely monitor the heart and respiratory rates of seven infants at Adelaide's Flinders Medical Centre Neonatal Intensive Care Unit (NICU). The infants were filmed using the cameras at close range so that vital physiological data, such as heart beats and subtle body movements, could be captured using the AI processing techniques.
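The basic idea behind camera-based vital sign monitoring is that a heartbeat causes tiny periodic changes in skin colour, which show up as a dominant frequency in the pixel intensities of a detected skin region over time. The sketch below is a minimal illustration of that general principle, not the UniSA team's actual pipeline: it takes a per-frame mean green-channel value for an assumed already-detected skin region and reads the heart rate off the frequency spectrum. The function name, the 0.7–4 Hz plausibility band, and the synthetic demo signal are all illustrative assumptions.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (bpm) from the mean green-channel intensity of a
    skin region, sampled once per video frame at the given frame rate."""
    signal = np.asarray(green_means, dtype=float)
    signal -= signal.mean()                       # remove the DC baseline
    spectrum = np.abs(np.fft.rfft(signal))        # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict the search to physiologically plausible pulse rates (42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# Synthetic demo: a 2 Hz (120 bpm) pulse buried in noise, sampled at 30 fps.
fps = 30
t = np.arange(0, 20, 1 / fps)
rng = np.random.default_rng(0)
demo = 0.5 * np.sin(2 * np.pi * 2.0 * t) + 0.05 * rng.standard_normal(len(t))
print(estimate_heart_rate(demo, fps))  # close to 120 bpm
```

In a real NICU setting the hard part is everything this sketch assumes away: reliably finding the baby's skin under tubes, wraps, and blue phototherapy light, which is exactly the detection problem the UniSA researchers tackled.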
Apple has released yet more details on its new photo-scanning features, as the controversy over whether they should be added to the iPhone continues. Earlier this month, Apple announced that it would be adding three new features to iOS, all of which are intended to fight against child sexual exploitation and the distribution of abuse imagery. One adds new information to Siri and search, another checks messages sent to children to see if they might contain inappropriate images, and the third compares photos on an iPhone with a database of known child sexual abuse material (CSAM) and alerts Apple if it is found. It is the latter of those three features that has proven especially controversial. Critics say that the feature is in contravention of Apple's commitment to privacy, and that it could in the future be used to scan for other kinds of images, such as political pictures on the phones of people living in authoritarian regimes.
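The matching step at the heart of the controversy compares a fingerprint ("hash") of each photo against fingerprints of known CSAM, tolerating small edits rather than requiring exact file equality. As a rough illustration of that idea only, the sketch below uses a simple "average hash" and Hamming-distance matching; Apple's actual system uses a learned perceptual hash (NeuralHash) combined with cryptographic private set intersection and a reporting threshold, none of which is reproduced here. All names and the example threshold are assumptions.

```python
import numpy as np

def average_hash(image, hash_size=8):
    """Toy perceptual hash: block-average a grayscale image down to
    hash_size x hash_size and record which cells exceed the mean."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[: bh * hash_size, : bw * hash_size]
    small = small.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))

# A 'database' of hashes of known images. A slightly edited copy still
# matches because only a few bits flip, so we match within a tolerance.
rng = np.random.default_rng(0)
known = rng.random((64, 64)) * 255
db = [average_hash(known)]
edited = known + np.random.default_rng(1).normal(0, 1, known.shape)
query = average_hash(edited)
match = any(hamming(query, h) <= 8 for h in db)
```

The tolerance is what critics focus on: because matching is fuzzy and the database contents are opaque hashes, users cannot verify what the system is actually looking for, which is why opponents argue it could later be pointed at other kinds of images.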