Tracking the health of underwater species is critical to understanding the effects of climate change on marine ecosystems. Unfortunately, it's a time-consuming process -- biologists conduct surveys with echosounders, which use sonar to measure water and object depth, and then manually interpret the resulting 2D echograms. These interpretations are prone to error and require pricey software like Echoview. Fortunately, a team of research scientists from the University of Victoria in Canada is developing a machine learning method for detecting specific biological targets in acoustic survey data. In a preprint paper ("A Deep Learning based Framework for the Detection of Schools of Herring in Echograms"), they say that their approach -- which they tested on schools of herring -- could measurably improve the accuracy of environmental monitoring.
Computers have already beaten us at chess, Jeopardy and Go, the ancient board game from Asia. And now, in the raging war with machines, human beings have lost yet another battle -- over typing. It turns out, according to a new study, that voice recognition software has improved to the point where it is significantly faster and more accurate at producing text on a mobile device than we are at typing on its keyboard. The study ran tests in English and Mandarin Chinese. Baidu chief scientist Andrew Ng says this should not feel like defeat.
"Siri, do I have Parkinson's?" That might sound flippant, but new research shows that it's possible to detect Parkinson's symptoms simply by using algorithms to spot changes in voice recordings. Parkinson's, a degenerative disorder of the central nervous system, is usually diagnosed through analysis of symptoms along with expensive medical imaging to rule out other conditions -- there is currently no definitive test for it. Max Little, from the University of Oxford, has different ideas. He has been developing software that learns to detect differences in voice patterns, in order to spot distinctive clues associated with Parkinson's.
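The article doesn't detail which voice features Little's software uses, but a standard family of dysphonia measures in this line of research is jitter -- the cycle-to-cycle variation in the pitch period of sustained phonation. A minimal sketch of local jitter, with invented example period values:

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive
    pitch periods, as a percentage of the mean period."""
    periods = np.asarray(periods, dtype=float)
    diffs = np.abs(np.diff(periods))
    return 100.0 * diffs.mean() / periods.mean()

# A steady voice: pitch periods (in ms) barely vary cycle to cycle.
steady = [8.00, 8.01, 7.99, 8.00, 8.02, 7.98]
# A perturbed voice: larger cycle-to-cycle variation.
perturbed = [8.0, 8.6, 7.5, 8.4, 7.6, 8.5]

print(jitter_percent(steady))     # small value (healthy-sounding)
print(jitter_percent(perturbed))  # much larger value
```

In practice a system would extract pitch periods from the recording itself and feed many such measures to a learned classifier, rather than thresholding one number.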
Alghowinem, Sharifa M. (Australian National University and Ministry of Higher Education, Kingdom of Saudi Arabia) | Goecke, Roland (Australian National University and University of Canberra) | Wagner, Michael (University of Canberra) | Epps, Julien (University of New South Wales) | Breakspear, Michael (University of New South Wales and Queensland Institute of Medical Research) | Parker, Gordon (University of New South Wales)
Depression and other mood disorders are common and disabling conditions. We present work towards an objective diagnostic aid supporting clinicians, using affective sensing technology with a focus on acoustic and statistical features from spontaneous speech. This work investigates differences in expressing positive and negative emotions between depressed and healthy control subjects, as well as whether an initial gender classification increases the recognition rate. To this end, spontaneous speech from interviews of 30 depressed subjects and 30 controls was analysed, with a focus on questions eliciting positive and negative emotions. Using HMMs with GMMs for classification with 30-fold cross-validation, we found that MFCC, energy and intensity features gave the highest recognition rates when female and male subjects were analysed together. When the dataset was first split by gender, log energy and shimmer features, respectively, gave the highest recognition rates for females, while loudness did for males. Overall, correct recognition rates from acoustic features were higher for depressed female subjects than for male subjects. Using statistical features, we found that response time and average syllable duration were longer in depressed subjects, while interaction involvement and articulation rate were higher in control subjects.
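The classification step described in the abstract (HMMs with GMM emissions over acoustic features) can be illustrated in miniature. The sketch below is not the paper's pipeline: it replaces the HMM/GMM with a single diagonal-covariance Gaussian per class, and all feature values, class labels and dimensions are synthetic stand-ins. The point is the maximum-likelihood decision rule that underlies GMM-based classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one 2-D acoustic feature vector per
# utterance (e.g. mean energy/intensity); real systems model whole
# frame sequences with HMMs.
depressed = rng.normal(loc=[-1.0, 0.5], scale=0.4, size=(30, 2))
control   = rng.normal(loc=[1.0, -0.5], scale=0.4, size=(30, 2))

def fit_gaussian(X):
    # Diagonal-covariance Gaussian: the building block of a GMM.
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mean, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

params = {"depressed": fit_gaussian(depressed),
          "control": fit_gaussian(control)}

def classify(x):
    # Assign the class whose model gives the test vector the
    # highest log-likelihood.
    return max(params, key=lambda c: log_likelihood(x, *params[c]))

print(classify(np.array([-0.9, 0.6])))
print(classify(np.array([1.1, -0.4])))
```

A full GMM would mix several such Gaussians per class, and the paper's 30-fold cross-validation would repeat the fit/classify cycle over held-out subjects.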
GENEVA (Reuters) - China and the United States are ahead in the global competition to dominate artificial intelligence (AI), according to a study by the U.N. World Intellectual Property Organization (WIPO) published on Thursday. The study found U.S. tech giant IBM had by far the biggest AI patent portfolio, with 8,920 patents, ahead of Microsoft with 5,930 and a group of mainly Japanese tech conglomerates. China accounted for 17 of the top 20 academic institutions involved in patenting AI and was particularly strong in the fast-growing area of "deep learning" - a machine-learning technique that includes speech recognition systems. "The U.S. and China obviously have stolen a lead. They're out in front in this area, in terms of numbers of applications, and in scientific publications," WIPO Director-General Francis Gurry told a news conference.