UC San Diego is creating an outdoor site where it can test-fly unmanned aerial vehicles, which are rapidly coming into common use by everyone from police investigating crime scenes to scientists searching for archaeological remains. The aerodrome will be a net cage 30 feet high and roughly 50 feet long and wide, similar to a facility being built at the University of Michigan, a leader in drone research. San Diego chipmaker Qualcomm gave UC San Diego $200,000 to create the flight center, which is meant to help promote the school's rapidly expanding research in robotic systems. The campus recently announced that it will begin testing driverless vehicles on university roads next year, using golf carts to deliver packages. That research will begin around the time engineers start making extensive use of the aerodrome.
The past decade has witnessed an increasing interest in the use of virtual coaches in healthcare. This paper describes a virtual coach to provide mindfulness meditation training, and the coaching support necessary to begin a regular practice. The coach is implemented as an embodied conversational character, and provides mindfulness training and coaching support via a web-based application. The coach is represented as a female character, capable of showing a variety of affective and conversational expressions, and interacts with the user via a mixed-initiative, text-based, natural language dialogue. The coach adapts both its facial expressions and the dialogue content to the user’s learning needs and motivational state. Findings from a pilot evaluation study indicate that the coach-based training is more effective in helping users establish a regular practice than self-administered training via written and audio materials. The paper concludes with an analysis of the coach features that contribute to these results, discussion of key challenges in affect-adaptive coaching, and plans for future work.
Analysis of spontaneous speech is an important tool for clinical linguists in diagnosing dementia types that affect the language-processing areas. Prosody is affected by several dementia types, most notably Parkinson's disease (PD; degraded voice quality, unstable pitch), Alzheimer's disease (AD; monotonic pitch), and the non-fluent variant of Primary Progressive Aphasia (PPA-NF; hesitant, non-fluent speech). Prosodic features can be computed efficiently by software directly from audio recordings. In this study, we evaluate the performance of an SVM classifier trained on prosodic features only. Restricting the features to prosody alone yields baseline results against which the added effect of (morpho)syntactic variables can be evaluated at a later stage. The goal is to distinguish different dementia types based on the recorded speech. Results show that the classifier can distinguish some dementia types (PPA-NF, AD), but not others (PD, and the semantic variant PPA-SD).
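The classification setup described above can be sketched in a few lines. This is a minimal illustration, not the study's pipeline: the four prosodic features, their typical values, and the synthetic two-class data (AD vs. PPA-NF) are all assumptions chosen to mirror the prosodic profiles the abstract mentions (monotonic pitch for AD; slow, pause-heavy speech for PPA-NF).

```python
# Sketch: an SVM trained on prosodic features only, with synthetic data.
# Feature set and group parameters are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_group(n, pitch_mu, pitch_sd, rate, pause):
    """Draw n speakers with hypothetical prosodic features:
    [mean pitch (Hz), pitch variability, speech rate (syll/s), pause ratio]."""
    return np.column_stack([
        rng.normal(pitch_mu, 10, n),
        rng.normal(pitch_sd, 2, n),
        rng.normal(rate, 0.3, n),
        rng.normal(pause, 0.05, n),
    ])

# AD: monotonic pitch (low variability); PPA-NF: hesitant, slow, pause-heavy.
X = np.vstack([
    make_group(40, 180, 5, 4.0, 0.20),   # AD speakers
    make_group(40, 180, 25, 2.0, 0.45),  # PPA-NF speakers
])
y = np.array([0] * 40 + [1] * 40)

# Standardize features, then fit an RBF-kernel SVM; report 5-fold CV accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

On deliberately well-separated synthetic profiles like these the classifier scores near ceiling; the interesting question in the study is which real dementia pairs remain separable when only prosody is available.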
WASHINGTON D.C. [USA]: According to a recent study, a new artificial intelligence technology can accurately identify rare genetic disorders from a photograph of a patient's face. Named DeepGestalt, the AI technology outperformed clinicians in identifying a range of syndromes in three trials and could add value in personalised care, CNN reported. The study was published in the journal Nature Medicine. According to the study, eight per cent of the population has a disease with a key genetic component, and many such patients may have recognisable facial features. The study further adds that the technology could identify, for example, Angelman syndrome, a disorder affecting the nervous system with characteristic features such as a wide mouth and widely spaced teeth. Speaking about it, Yaron Gurovich, the chief technology officer at FDNA and lead researcher of the study, said, "It demonstrates how one can successfully apply state of the art algorithms, such as deep learning, to a challenging field where the available data is small, unbalanced in terms of available patients per condition, and where the need to support a large amount of conditions is great."
A recent research study could give a voice to those who no longer have one. Scientists used electrodes and artificial intelligence to create a device that can translate brain signals into speech. This technology could help restore the ability to speak in people with brain injuries or with neurological disorders such as epilepsy, Alzheimer's disease, multiple sclerosis, and Parkinson's disease. The new system, being developed in the laboratory of Edward Chang, MD, shows that it is possible to create a synthesized version of a person's voice that can be controlled by the activity of the brain's speech centers. In the future, the authors say, this approach could not only restore fluent communication to individuals with severe speech disabilities but also reproduce some of the musicality of the human voice that conveys the speaker's emotions and personality.