The boom of mobile devices and cloud services has led to an explosion of personal photo and video data. However, because this content typically lacks user-generated metadata such as titles or descriptions, a user often needs many swipes to find a particular video on a phone. To address this problem, we present an innovative idea called Visual Memory QA, which allows a user not only to search but also to ask questions about her daily life as captured in personal videos. The proposed system automatically analyzes the content of personal videos without user-generated metadata and offers a conversational interface to accept and answer questions. To the best of our knowledge, it is the first system to answer personal questions grounded in personal photos or videos. Example questions include "What was the last time we went hiking in the forest near San Francisco?"; "Did we have pizza last week?"; "With whom did I have dinner at AAAI 2015?".
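The abstract above does not describe its implementation, but the core idea of answering questions over metadata-free videos can be sketched as querying an index of automatically detected visual concepts. The following is a minimal, hypothetical illustration; the index layout, concept sets, and helper functions are assumptions for this example, not the proposed system's actual design.

```python
from datetime import date, timedelta

# Hypothetical index: each personal video is reduced to a capture date
# and a set of automatically detected visual concepts (no user metadata).
video_index = [
    (date(2023, 5, 1), {"pizza", "dinner", "friends"}),
    (date(2023, 5, 6), {"hiking", "forest"}),
    (date(2023, 4, 10), {"beach", "sunset"}),
]

def had_concept_since(index, concept, since):
    """Return capture dates of videos showing `concept` on or after `since`."""
    return [d for d, concepts in index if concept in concepts and d >= since]

def last_time(index, concept):
    """Return the most recent capture date of a video showing `concept`."""
    dates = [d for d, concepts in index if concept in concepts]
    return max(dates) if dates else None

# "Did we have pizza last week?" (relative to 2023-05-07)
recent = had_concept_since(video_index, "pizza",
                           date(2023, 5, 7) - timedelta(days=7))
print(bool(recent))                       # True: pizza seen on 2023-05-01
print(last_time(video_index, "hiking"))   # 2023-05-06
```

A real system would map the natural-language question to such a structured query and populate the index with a visual recognition model; this sketch only shows the retrieval step.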
YouTube presents an unprecedented opportunity to explore how machine learning methods can improve healthcare information dissemination. We propose an interdisciplinary lens that synthesizes machine learning methods with healthcare informatics themes to address a critical need: a scalable algorithmic solution for evaluating videos from a health literacy and patient education perspective. We develop a deep learning method to assess the level of medical knowledge encoded in YouTube videos. Preliminary results suggest that we can extract medical knowledge from YouTube videos and classify videos according to the embedded knowledge with satisfactory performance. Deep learning methods show great promise in knowledge extraction, natural language understanding, and image classification, especially in an era of patient-centric care and precision medicine.
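The abstract does not detail its model, but the underlying task, scoring a video for embedded medical knowledge, can be illustrated with a deliberately simple baseline. The term list, threshold, and function names below are assumptions for this sketch; the paper's actual deep learning method is far more sophisticated.

```python
# Toy baseline: score a video transcript by the density of medical
# terminology as a crude proxy for encoded medical knowledge.
# The term list and threshold are illustrative, not from the paper.
MEDICAL_TERMS = {"diagnosis", "hypertension", "insulin", "dosage",
                 "symptom", "clinical", "therapy", "prognosis"}

def knowledge_score(transcript):
    """Fraction of transcript tokens that are medical terms."""
    tokens = transcript.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,") in MEDICAL_TERMS)
    return hits / len(tokens)

def classify(transcript, threshold=0.05):
    """Label a transcript 'medical' or 'general' by term density."""
    return "medical" if knowledge_score(transcript) >= threshold else "general"

print(classify("the clinical diagnosis suggests adjusting insulin dosage"))
# -> medical
print(classify("today we review the best camera phones of the year"))
# -> general
```

A learned classifier would replace the hand-picked lexicon with representations trained on labeled videos, but the input/output contract, transcript in, knowledge label out, is the same.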
"Despite their potential, approaches from artificial intelligence are still rarely used in addressing the biodiversity crisis," he says. Many social media platforms provide an application programming interface that allows researchers to access user-generated text, images, and videos, as well as the accompanying metadata, such as where and when the content was uploaded and the connections between users. Assistant professor Tuomo Hiippala highlights how machine learning methods can be used to process the language of social media posts. "Natural language processing can be used to infer the meaning of a sentence and to classify the sentiment of social media users towards illegal wildlife trade. Most importantly, machine learning algorithms can process combinations of verbal, visual and audio-visual content," Hiippala says.
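The sentiment classification Hiippala mentions can be illustrated with a minimal lexicon-based scorer. This is a toy stand-in for the learned NLP models the article refers to; the word lists and labels below are assumptions made for this example only.

```python
# Toy lexicon-based sentiment scoring for social media posts about
# wildlife; real systems would use trained language models instead.
POSITIVE = {"support", "protect", "love", "conserve"}
NEGATIVE = {"illegal", "poaching", "trade", "cruel"}

def sentiment(post):
    """Return 'positive', 'negative', or 'neutral' for a short post."""
    tokens = [t.strip(".,!?").lower() for t in post.split()]
    score = (sum(t in POSITIVE for t in tokens)
             - sum(t in NEGATIVE for t in tokens))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("We must protect and conserve endangered species!"))
# -> positive
print(sentiment("Illegal wildlife trade is cruel."))
# -> negative
```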
IEEE is essential to the global technical community and to technical professionals everywhere, and we are universally recognized for the contributions of technology and of technical professionals in improving global conditions. IEEE Social Media Guidelines: IEEE social media pages have been created for those interested in technology and innovation to engage in discussion about IEEE, engineering, and technology. User-generated content, including videos, photos, wall posts, and comments, is not meant to reflect the opinions of IEEE, its societies, affiliates, employees, or shareholders. IEEE does not endorse the opinions expressed by users of IEEE social media presences. Our pages exist for our community, and we encourage all to participate by leaving comments, opinions, photos, and videos.