Dai, Hong-Jie (Academia Sinica and National Tsing Hua University) | Tsai, Wei-Chi (Yuan Ze University) | Tsai, Richard Tzong-Han (Yuan Ze University) | Hsu, Wen-Lian (Academia Sinica and National Tsing Hua University)
In this paper, we describe how we integrated an artificial intelligence (AI) system into the PubMed search website using augmented browsing technology. Our system dynamically enriches the PubMed search results displayed in a user’s browser with semantic annotations provided by several natural language processing (NLP) subsystems, including a sentence splitter, a part-of-speech tagger, a named entity recognizer, a section categorizer and a gene normalizer (GN). After our system is installed, the PubMed search results page is modified on the fly to categorize sections and provide additional information on genes and gene products identified by our NLP subsystems. GN itself involves three main steps: candidate ID matching, false positive filtering and disambiguation, which are highly dependent on one another. We propose a joint model using a Markov logic network (MLN) to capture the dependencies found in GN. The experimental results show that our joint model outperforms a baseline system that executes the three steps separately. The developed system is available at https://sites.google.com/site/pubmedannotationtool4ijcai/home.
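The three GN steps can be illustrated with a minimal sequential baseline. This sketch is purely illustrative: the function names, the toy lexicon, the context-cue filter and the disambiguation rule are assumptions for demonstration, not the paper's actual MLN-based joint model.

```python
# Hedged sketch of the three gene-normalization (GN) steps run sequentially,
# i.e. the kind of pipelined baseline the joint MLN model is compared against.

# Toy gene lexicon: surface form -> candidate database IDs (illustrative).
LEXICON = {
    "p53": ["GeneID:7157", "GeneID:22059"],   # human TP53, mouse Trp53
    "insulin": ["GeneID:3630"],
}

def candidate_id_matching(mention):
    """Step 1: look up candidate gene IDs for a textual mention."""
    return LEXICON.get(mention.lower(), [])

def filter_false_positives(candidates, context):
    """Step 2: drop candidates unlikely to be genuine gene mentions.
    Here, a trivial rule: keep candidates only if gene-related cue words
    appear in the surrounding context."""
    cues = {"gene", "protein", "expression", "mutation"}
    return candidates if cues & set(context.lower().split()) else []

def disambiguate(candidates, context):
    """Step 3: pick a single ID; here, prefer the human ID when the
    context mentions 'human' (illustrative heuristic only)."""
    if not candidates:
        return None
    if "human" in context.lower() and "GeneID:7157" in candidates:
        return "GeneID:7157"
    return candidates[0]

def normalize(mention, context):
    cands = candidate_id_matching(mention)
    cands = filter_false_positives(cands, context)
    return disambiguate(cands, context)

print(normalize("p53", "the human p53 gene is mutated in many cancers"))
# -> GeneID:7157
```

Because each step here consumes only the previous step's output, an error in filtering cannot be corrected by disambiguation; modeling the steps jointly, as with an MLN, lets evidence flow in both directions.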
Most people use Google's search-by-image feature to either look for copyright infringement, or for shopping. See some shoes you like on a frenemy's Instagram? Search will pull up all the matching images on the web, including from sites that will sell you the same pair. In order to do that, Google's computer vision algorithms had to be trained to extract identifying features like colors, textures, and shapes from a vast catalogue of images. Luis Ceze, a computer scientist at the University of Washington, wants to encode that same process directly in DNA, making the molecules themselves carry out that computer vision work. And he wants to do it using your photos.
Planning in partially observable environments remains a challenging problem, despite significant recent advances in offline approximation techniques. A few online methods have also been proposed recently and proven to be remarkably scalable, but without the theoretical guarantees of their offline counterparts. Thus it seems natural to try to unify offline and online techniques, preserving the theoretical properties of the former and exploiting the scalability of the latter. In this paper, we provide theoretical guarantees on an anytime algorithm for POMDPs which aims to reduce the error made by approximate offline value iteration algorithms through the use of an efficient online search procedure. The algorithm uses search heuristics based on an error analysis of lookahead search to guide the online search towards reachable beliefs with the most potential to reduce error.
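The core idea of error-guided online search can be sketched as a best-first expansion over reachable beliefs. Everything below is an assumption made for illustration: the node structure, the toy bound values, and the heuristic (reachability weight times offline bound gap) stand in for the paper's actual error analysis and are not its exact algorithm.

```python
import heapq

# Hedged sketch: anytime best-first search that repeatedly expands the
# reachable belief whose offline bound gap (upper - lower value bound)
# weighted by reachability offers the most potential error reduction.

class BeliefNode:
    def __init__(self, name, lower, upper, prob=1.0):
        self.name = name
        self.lower = lower      # offline lower bound on the value
        self.upper = upper      # offline upper bound on the value
        self.prob = prob        # probability of reaching this belief

def error_heuristic(node):
    # Potential to reduce error at the root: reachability times bound gap.
    return node.prob * (node.upper - node.lower)

def anytime_search(root, expand, max_expansions):
    """Expand fringe nodes in decreasing heuristic order; the loop can be
    interrupted at any time, which is what makes the procedure anytime."""
    fringe = [(-error_heuristic(root), 0, root)]
    order, tick = [], 0
    for _ in range(max_expansions):
        if not fringe:
            break
        _, _, node = heapq.heappop(fringe)
        order.append(node.name)
        for child in expand(node):
            tick += 1  # tie-breaker so heapq never compares nodes
            heapq.heappush(fringe, (-error_heuristic(child), tick, child))
    return order

# Toy one-step belief tree with illustrative bounds.
def expand(node):
    if node.name == "b0":
        return [BeliefNode("b1", 0.0, 4.0, prob=0.7),
                BeliefNode("b2", 0.0, 1.0, prob=0.3)]
    return []

print(anytime_search(BeliefNode("b0", 0.0, 10.0), expand, max_expansions=2))
# -> ['b0', 'b1']  (b1's weighted gap 2.8 beats b2's 0.3)
```

Each expansion tightens the bounds along one branch, so the longer the online search is allowed to run before acting, the smaller the residual error of the offline value function at the current belief.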
At Google's annual I/O developers conference held May 28, the search engine giant unveiled new software features, as well as improvements to the Android operating system. Much of the event focused on new context-aware improvements to Google Now, Android's digital personal assistant, but one of the software's biggest features was actually rolled out quietly. This Tuesday, a representative for Google at the Search Marketing Expo in Paris demonstrated Google Now's location-aware search feature. The tool, which has already rolled out to most Android devices, was captured on video by Search Engine Land's Danny Sullivan and posted to Twitter. Location-Aware Search is a live but unannounced feature in the Google Search app.