Mannem, Prashanth
Learning Greedy Policies for the Easy-First Framework
Xie, Jun (Oregon State University) | Ma, Chao (Oregon State University) | Doppa, Janardhan Rao (Washington State University) | Mannem, Prashanth (Oregon State University) | Fern, Xiaoli (Oregon State University) | Dietterich, Thomas G. (Oregon State University) | Tadepalli, Prasad (Oregon State University)
Easy-first, a search-based structured prediction approach, has been applied to many NLP tasks including dependency parsing and coreference resolution. This approach employs a learned greedy policy (action scoring function) to make easy decisions first, which constrains the remaining decisions and makes them easier. We formulate greedy policy learning in the Easy-first approach as a novel non-convex optimization problem and solve it via an efficient Majorization-Minimization (MM) algorithm. Results on within-document coreference and cross-document joint entity and event coreference tasks demonstrate that the proposed approach achieves statistically significant performance improvements over existing training regimes for Easy-first and is less susceptible to overfitting.
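The decoding loop described in the abstract — repeatedly scoring all pending actions and applying the easiest (highest-scoring) one so that later decisions are constrained — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the `score`, `apply_action`, and `state` interfaces are assumptions, and the paper's contribution is how the scoring policy is learned (via the MM algorithm), not this loop itself.

```python
def easy_first_decode(actions, score, apply_action, state):
    """Greedy easy-first inference: at each step, apply the
    highest-scoring (i.e. easiest) remaining action, then rescore
    the rest in the updated state."""
    pending = list(actions)
    while pending:
        # Score every remaining action in the current state;
        # the learned policy determines which decision is "easiest".
        best = max(pending, key=lambda a: score(a, state))
        # Applying the easy decision constrains the remaining ones.
        state = apply_action(state, best)
        pending.remove(best)
    return state

# Toy usage: actions are numbers, the score is the value itself,
# and the state records the order in which actions were taken.
order = easy_first_decode([2, 9, 5],
                          score=lambda a, s: a,
                          apply_action=lambda s, a: s + [a],
                          state=[])
```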
A New Approach to Ranking Over-Generated Questions
McConnell, Claire Cooper (University of Pennsylvania) | Mannem, Prashanth (International Institute of Information Technology) | Prasad, Rashmi (University of Wisconsin-Milwaukee) | Joshi, Aravind (University of Pennsylvania)
We discuss several improvements to the Question Generation Shared Task Evaluation Challenge (QGSTEC) system developed at the University of Pennsylvania in 2010. In addition to enhancing the question generation rules, we have implemented two new components to improve the ranking process. We use topic scoring, a technique developed for summarization, to identify information important enough to question, and language model probabilities to measure grammaticality. Preliminary experiments show that our approach is feasible.