This research report introduces the generation of textual entailment (GTE) within the CSIEC (Computer Simulation in Educational Communication) project, an interactive web-based human-computer dialogue system with natural language for English instruction. GTE is critical to the further improvement of the CSIEC project, yet to date we have found little literature related to it. Simulating the process by which a human learns English as a foreign language, we explore a naive approach to the GTE problem and its algorithm within the CSIEC framework, i.e., rule annotation in NLML, pattern recognition (matching), and entailment transformation. The time and space complexity of our algorithm is tested with several entailment examples. Future work includes annotating entailment rules based on English textbooks and building a GUI interface that lets ordinary users edit the rules.
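The match-then-transform pipeline the abstract names (pattern recognition followed by entailment transformation) can be sketched in a few lines. The rules below are invented illustrations, not the actual NLML annotations used in CSIEC: each rule pairs a surface pattern (here, a regex with named groups) with a template that rewrites the matched sentence into one it entails.

```python
import re

# Hypothetical entailment rules for illustration only; CSIEC encodes
# its rules in NLML, not as raw regexes.
RULES = [
    # "X bought Y" entails "X owns Y"
    (re.compile(r"^(?P<subj>\w+) bought (?P<obj>.+)\.$"), "{subj} owns {obj}."),
    # "A is taller than B" entails "B is shorter than A"
    (re.compile(r"^(?P<a>\w+) is taller than (?P<b>\w+)\.$"),
     "{b} is shorter than {a}."),
]

def generate_entailments(sentence):
    """Return every sentence entailed by `sentence` under the rule set."""
    results = []
    for pattern, template in RULES:
        match = pattern.match(sentence)          # pattern recognition
        if match:
            # entailment transformation: fill the template with the
            # captured constituents
            results.append(template.format(**match.groupdict()))
    return results

print(generate_entailments("John bought a car."))       # ['John owns a car.']
print(generate_entailments("Mary is taller than Tom."))
```

Matching every rule against the input makes the time cost linear in the number of rules, which is consistent with testing the algorithm's time and space complexity on example sets as the abstract describes.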
Pathologists agreed just three-quarters of the time when diagnosing breast cancer from biopsy specimens, according to a recent study. The difficult, time-consuming process of analyzing tissue slides is why pathology is one of the most expensive departments in any hospital. Faisal Mahmood, assistant professor of pathology at Harvard Medical School and the Brigham and Women's Hospital, leads a team developing deep learning tools that combine a variety of sources -- digital whole slide histopathology data, molecular information, and genomics -- to aid pathologists and improve the accuracy of cancer diagnosis. Mahmood, who heads his eponymous Mahmood Lab in the Division of Computational Pathology at Brigham and Women's Hospital, spoke this week about this research at GTC DC, the Washington edition of our GPU Technology Conference. The variability in pathologists' diagnosis "can have dire consequences, because an uncertain determination can lead to more biopsies and unnecessary interventional procedures," he said in a recent interview.
Tissue biopsy slides stained using hematoxylin and eosin (H&E) dyes are a cornerstone of histopathology, especially for pathologists needing to diagnose and determine the stage of cancers. A research team led by MIT scientists at the Media Lab, in collaboration with clinicians at Stanford University School of Medicine and Harvard Medical School, now shows that digital scans of these biopsy slides can be stained computationally, using deep learning algorithms trained on data from physically dyed slides. Pathologists who examined the computationally stained H&E slide images in a blind study could not tell them apart from traditionally stained slides, and used them to accurately identify and grade prostate cancers. What's more, the slides could also be computationally "de-stained" in a way that resets them to an original state for use in future studies, the researchers conclude in their May 20 study published in JAMA Network Open. This process of computational digital staining and de-staining preserves small amounts of tissue biopsied from cancer patients and allows researchers and clinicians to analyze slides for multiple kinds of diagnostic and prognostic tests, without needing to extract additional tissue sections.
The current study examined the degree to which the quality and characteristics of students’ essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed using recurrence quantification analysis (RQA). Results of correlation and regression analyses revealed that the RQA indices were significantly related to the quality of students’ essays, at both holistic and sub-scale levels (e.g., organization, cohesion). Additionally, these indices were able to account for between 11% and 43% of the variance in students’ holistic and sub-scale essay scores. Overall, our results suggest that dynamic techniques can be used to improve natural language processing assessments of student essays.
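The core quantity behind the RQA indices mentioned above can be illustrated with a minimal sketch of categorical recurrence on a word sequence. The recurrence-rate definition below follows standard RQA, but the tokenizer and the single index computed are simplified assumptions, not the study's exact pipeline:

```python
def recurrence_matrix(words):
    """Binary matrix: cell (i, j) is 1 when the word at position i
    recurs at position j (categorical recurrence)."""
    n = len(words)
    return [[1 if words[i] == words[j] else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(words):
    """Share of recurrent points among all off-diagonal cells of the
    recurrence matrix; one of the basic RQA indices."""
    n = len(words)
    m = recurrence_matrix(words)
    recurrent = sum(m[i][j] for i in range(n) for j in range(n) if i != j)
    return recurrent / (n * n - n)

tokens = "the cat saw the dog and the dog ran".lower().split()
print(round(recurrence_rate(tokens), 3))  # 0.111
```

Indices like this one, computed over each essay's word sequence, are the predictors that the correlation and regression analyses relate to the holistic and sub-scale essay scores.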
Roscoe, Rod (University of Memphis) | Varner, Laura (University of Memphis) | Cai, Zhiqiang (University of Memphis) | Weston, Jennifer (University of Memphis) | Crossley, Scott (Georgia State University) | McNamara, Danielle (University of Memphis)
Research on automated essay scoring (AES) indicates that computer-generated essay ratings are comparable to human ratings. However, despite investigations into the accuracy and reliability of AES scores, less attention has been paid to the feedback delivered to students. This paper presents a method developers can use to quickly evaluate the usability of an automated feedback system prior to testing with students. Using this method, researchers evaluated the feedback provided by the Writing-Pal, an intelligent tutor for writing strategies. Lessons learned and potential directions for future research are discussed.