This research report introduces the generation of textual entailment within the CSIEC (Computer Simulation in Educational Communication) project, an interactive web-based human-computer dialogue system with natural language for English instruction. The generation of textual entailment (GTE) is critical to the further improvement of the CSIEC project, yet to date we have found little literature on GTE. Simulating the process by which a human being learns English as a foreign language, we explore a naive approach to the GTE problem and its algorithm within the framework of CSIEC, i.e. rule annotation in NLML, pattern recognition (matching), and entailment transformation. The time and space complexity of our algorithm is tested with some entailment examples. Future work includes rule annotation based on English textbooks and a GUI for ordinary users to edit the entailment rules.
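To make the three-stage approach concrete, here is a minimal Python sketch of the pattern-matching and entailment-transformation steps, with plain regular expressions standing in for NLML rule annotations; the rules and sentence patterns are invented for illustration and are not drawn from the actual CSIEC rule base.

```python
import re

# Hypothetical entailment rules: each pairs a sentence pattern with a
# transformation template. The real system annotates rules in NLML;
# plain regular expressions stand in for that markup here.
RULES = [
    # "X bought Y" entails "X owns Y"
    (re.compile(r"^(?P<subj>\w+) bought (?P<obj>.+)$"), "{subj} owns {obj}"),
    # "X is taller than Y" entails "Y is shorter than X"
    (re.compile(r"^(?P<a>\w+) is taller than (?P<b>\w+)$"),
     "{b} is shorter than {a}"),
]

def generate_entailments(sentence):
    """Match the sentence against every rule pattern and apply the
    paired transformation to produce entailed sentences."""
    entailed = []
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            entailed.append(template.format(**match.groupdict()))
    return entailed

print(generate_entailments("Mary bought a car"))
# ['Mary owns a car']
```

Each rule is applied independently, so the running time grows linearly with the number of rules, which is one way the time complexity of such an algorithm can be exercised on sample entailments.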
CSIEC (Computer Simulation in Educational Communication) is not only an intelligent web-based human-computer dialogue system with natural language for English instruction, but also a learning assessment system for learners and teachers. Its multiple functions, including grammar-based gap-filling exercises, scenario shows, free chatting, and chatting on a given topic, can satisfy the varied requirements of students with different backgrounds and learning abilities. After a brief explanation of the conception of our dialogue system, as well as a survey of related work, we illustrate the system structure and describe its pedagogical functions in detail, together with the underlying AI techniques such as NLP and rule-based reasoning. We summarize the free Internet usage within a six-month period and the system's integration into English classes at universities and middle schools. The evaluation findings on class integration show that the chatting function has been improved and is frequently used, and that applying the CSIEC system to English instruction can motivate learners to practice English and enhance their learning process. Finally, we conclude with potential improvements.
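As a rough illustration of the kind of rule-based reasoning such a dialogue system relies on for topic-based chatting, the sketch below matches a learner's utterance against topic-specific patterns and returns a canned reply; the rule format, topic, and replies are hypothetical, not the actual CSIEC implementation.

```python
import re

# Invented topic rules for illustration only: each topic maps to an
# ordered list of (pattern, reply) pairs, tried from most to least
# specific, with a catch-all pattern last.
TOPIC_RULES = {
    "travel": [
        (re.compile(r"\bwhere .*go\b", re.I),
         "I would love to visit Beijing. Where would you go?"),
        (re.compile(r".*", re.S),
         "Tell me more about your travel plans."),
    ],
}

def reply(topic, utterance):
    """Return the reply of the first rule whose pattern matches."""
    for pattern, answer in TOPIC_RULES.get(topic, []):
        if pattern.search(utterance):
            return answer
    return "Let's keep chatting!"

print(reply("travel", "Where should I go on holiday?"))
# I would love to visit Beijing. Where would you go?
```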
How can scientists deal with the huge volume of new research published daily? How can computers go further than merely parsing scientific papers and actually suggest hypotheses themselves? When will we see a computer as another member of the lab team, serving hundreds of scientists simultaneously from its huge store of extant research? This is the work of John Bachman, a systems biology PhD from Harvard Medical School, and Benjamin Gyori, a postdoctoral fellow at Harvard Medical School's systems pharmacology lab. They are part of DARPA's Big Mechanism project, which is developing technology that reads research abstracts and papers to extract pieces of causal mechanisms, assembles these pieces into more complete causal models, and produces explanations.
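That pipeline can be pictured with a small sketch: causal fragments extracted from many papers are merged into one weighted model, from which a chain of causes serves as an explanation. The statements, relation names, and functions below are invented toy examples and do not reflect the project's actual software.

```python
from collections import defaultdict

# Toy causal statements as they might be extracted from paper text;
# these triples are invented examples, not real curation output.
statements = [
    ("EGF", "activates", "EGFR"),
    ("EGFR", "activates", "RAS"),
    ("RAS", "activates", "ERK"),
    ("EGFR", "activates", "RAS"),   # duplicate mention from a second paper
]

def assemble(stmts):
    """Merge duplicate statements, counting supporting mentions, so
    fragments from many papers become one weighted causal model."""
    model = defaultdict(int)
    for subj, rel, obj in stmts:
        model[(subj, rel, obj)] += 1
    return model

def explain(model, source, target):
    """Follow edges from source to target to produce a causal chain.
    Assumes the toy model is acyclic with one outgoing edge per node."""
    path, node = [source], source
    while node != target:
        successors = [o for (s, _, o) in model if s == node]
        if not successors:
            return None
        node = successors[0]
        path.append(node)
    return " -> ".join(path)

model = assemble(statements)
print(explain(model, "EGF", "ERK"))
# EGF -> EGFR -> RAS -> ERK
```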
Amazon.com's Alexa has mastered Hindi in just a few years. The voice assistant, introduced in India in 2017, is getting a major local makeover for one of Amazon's largest retail markets. From Wednesday, Amazon is launching a version that speaks Hindi and Hinglish, a blend of Hindi and English, and can also switch automatically among all three languages. The new, improved Alexa and Echo speakers hit the market in time for the Diwali shopping season.
It seems that voice interfaces will be a big part of the future of computing, popping up in phones, smart speakers, and even household appliances. But how useful is this technology for people who don't communicate using speech? Are we creating a system that locks out certain users? These were the questions that inspired software developer Abhishek Singh to create a mod that lets Amazon's Alexa assistant understand some simple sign language commands. In a video, Singh demonstrates how the system works.