Azevedo, Roger (McGill University) | Johnson, Amy (University of Memphis) | Burkett, Candice (University of Memphis) | Chauncey, Amber (University of Memphis) | Lintean, Mihai (University of Memphis) | Cai, Zhiqiang (University of Memphis) | Rus, Vasile (University of Memphis)
An experiment was conducted to test the efficacy of a new intelligent hypermedia system, MetaTutor, which is intended to prompt and scaffold the use of self-regulated learning (SRL) processes during learning about a human body system. Sixty-eight (N=68) undergraduate students learned about the human circulatory system under one of three conditions: a prompt and feedback (PF) condition, a prompt-only (PO) condition, and a control (C) condition. The PF condition received timely prompts from animated pedagogical agents to engage in planning processes, monitoring processes, and learning strategies, and also received immediate directive feedback from the agents concerning the deployment of those processes. The PO condition received the same timely prompts but no feedback following the deployment of the processes. Finally, the control condition learned without any assistance from the agents during the learning session. All participants had two hours to learn using a 41-page hypermedia environment that included texts describing, and static diagrams depicting, various topics concerning the human circulatory system. Results indicate that the PF condition had significantly higher learning efficiency scores than the control condition. There were no significant differences between the PF and PO conditions. These results are discussed in the context of the development of a fully adaptive hypermedia learning system intended to scaffold self-regulated learning.
This work compares user collaboration with conversational personal assistants vs. teams of expert chatbots. Two studies were performed to investigate whether each approach affects task accomplishment and collaboration costs. Participants interacted with two equivalent financial advice chatbot systems, one composed of a single conversational adviser and the other based on a team of four expert chatbots. Results indicated that users had different forms of experience but were equally able to achieve their goals. Contrary to expectations, there was evidence that in the teamwork situation users were better able to predict agent behavior and did not incur overhead to maintain common ground, indicating similar collaboration costs. The results point towards the feasibility of either of the two approaches for user collaboration with conversational agents.
We evaluate the impact of tutor voice quality in the context of our intelligent tutoring spoken dialogue system. We first describe two versions of our system, which yielded two corpora of human-computer tutoring dialogues: one using a tutor voice prerecorded by a human, and the other using a low-cost text-to-speech tutor voice. We then discuss the results of two-tailed t-tests comparing student learning gains, system usability, and dialogue efficiency across the two corpora and across corpora subsets. Overall, our results suggest that tutor voice quality may have only a minor impact on these metrics in the context of our tutoring system. We find that tutor voice quality does not impact learning gains, but it may impact usability and efficiency for some corpora subsets.
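The abstract does not specify which t-test variant was used; as a minimal sketch, a two-tailed comparison of, say, learning gains across the two corpora can be computed with Welch's t statistic (an assumption here; the study may have used Student's pooled-variance t instead). The sample data below is purely illustrative, not from the paper:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite
    degrees of freedom (does not assume equal variances)."""
    ma, mb = mean(a), mean(b)
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb             # squared standard error of the difference
    t = (ma - mb) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical normalized learning gains for the two tutor-voice corpora
human_voice = [0.42, 0.55, 0.38, 0.61, 0.47]
tts_voice   = [0.40, 0.52, 0.35, 0.58, 0.49]
t, df = welch_t(human_voice, tts_voice)
```

For a two-tailed test, |t| would then be compared against the critical value of the t distribution with df degrees of freedom at the chosen alpha level.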
Traditional techniques for monitoring wildlife populations are temporally and spatially limited. Alternatively, in order to quickly and accurately extract information about the current state of the environment, tools for processing and recognizing acoustic signals can be used. In the past, a number of research studies on automatic classification of species through their vocalizations have been undertaken. In many of them, however, the segmentation applied in the preprocessing stage either requires human effort or is insufficiently described to be reproduced, and may therefore be unfeasible in real conditions. In particular, this paper focuses on the extraction of local information as units --called instances-- from audio recordings. The methodology for instance extraction consists of segmentation, carried out on spectrograms using image processing techniques, and estimation of the required threshold by Otsu's method. The multiple instance classification (MIC) approach is used for the recognition of the sound units. A public data set was used for the experiments. The proposed unsupervised segmentation method has a practical advantage over the compared supervised method, which requires training on manually segmented spectrograms. Results show that there is no significant difference between the proposed method and its baseline. It is therefore shown that the proposed approach is feasible for designing an automatic recognition system that requires, as training information, only labeled examples of audio recordings.
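The thresholding step named in the abstract, Otsu's method, selects the gray level that maximizes the between-class variance of an intensity histogram; pixels above the threshold would then be kept as candidate sound regions in the spectrogram. A minimal pure-Python sketch (the paper's full image-processing pipeline is not detailed here, and the 8-bit quantization below is an assumption):

```python
def otsu_threshold(values, levels=256):
    """Return the Otsu threshold for integer intensities in [0, levels).
    Chooses the level that maximizes between-class variance."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * hist[i] for i in range(levels))
    w_bg = 0        # background pixel count so far
    sum_bg = 0      # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        # Between-class variance (up to a constant factor)
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Hypothetical quantized spectrogram magnitudes: quiet background vs. calls
pixels = [12, 15, 10, 14, 11, 210, 220, 205, 215, 13]
thr = otsu_threshold(pixels)
mask = [p > thr for p in pixels]   # foreground mask for segmentation
```

Connected regions of the resulting binary mask would correspond to the instances fed to the MIC classifier.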
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. Over two days of testimony before Congress earlier this month, Facebook founder and CEO Mark Zuckerberg dodged a litany of questions from lawmakers about how the data of 87 million Americans ended up in the hands of voter profiling firm Cambridge Analytica. The spectacle put a spotlight on the company's murky data-collection and sharing practices, and sparked a much-needed discussion about whether and how to hold companies accountable for their handling of user data. However deserved, Facebook has so far borne the brunt of public scrutiny for what has unfortunately become standard practice for web platforms and services. As the Ranking Digital Rights 2018 Corporate Accountability Index--an annual ranking of some of the world's most powerful internet, mobile, and telecommunications companies that was released this week--shows, companies across the board lack transparency about what user data they collect and share, and tell us alarmingly little about their data-sharing agreements with advertisers or other third parties.