Collaborating Authors

The use of Virtual Reality in Enhancing Interdisciplinary Research and Education

Artificial Intelligence

Virtual Reality (VR) is increasingly being recognized for its educational potential: because it supports interactive and collaborative activities, it offers an effective way to convey new knowledge. Affordable VR powered by mobile technologies is opening a world of opportunities that can transform the ways in which we learn and engage with others. This paper reports our study of the application of VR in stimulating interdisciplinary communication and investigates the promise of VR in interdisciplinary education and research. The main contributions of this study are (i) a literature review of the theories of learning underlying the use of VR systems in education, (ii) a taxonomy of the various types and implementations of VR systems and their application in supporting education and research, (iii) an evaluation of educational applications of VR from a broad range of disciplines, (iv) an investigation of how the learning process and learning outcomes are affected by VR systems, and (v) a comparative analysis of VR and traditional teaching methods in terms of quality of learning. This study seeks to inspire and inform interdisciplinary researchers and learners about the ways in which VR might support them, and to encourage VR software developers to push the limits of their craft.

Disputing Dijkstra, and Birthdays in Base 2

Communications of the ACM

Edsger Dijkstra's 1988 paper "On the Cruelty of Really Teaching Computer Science" is one of the most widely cited papers on computer science (CS) education. A growing body of recent research explores the very topic that Dijkstra tried to warn us away from: how we learn and teach computer science with metaphor. According to Google Scholar, Dijkstra's paper has been cited 571 times. In contrast, the most-cited SIGCSE-related paper in the ACM Digital Library has 412 citations. Dijkstra's paper has thus been cited more than any peer-reviewed CS education research.

Testing AI-based apps? Think like a human

TechBeacon

Your testing of software that includes artificial intelligence (AI) components will be more sophisticated and robust if you think in human terms. If you want to understand the testing requirements for things such as predictive analytics, you need to think about how AI "learns its world." For example, you'll want to know where, and how, predictions fall apart, as well as the potential weaknesses of an algorithm and how to find them. Like people, machines have past experiences. But those experiences are provided by the programmers who create the training sets of historical data from which the system learns.
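The idea of probing where predictions fall apart can be made concrete with a small sketch (illustrative only; the function, data, and thresholds here are our own, not from the article). A model fitted to a narrow range of historical data behaves plausibly inside that range, and its error blows up far outside it; that gap is exactly what a tester thinking in human terms should look for.

```python
# Illustrative sketch: a model's "past experience" is its training set,
# so we test it both inside and far outside that experience.

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b (a deliberately simple learner)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

# Training set: the "historical data" covers only a narrow range,
# and the true relation is nonlinear (y = x^2).
train_x = [float(i) for i in range(10)]
train_y = [x * x for x in train_x]

model = fit_linear(train_x, train_y)

# In-range error is modest; far outside the training range,
# predictions fall apart. The size of this gap is what tests probe.
in_range_err = abs(model(5.0) - 25.0)
out_of_range_err = abs(model(100.0) - 10000.0)
print(in_range_err < out_of_range_err)  # prints True
```

The same pattern applies to real predictive-analytics systems: characterize the training distribution, then deliberately test inputs near and beyond its edges rather than only inputs that resemble the training data.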

From Constructionist to Constructivist A.I.

AAAI Conferences

The development of artificial intelligence systems has to date been largely one of manual labor. This Constructionist approach to A.I. has resulted in a diverse set of isolated solutions to relatively small problems. Small success stories of putting these pieces together in robotics, for example, have made people optimistic that continuing on this path would lead to artificial general intelligence. This is unlikely. "The A.I. problem" has been divided up without much guidance from science or theory, resulting in a fragmentation of the research community and a set of grossly incompatible approaches. Standard software development methods come with serious limitations in scaling; in A.I. the Constructionist approach results in systems with limited domain application and severe performance brittleness. Genuine integration, as required for general intelligence, is therefore practically and theoretically precluded. Yet going beyond current A.I. systems requires significantly more complex integration than attempted to date, especially regarding transversal functions such as attention and learning. The only way to address the challenge is to replace top-down architectural design as a major development methodology with methods focusing on self-generated code and self-organizing architectures. I call this Constructivist A.I., in reference to the self-constructive principles on which it must be based. Methodologies employed for Constructivist A.I. will be very different from today's software development methods. In this paper I describe the argument in detail and examine some of the implications of this impending paradigm shift.

CS Unplugged or Coding Classes?

Communications of the ACM

Computer science unplugged (CS Unplugged, or just "Unplugged") is a pedagogy for teaching computational ideas to grade-school students without using a computer. It was developed in the early 1990s out of necessity, when working with computers in the classroom was not usually practical, but it still finds widespread adoption as a supplement to computer-based lessons, even where devices are readily available. This appears contradictory to some (if you are teaching computer science, why not spend as much time as possible on a computer?). Unfortunately, Unplugged can also be used to justify poor decisions when treated as a complete curriculum in itself: a teacher who does not have the time or support to extend themselves into new curriculum content might rely on Unplugged as "enough," or administrators might justify a lack of funding by suggesting that schools use Unplugged teaching instead of buying devices. The Unplugged approach is mentioned in dozens of research papers about CS education, has been translated into many languages, and is widely used in teacher professional development.