Invited Speaker Biographies

AAAI Conferences

Julie A. Adams is an Assistant Professor of Computer Science and Computer Engineering in the Electrical Engineering and Computer Science Department at Vanderbilt University, where she directs the Human-Machine Teaming Laboratory. Her research focuses on distributed artificially intelligent algorithms for autonomous multiple-robot coalition formation and the development of complex human-machine systems for large human and robotic teams. She has published on topics in autonomous robotic coalition formation, human-robot interaction, cognitive task analysis for robotic systems, and human factors. She worked in human factors for Honeywell, Inc. and the Eastman Kodak Company from 1995 to 2000, and was an Assistant Professor of Computer Science at Rochester Institute of Technology from 2000 until 2003. She is an appointed member of the National Research Council's Army Research Laboratory Technical Assessment Review Panel on Soldier Systems and is a recipient of the NSF CAREER Award.


AAAI Conferences

An Approach to the Development of Technology to Empower the Elderly

Edward Riseman, Allen Hanson, Roderick Grupen (Computer Science Department, University of Massachusetts at Amherst); Phebe Sessions, Julie Abramson, Mary Olson (Smith School of Social Work, Smith College); Candace Sidner (Mitsubishi Electric Research Lab)

The growing number of elderly individuals in need of support to live in the community will severely test the current services infrastructure. Part of the answer is to develop technological innovations that allow an elder population to successfully "age in place" with dignity and a sense of involvement with their community. However, we believe it is essential to understand the needs of the target community through the interdisciplinary perspectives of social science and computer science, in partnership with the potential elderly recipients of the technology themselves. Our team of researchers brings together social scientists and geriatric social work practitioners from Smith College and computer scientists with expertise in computer vision, robotics, augmented and virtual reality, and intelligent user interfaces from the University of Massachusetts and Mitsubishi Electric Research Laboratory (MERL). We believe that those who develop assistive technology should be sufficiently involved at the "ground level" with the elders themselves, their families, caregivers, and service systems.

Artificial Intelligence: Friendly or Frightening?


Computer scientists, public figures, and reporters have gathered to witness or take part in a decades-old challenge. Some of the participants are flesh and blood; others are silicon and binary. Thirty human judges sit down at computer terminals and begin chatting, trying to determine whether they're talking to a computer program or a real person. The event, organized by the University of Reading, was a rendition of the so-called Turing test, proposed 65 years ago by British mathematician and cryptographer Alan Turing as a way to assess whether a machine is capable of intelligent behavior indistinguishable from that of a human.

Artificial intelligence 'judge' developed by UCL computer scientists


Artificial intelligence software that can find patterns in highly complex decisions is being used to predict our taste in films, TV shows, and music with ever-increasing accuracy. And now, after a breakthrough study by a group of British scientists, it could be used to predict the outcome of trials. Software that is able to weigh up legal evidence and moral questions of right and wrong has been devised by computer scientists at University College London, and used to accurately predict the result in hundreds of real-life cases. The AI "judge" reached the same verdicts as judges at the European Court of Human Rights in almost four out of five cases involving torture, degrading treatment, and privacy. The algorithm examined English-language data sets for 584 cases relating to torture and degrading treatment, fair trials, and privacy.

It isn't just Uber: Carnegie Mellon's computer science dean on its poaching problem


Andrew Moore had been a professor of computer science and robotics at Carnegie Mellon University for a dozen years when Google hired him away in 2006 to lead some of its efforts around ad targeting and fraud prevention. CMU lured Moore back in 2014, making him the dean of its computer science school. But he still understands well what goes through his colleagues' minds when industry comes calling, and he says the battle to keep them in academia grows fiercer by the year. Earlier today, we talked with Moore about Uber, which famously raided the school's robotics department a year ago, poaching 40 of its researchers and scientists. We also talked about how Moore entices people to stay, and the newest new thing his 2,000-student school is focused on right now.