A lot of people don't like the word "moist." Several Facebook groups are dedicated to it, one with over 3,000 likes, New Yorker readers overwhelmingly selected it as the word to eliminate from the dictionary, and Jimmy Fallon sarcastically thanked it for being the worst word in the English language. When you ask people why this might be, there is no shortage of armchair theory: that there's something about the sounds involved, that it puts your face in a position similar to the facial expression of disgust, or that it reminds people of mold or sex.
Emerging anxieties about the rapid advance and growing sophistication of artificial intelligence appear to be on a collision course with historical models of human exceptionality and individuality. Yet it is not just objective, technical sophistication in the development of AI that seems to cause this angst. It is also the linguistic treatment of machine "intelligence." But what is really at stake? Are we truly concerned that we will be surpassed in our capacities as human beings?
Creative software can be used for autonomous creative tasks, such as inventing mathematical theories, writing poems, painting pictures, and composing music. However, computational creativity studies also enable us to understand human creativity and to produce programs for creative people to use, where the software acts as a creative collaborator rather than a mere tool. Historically, it has been difficult for society to come to terms with machines that purport to be intelligent, and even more difficult to admit that they might be creative. For instance, in 1934, some professors at the University of Manchester in the United Kingdom built Meccano models that could solve certain mathematical equations. Groundbreaking for its time, this project was written up in a piece in Meccano Magazine.
A number of approaches have been advanced for taking data about a user's likes and dislikes and generating a general profile of the user. These profiles can be used to retrieve documents matching user interests; recommend music, movies, or other similar products; or carry out other tasks in a specialized fashion. This article presents a fundamentally new method for generating user profiles that takes advantage of a large-scale database of demographic data. These data are used to generalize user-specified data along the patterns common across the population, including areas not represented in the user's original data. The input data most often take the form of samples of the user's interests or preferences in a given area, and the profile is a generalization of these data that can be used generatively to carry out tasks on behalf of the user.
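The core move described above — generalizing a user's sparse preference samples along patterns common across a population — can be sketched in a few lines. The example below is a hypothetical toy illustration, not the article's actual method: the co-preference table, function names, and threshold are all invented for exposition.

```python
from collections import defaultdict

# Toy population statistics: for each known interest, the strength with
# which other interests co-occur with it across a (hypothetical)
# large-scale demographic database.
CO_PREFERENCE = {
    "jazz":      {"blues": 0.8, "classical": 0.5, "metal": 0.1},
    "blues":     {"jazz": 0.8, "rock": 0.4},
    "classical": {"jazz": 0.5, "opera": 0.7},
}

def generalize_profile(samples, co_pref, threshold=0.4):
    """Extend user-specified samples along population patterns:
    any interest strongly co-preferred with a sample is inferred,
    covering areas not represented in the user's original data."""
    scores = defaultdict(float)
    for s in samples:
        for other, strength in co_pref.get(s, {}).items():
            scores[other] = max(scores[other], strength)
    inferred = {i for i, v in scores.items()
                if v >= threshold and i not in samples}
    return set(samples) | inferred

# A user who only told us they like jazz gets a broader profile.
profile = generalize_profile(["jazz"], CO_PREFERENCE)
```

Here `profile` picks up "blues" and "classical" (strong population-level associations) while excluding "metal" (weak association); a real system would learn such associations from data rather than hard-code them.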
We have developed an autonomous robot system that takes well-composed photographs of people at social events, such as weddings and conference receptions, and is capable of operating in unaltered environments. This article gives an overview of the entire robot photographer system, outlines the architecture underlying the implementation and how the various components interrelate, and describes our experiences deploying the robot at a number of real-world events.
Conspicuously absent from the 5th Generation Computer Project's proclaimed goals is one vitally important in a 1990s knowledge-intensive society: the ability to help people tame mountains of video-based information. A decade from now, the nation will be crisscrossed with fiber-optic bundles capable of simultaneously carrying thousands of high-resolution video conversations, and solid-state video cameras will be as abundant as microphone pickup devices are today. In short, the voice-telephone and printed-page information networks over which we communicate will be joined by 2-way, super-narrowcast video, where each knowledge worker both receives product from myriad sources and reshapes and originates his own unique product. The main activities interactive video will support are the same ones that will occupy people a decade from now: learning and teaching. Already, one can "walk through" homes for sale thousands of miles away, learn how to assemble, operate, and fix complex machinery, drive around the streets of Aspen, Colorado, and learn facial communication skills using this powerful medium.
The chapters in this book examine the state of today's agent technology and point the way toward the exciting developments of the next millennium. Contributors include Donald A. Norman, Nicholas Negroponte, Brenda Laurel, Thomas Erickson, Ben Shneiderman, Thomas W. Malone, Pattie Maes, David C. Smith, Gene Ball, Guy A. Boy, Doug Riecken, Yoav Shoham, Tim Finin, Michael R. Genesereth, Craig A. Knoblock, Philip R. Cohen, Hector J. Levesque, and James E. White, among others. Held at San Francisco's W Hotel, the conference included work from researchers and practitioners who are developing novel user interface and interaction paradigms that incorporate advanced reasoning and modeling techniques. In the past few years, user interfaces have faced increasingly challenging tasks, larger numbers of users with a wide range of computer skills, and the widespread use of new platforms such as mobile devices. These trends have led to a need for advanced techniques for communication and collaboration, personalization and adaptation of behavior, agent-based assistance, integrated multimodal interfaces, and a variety of intelligent front ends for complex environments and tasks.
In 2008, the AAAI Robotics organizers eschewed the previous format of a Robot Competition, choosing instead to focus on groundbreaking work representing two areas of robotics: creativity, and mobility and manipulation (the latter detailed in a separate article). Both workshops were held on July 14, and the Robotics Exhibition included participants from both categories. The Robotics and Creativity Workshop was made possible through the support of the National Science Foundation's CreativeIT program and Microsoft Research. Developments in mechanical control and complex motion planning have enabled robots to become almost commonplace in situations requiring precise but menial, tedious, and repetitive tasks. Recent robotics research has targeted the mechanical and computational challenges inherent in performing a much broader range of tasks autonomously.
Following a brief overview discussing why people prefer listening to expressive music instead of nonexpressive synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the article we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on Tempo-Express, a case-based reasoning system developed at our institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work that complements audio information with information about the gestures of the musician. Music is played through our bodies; therefore, capturing the gesture of the performer is a fundamental aspect that has to be taken into account in future expressive music renderings. This article is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011.
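The case-based reasoning idea behind such tempo transformations can be sketched minimally: retrieve the stored performance case whose source/target tempo pair is closest to the current problem, then reuse its observed expressive pattern. This is only a hypothetical toy, far simpler than Tempo-Express itself; the case structure, distance measure, and scaling values below are all invented for illustration.

```python
# Each toy case records a tempo-change problem (source -> target, in BPM)
# and a per-note duration-scaling pattern observed in a human performance.
CASES = [
    {"source": 100, "target": 120, "scaling": [0.82, 0.85, 0.83]},
    {"source": 100, "target": 80,  "scaling": [1.24, 1.30, 1.22]},
]

def retrieve_case(cases, source, target):
    """Retrieve: nearest case by L1 distance in (source, target) tempo space."""
    return min(cases,
               key=lambda c: abs(c["source"] - source) + abs(c["target"] - target))

def transform(durations, source, target):
    """Reuse: apply the retrieved case's scaling pattern to the new
    melody's note durations, cycling over the pattern."""
    pattern = retrieve_case(CASES, source, target)["scaling"]
    return [d * pattern[i % len(pattern)] for i, d in enumerate(durations)]

# Speeding a 100 BPM phrase up toward 118 BPM reuses the 100->120 case,
# rather than uniformly rescaling every note.
new_durations = transform([0.5, 0.5, 1.0, 0.5], 100, 118)
```

The point of the sketch is the contrast with naive time-stretching: instead of multiplying every duration by the same ratio, the system reuses musically grounded, note-level scalings from a similar past performance.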