Computer Theater: Stage for Action Understanding

AAAI Conferences

Action is the basis of theater and, as such, needs to be fully incorporated in whatever model a computer is running during a computer-based theatrical performance. We believe the lack of good models for action is one fundamental reason for the relative absence of experiments involving theater and computers. Attempts to wire up stages or performers have in general been concerned with dance (Lovell & Mitchell 1995), using only information about the position and attitude of the actors/dancers on the stage. The main argument of this paper is that computer theater not only requires action representation and recognition but is also an interesting domain for action research. To support our argument we begin by examining the multiple possibilities of using computers in theatrical performances, covering both explored and unexplored developments. Recent theatrical experiments are preferred for citation over older ones in order to draw a picture of current research.


An AI Watched 600 Hours of TV and Started to Accurately Predict What Happens Next

#artificialintelligence

MIT's Computer Science and Artificial Intelligence Laboratory created an algorithm that utilizes deep learning, enabling artificial intelligence (AI) to use patterns of human interaction to predict what will happen next. Researchers fed the program videos featuring human social interactions and tested it to see if it had "learned" well enough to predict them. While training on television shows may seem a questionable choice, MIT doctoral candidate and project researcher Carl Vondrick explains that accessibility and realism were part of the criteria. "We just wanted to use random videos from YouTube," Vondrick said. "The reason for television is that it's easy for us to get access to that data, and it's somewhat realistic in terms of describing everyday situations."


r/CompressiveSensing - Is anyone aware of methods to "pre-correlate" two signals so you can send a sparser representation around?

#artificialintelligence

I often compute ambiguity functions, which end up being very sparse (often a single non-noise bin) after correlation. It'd be great if I could somehow take the two inputs, A and B, and do something to get a sparser representation A' and B' that I could then transport over my network to a central correlation server to get the final ambiguity surface. Does anyone know of any work in that direction?
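To make the problem concrete, here is a minimal sketch of the setup being described: two noisy copies of the same pulse, offset by an unknown delay, whose cross-correlation surface is sparse (a single dominant lag bin above the noise floor). The signals, the delay of 37 samples, and the noise level are all hypothetical illustration values, not from the post.

```python
import numpy as np

# Hypothetical test signals: two noisy copies of one pulse, B delayed by 37 samples.
rng = np.random.default_rng(0)
n = 1024
pulse = rng.standard_normal(n)
a = pulse + 0.1 * rng.standard_normal(n)
b = np.roll(pulse, 37) + 0.1 * rng.standard_normal(n)

# Full circular cross-correlation via FFT: this is what the central
# correlation server would compute if it received A and B in full.
xcorr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

# The resulting surface is sparse: one dominant lag bin stands out,
# at lag -37 (i.e. bin n - 37 in circular indexing).
peak_lag = int(np.argmax(np.abs(xcorr)))
print(peak_lag)  # expected: (-37) % n == 987
```

The crux of the question is that `a` and `b` each cost O(n) to transport while the output is effectively one bin, so the hope is to find A' and B' that are cheaper to send yet still let the server recover that peak.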


Facebook research automatically creates an avatar from a photo

#artificialintelligence

Who's got time for it?! Computers, that's who. You'll never have to waste another second selecting your hair style, skin tone, or facial hair length if this research from Facebook finds its way into product form.

