Goto

Collaborating Authors

Computer Theater: Stage for Action Understanding

AAAI Conferences

Action is the basis of theater and, as such, needs to be fully incorporated in whatever model a computer is running during a computer-based theatrical performance. We believe the lack of good models for action is one fundamental reason for the relative absence of experiments involving theater and computers. Attempts to wire up stages or performers have generally been concerned with dance (Lovell & Mitchell 1995), using only information about the position and attitude of the actors/dancers on the stage. The main argument of this paper is that computer theater not only requires action representation and recognition but is also an interesting domain for action research. To support this argument we begin by examining the multiple possibilities of using computers in theatrical performances, covering both explored and unexplored developments. Recent theatrical experiences are preferred for citation over older ones in order to draw a picture of current research.


An AI Watched 600 Hours of TV and Started to Accurately Predict What Happens Next

#artificialintelligence

MIT's Computer Science and Artificial Intelligence Laboratory created an algorithm that uses deep learning, enabling artificial intelligence (AI) to use patterns of human interaction to predict what will happen next. Researchers fed the program videos featuring human social interactions and tested whether it had "learned" well enough to predict upcoming interactions. While this choice of training material may seem questionable, MIT doctoral candidate and project researcher Carl Vondrick explains that accessibility and realism were part of the criteria. "We just wanted to use random videos from YouTube," Vondrick said. "The reason for television is that it's easy for us to get access to that data, and it's somewhat realistic in terms of describing everyday situations."


Folsom-Kovarik

AAAI Conferences

Training scenarios, games, and learning environments often use narrative to manipulate motivation, priming, decision context, or other aspects of effective training. Computational representations of scenario narrative are useful for computer planning and real-time tailoring of training content, but they typically define only how the narrative is displayed in the scenario world. The training rationales and the impacts of narrative on trainees are typically not accessible to the computer. We describe a computational representation that lets instructors explicitly author the training goals and impacts in a narrative.
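As a rough illustration of what such a representation might look like (a minimal sketch only; the class and field names below are assumptions, not the authors' schema), a narrative event could pair its in-world presentation with explicitly authored training goals and expected trainee impacts, making the rationale machine-readable rather than implicit:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NarrativeEvent:
    """One scenario beat: what is shown, plus why it is there (hypothetical schema)."""
    display_text: str                                           # how the event appears in the scenario world
    training_goals: List[str] = field(default_factory=list)    # instructor-authored goals
    intended_impacts: List[str] = field(default_factory=list)  # expected effects on the trainee

# Example authored event: a planner can now reason over goals and impacts,
# not just over how the narrative is displayed.
checkpoint_report = NarrativeEvent(
    display_text="A civilian reports suspicious activity near the checkpoint.",
    training_goals=["practice rules-of-engagement decision making"],
    intended_impacts=["raise time pressure", "prime threat assessment"],
)

if __name__ == "__main__":
    print(checkpoint_report.training_goals, checkpoint_report.intended_impacts)
```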


r/CompressiveSensing - Is anyone aware of methods to "pre-correlate" two signals so you can send a sparser representation around?

#artificialintelligence

I often compute ambiguity functions, which end up being very sparse (often a single non-noise bin) after correlation. It'd be great if I could somehow take the two inputs, A and B, and do something to get sparser representations A' and B' that I could then transport over my network to a central correlation server to get the final ambiguity surface. Does anyone know of any work in that direction?
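For readers unfamiliar with the setup, the sketch below illustrates what is being computed, not an answer to the compression question: it builds two noisy copies of the same waveform offset in delay and Doppler (all signal parameters are assumed for illustration), evaluates a cross-ambiguity surface by correlating over delay for a grid of Doppler shifts, and prints how strongly the energy concentrates in a single bin, which is the sparsity the poster describes.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000.0                      # sample rate (Hz), assumed
n = 1024
t = np.arange(n) / fs

# Common waveform seen by two sensors with a relative delay and Doppler shift.
x = rng.standard_normal(n)
true_delay = 37                   # samples
true_doppler = 25.0               # Hz
a = x + 0.1 * rng.standard_normal(n)
b = np.roll(x, true_delay) * np.exp(2j * np.pi * true_doppler * t)
b += 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Cross-ambiguity surface: remove each candidate Doppler, then correlate over delay.
dopplers = np.arange(-50.0, 51.0, 1.0)                 # Hz grid
surface = np.empty((dopplers.size, n), dtype=complex)
for i, fd in enumerate(dopplers):
    b_shifted = b * np.exp(-2j * np.pi * fd * t)       # strip candidate Doppler
    # circular cross-correlation over delay via FFT
    surface[i] = np.fft.ifft(np.fft.fft(b_shifted) * np.conj(np.fft.fft(a)))

mag = np.abs(surface)
peak = np.unravel_index(np.argmax(mag), mag.shape)
print("estimated Doppler (Hz):", dopplers[peak[0]], "estimated delay (samples):", peak[1])
print("fraction of energy in peak bin:", mag[peak] ** 2 / (mag ** 2).sum())
```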


Yu

AAAI Conferences

This paper proposes a new representation to explain and predict popularity evolution in social media. Recent work on social networks has led to insights about the popularity of a digital item; for example, both the content and the network matter, and gaining early popularity is critical. However, these observations do not paint a full picture of popularity evolution; open questions include: what kinds of popularity trends exist among different types of videos, and will an unpopular video become popular? To this end, we propose a novel phase representation that extends the well-known endogenous growth and exogenous shock model (Crane and Sornette 2008). We further propose efficient algorithms to simultaneously estimate and segment power-law shaped phases from historical popularity data. With the extracted phases, we find that videos go through not one but multiple stages of popularity increase or decrease over many months. On a dataset containing the 2-year history of over 172,000 YouTube videos, we observe that phases are directly related to content type and popularity change; e.g., nearly 3/4 of the top 5% most popular videos have 3 or more phases, more than 60% of news videos are dominated by one long power-law decay, and 75% of videos that made a significant jump to become among the most popular have been in increasing phases. Finally, we leverage this phase representation to predict future viewcount gain and find that using phase information reduces the average prediction error over the state of the art for videos of all phase shapes.
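As a rough illustration of the "power-law shaped phase" idea (a minimal sketch, not the authors' estimation or segmentation algorithm; the synthetic data and parameter names are assumptions), a single decay phase of the form v(t) ~ c * t^(-alpha) can be fit by linear regression in log-log space:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily viewcounts for one phase: v(t) ~ c * t**(-alpha) with noise.
true_c, true_alpha = 5000.0, 0.8
t = np.arange(1, 181)                                  # roughly six months of days
views = true_c * t ** (-true_alpha) * np.exp(0.1 * rng.standard_normal(t.size))

# Fit log(v) = log(c) - alpha * log(t) by least squares.
slope, intercept = np.polyfit(np.log(t), np.log(views), 1)
alpha_hat, c_hat = -slope, np.exp(intercept)
print(f"estimated alpha ~ {alpha_hat:.2f}, c ~ {c_hat:.0f}")
# A decay phase has alpha_hat > 0; an increasing-popularity phase would show alpha_hat < 0.
```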