Is there a formula for writing a hit musical? And if so, could a computer do it? That's one of the questions the team behind the new musical Beyond the Fence hopes to answer. The musical, making its debut in London in February, was written almost entirely by computers. CBC technology columnist Dan Misener answers some questions on what this means for the intersection of science and art.
Blogging Birds is a novel artificial intelligence program that generates creative texts to communicate telemetric data derived from satellite tags fitted to red kites -- medium-sized birds of prey -- as part of a species reintroduction program in the U.K. We address the challenge of communicating telemetric sensor data in real time by enriching it with meteorological and cartographic data, codifying ecological knowledge to allow creative interpretation of the behavior of individual birds with respect to such enriched data, and dynamically generating informative and engaging data-driven blogs aimed at the general public.

Geospatial data is ubiquitous in today's world, with vast quantities of telemetric data collected by GPS receivers on, for example, smartphones and automotive black boxes. Adoption of telemetry has been particularly striking in the ecological realm, where the widespread use of satellite tags has greatly advanced our understanding of the natural world.14,23 Despite its increasing popularity, GPS telemetry has an important shortcoming: handling and interpreting the often large amounts of location data is time consuming and is thus done mostly long after the data has been gathered.10,24 This hampers fruitful use of the data in nature conservation, where immediate analysis and interpretation are needed to take action or to communicate with a wider audience.25,26

The widespread availability of GPS data, along with the associated difficulties of interpreting and communicating it in real time, mirrors the situation seen with other forms of numeric or structured data. It should be noted that the use of computational methods for data analysis per se is hardly new; much of science depends on statistical analysis and associated visualization tools. However, such tools are generally mediated by human operators who take responsibility for identifying patterns in data, as well as communicating them accurately.
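The data-to-text idea described above can be sketched in miniature: take GPS fixes that have been enriched with weather data, derive a simple movement statistic, and render a rule-based sentence. This is a minimal illustration of the general technique, not the Blogging Birds system itself; the `Fix` record, field names, thresholds, and the bird's name are all invented for the example.

```python
import math
from dataclasses import dataclass

# Hypothetical record: one GPS fix enriched with a meteorological value.
@dataclass
class Fix:
    lat: float
    lon: float
    altitude_m: float
    wind_speed_ms: float  # assumed to come from a weather feed

def distance_km(a: Fix, b: Fix) -> float:
    # Equirectangular approximation -- adequate for short hops.
    dlat = math.radians(b.lat - a.lat)
    dlon = math.radians(b.lon - a.lon) * math.cos(math.radians((a.lat + b.lat) / 2))
    return 6371.0 * math.hypot(dlat, dlon)

def describe_day(name: str, fixes: list[Fix]) -> str:
    # Toy data-to-text rules: summarize distance travelled, then add
    # one weather-aware remark based on the enriched data.
    total = sum(distance_km(a, b) for a, b in zip(fixes, fixes[1:]))
    if total < 1.0:
        movement = f"{name} stayed close to its roost"
    else:
        movement = f"{name} covered about {total:.0f} km"
    windy = max(f.wind_speed_ms for f in fixes) > 10.0
    remark = "riding strong winds" if windy else "in calm conditions"
    return f"{movement}, {remark}."
```

A real system layers codified ecological knowledge on top of such rules (for example, interpreting long flights differently in the breeding season), but the basic pattern of enriching the raw fixes and verbalizing the result is the same.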
What if you could mix and match different tracks from your favorite artists, or create new ones on your own with their voices? This could become a reality sooner rather than later, as AI models similar to the ones used to create computer-generated art images and embed deepfakes in videos are increasingly being applied to music. The use of algorithms to create music is not new: researchers used computer programs to generate piano sheet music as far back as the 1950s, and composers of that era such as Iannis Xenakis and Gottfried Koenig even used them to compose their own music. What has changed is the power of generative algorithms, which first gained popularity in 2014, coupled with the large amounts of compute power that are increasingly changing what computers can do with music today.
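The early algorithmic composition mentioned above can be illustrated with a first-order Markov chain over note names: learn which note tends to follow which from an existing melody, then walk the chain to generate a new one. This is a generic sketch of the technique, not a reconstruction of any specific historical system, and the training melody here is invented.

```python
import random

def train(melody):
    # Build a transition table: for each note, the notes observed after it.
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    # Walk the chain, sampling each next note from the observed successors.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:
            break  # dead end: the last note never had a successor
        out.append(rng.choice(successors))
    return out

table = train(["C", "D", "E", "C", "D", "G", "E", "C"])
tune = generate(table, "C", 8)
```

Modern neural generators replace the hand-countable transition table with learned models over audio or symbolic music, but the underlying idea of sampling from learned statistics is a direct descendant of this approach.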
Video games have been entertaining us for nearly 30 years, ever since Pong was introduced to arcades in the early 1970s. Computer graphics have become much more sophisticated since then, and game graphics are pushing the boundaries of photorealism. Now, researchers and engineers are pulling graphics out of your television screen or computer display and integrating them into real-world environments. This new technology, called augmented reality, blurs the line between what's real and what's computer-generated by enhancing what we see, hear, feel and smell. On the spectrum between virtual reality, which creates immersive, computer-generated environments, and the real world, augmented reality is closer to the real world.
This paper presents work using crowdsourcing to assess explanations for supervised text classification. Here, an explanation is defined as a set of words from the input text that a classifier or human believes to be most useful for making a classification decision. We compared two types of explanations for classification decisions, human-generated and computer-generated, based on whether the type of explanation was identifiable and on which type was preferred. Crowdsourcing was used to collect two types of data for these experiments. First, human-generated explanations were collected by having users select an appropriate category for a piece of text and highlight the words that best support this category. Second, users were asked to compare human- and computer-generated explanations and indicate which they preferred and why. The crowdsourced data used for this paper was collected primarily via Amazon’s Mechanical Turk, using several quality control methods. We found that in one test corpus, the two explanation types were virtually indistinguishable, and that participants had no significant preference for one type over the other. For another corpus, the explanations were slightly more distinguishable, and participants showed a small but statistically significant preference for the computer-generated explanations. We conclude that computer-generated explanations for text classification can be comparable in quality to human-generated explanations.
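A computer-generated explanation of the kind defined above can be produced by scoring each word in a document under a simple bag-of-words model and returning the top-scoring words. The sketch below uses per-class log-odds with add-one smoothing; it is a minimal illustration under invented data, not the classifier or corpora from the paper.

```python
import math
from collections import Counter

def word_scores(docs, labels, target, alpha=1.0):
    # Log-odds of each word appearing in the target class vs. the rest,
    # with add-alpha smoothing so unseen words get finite scores.
    in_class, out_class = Counter(), Counter()
    for doc, lab in zip(docs, labels):
        (in_class if lab == target else out_class).update(doc.lower().split())
    vocab = set(in_class) | set(out_class)
    n_in = sum(in_class.values()) + alpha * len(vocab)
    n_out = sum(out_class.values()) + alpha * len(vocab)
    return {w: math.log((in_class[w] + alpha) / n_in)
               - math.log((out_class[w] + alpha) / n_out)
            for w in vocab}

def explain(doc, scores, k=3):
    # The explanation: the k words in the document that most strongly
    # support the predicted class.
    words = set(doc.lower().split())
    return sorted(words, key=lambda w: scores.get(w, 0.0), reverse=True)[:k]
```

Highlighting the returned words in the input text yields exactly the kind of word-set explanation that the crowdsourced participants were asked to compare against human highlights.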