 airtime


AI analysis of segments on CNN, Fox News and MSNBC shows females get less airtime

Daily Mail - Science & tech

Artificial intelligence has uncovered disparities in the airtime women and men received on CNN, Fox News and MSNBC: women were roughly 10 percent less likely to speak during political discussions, largely because male speakers repeatedly interrupted them. The discovery was made by researchers at Rochester Institute of Technology, who analyzed 625,409 dialogues aired on the three cable news networks from January 2000 through July 2021. The analysis revealed that women spoke an average of 72.8 words per speaking turn, compared with 81.4 for male speakers, and that women were interrupted 39.4 percent of the time during discussions, versus 35.9 percent for men. The team believes their AI could be used during talk shows, interviews and political debates to identify a serial interrupter in real time, and the study also reinforces previous research finding that men interrupt women more to assert dominance.
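The per-gender figures above reduce to two simple aggregates over a corpus of speaking turns. As a minimal sketch (not the researchers' actual pipeline), assuming each turn is tagged with the speaker's gender, a word count, and whether it was interrupted:

```python
from collections import defaultdict

def airtime_stats(turns):
    """Aggregate per-gender speaking statistics from dialogue turns.

    Each turn is a dict with (hypothetical) keys:
      'gender'      -- e.g. 'F' or 'M'
      'word_count'  -- number of words spoken in the turn
      'interrupted' -- True if the turn was cut off by another speaker
    """
    totals = defaultdict(lambda: {"turns": 0, "words": 0, "interrupted": 0})
    for t in turns:
        g = totals[t["gender"]]
        g["turns"] += 1
        g["words"] += t["word_count"]
        g["interrupted"] += t["interrupted"]  # bool counts as 0/1
    return {
        gender: {
            "avg_words_per_turn": g["words"] / g["turns"],
            "interruption_rate": g["interrupted"] / g["turns"],
        }
        for gender, g in totals.items()
    }
```

With turn-level annotations like these, the reported averages (72.8 vs. 81.4 words per turn) and interruption rates (39.4 vs. 35.9 percent) fall out of a single pass over the corpus.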


Dynamic Sampling Of Video To Imagery For Deep Learning

#artificialintelligence

While today's deep learning systems can natively analyze video, the large file sizes of high-resolution movies present unique challenges for storage and computation. Sampling them into sequences of still images not only allows for real-time processing of unlimited-length videos but also opens the door to creative new applications like "video ngrams." The most straightforward way to sample a video into a sequence of still images is a fixed-rate, time-based mechanism such as one frame per second. This kind of sampling is supported natively by most tools, including ffmpeg, and provides a simple and robust workflow. At the same time, it is highly inefficient, especially for videos with a lot of repetition. In the case of television news, a considerable portion of the airtime is devoted to motionless anchors sitting in an unchanging studio, meaning there can be quite literally thousands of nearly identical frames in a single broadcast.
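Fixed-rate sampling is a one-liner with ffmpeg (e.g. `ffmpeg -i input.mp4 -vf fps=1 frame%05d.jpg` for one frame per second). A dynamic alternative keeps a frame only when it differs enough from the last frame kept, collapsing long runs of near-identical studio shots. A minimal sketch, treating each frame as a flattened sequence of grayscale pixel values in [0, 255] and using an assumed mean-absolute-difference threshold:

```python
def sample_dynamic(frames, threshold=0.1):
    """Keep a frame only if it differs enough from the last kept frame.

    `frames` is an iterable of equal-length pixel sequences (flattened
    grayscale values in [0, 255]); `threshold` is the mean absolute pixel
    difference, normalized to [0, 1], above which a frame is kept.
    """
    kept = []
    last = None
    for frame in frames:
        if last is None:
            kept.append(frame)  # always keep the first frame
            last = frame
            continue
        # Mean absolute pixel difference, normalized to [0, 1].
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / (255.0 * len(frame))
        if diff > threshold:
            kept.append(frame)
            last = frame
    return kept
```

A production pipeline would compute the difference with numpy or a perceptual hash rather than a Python loop, but the thresholding logic is the same: thousands of identical anchor shots collapse to a single representative frame.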


Using Google's Speech Recognition And Natural Language APIs To Thematically Analyze Television

#artificialintelligence

Television news is typically thought of as a visual medium, yet most of the narrative we consume from it arrives as spoken narration. Watching a news show with the audio muted and closed captioning off reinforces that the visual elements of television act more as enrichment than as the primary conveyor of information. Quantifying this spoken narrative is therefore imperative to understanding what television news pays attention to and how it frames and covers events. Using Google's Cloud Speech-to-Text API to transcribe a week of television news coverage and annotating it with Google's Natural Language API, what might we learn about how television news covers the world? In the United States, most television stations provide closed captioning for their news programming, meaning broadcasts already come with a textual, human-produced transcript.


Comparing Google's AI Speech Recognition To Human Captioning For Television News

#artificialintelligence

Most television stations still rely on human transcription to generate the closed captioning for their live broadcasts. Yet even with the benefit of human fluency, this captioning can vary wildly in quality, even within the same broadcast, from a nearly flawless rendition to near-gibberish. At the same time, automatic speech recognition has historically struggled to achieve sufficient accuracy to entirely replace human transcription. Using a week of television news from the Internet Archive's Television News Archive, how does the stations' primarily human-created closed captioning compare with machine transcripts generated by Google's Cloud Speech-to-Text API? Automated high-quality captioning of live video is one of the holy grails of machine speech recognition. While machine captioning systems have improved dramatically over the years, a substantial gap has remained between them and full human accuracy.
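The standard yardstick for such a comparison is word error rate (WER): the word-level edit distance between a reference transcript and a hypothesis, divided by the reference length. A self-contained sketch (the comparison's actual alignment procedure may differ):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via Levenshtein distance over whitespace-split word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 1.0 for badly garbled hypotheses, and real evaluations typically normalize punctuation and casing before scoring, since caption style differs from ASR output.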


Using Google Vision AI's Reverse Image Search To Richly Catalog Television News

#artificialintelligence

Deep learning has revolutionized the machine understanding of imagery. Yet today's image recognition models are still limited by the availability of large annotated training datasets upon which to build their libraries of recognized objects and activities. To address this, Google's Vision AI API expands its native catalog of around 10,000 visually recognized objects and activities with the equivalent of a reverse Google Images search across the open Web, tallying the top topics used to caption a given image everywhere it has previously appeared. This lends unprecedentedly rich context and understanding, even yielding unique labels for breaking news events. What might this process yield for a week of television news? Google's Vision AI API is a unique hybrid: traditional deep learning-based image labeling built on a library of previously trained models, combined with the ability to leverage the open Web to annotate images with the most common topics used to caption visually similar images. Using its Web Entities feature, the Vision AI API performs what amounts to a reverse Google Images search over the open Web, identifying the images across the entire Web that look most similar to the given image.
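At its core, the aggregation behind Web Entities amounts to tallying the caption topics attached to visually similar images wherever they appear on the Web and keeping the most frequent ones. A toy sketch of just that tallying step (the API call that retrieves the per-page captions is omitted, and the function name is hypothetical):

```python
from collections import Counter

def top_web_entities(caption_lists, n=3):
    """Given one list of caption topics per Web page where a visually
    similar image appeared, return the n most frequent topics overall --
    mimicking the aggregation step behind a reverse-image-search lookup."""
    counts = Counter()
    for captions in caption_lists:
        counts.update(captions)
    return counts.most_common(n)
```

Because the topics come from how the wider Web captions similar imagery, a breaking-news frame can pick up labels (a storm's name, an event location) that no pretrained object-recognition catalog would contain.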


APIs and open banking win most airtime at Sibos 2016

#artificialintelligence

Discussions during Swift's annual Sibos conference in Geneva were dominated by talk of open banking and open APIs, which create both opportunities and challenges for banks. This is the first of five key takeaways from the event explored in Finextra's review of Sibos 2016, produced in association with SAP and published today. It was clear from Finextra's interviews with industry leaders during the event, as well as from the discussions on the main stages of the conference, that banks increasingly understand that open banking, underpinned by open APIs, is an opportunity for them. However, as the new report details, it was also clear that banks face a challenge in ensuring their systems can handle open APIs. Open banking will be a spur to further collaboration between banks and fintechs, as will a number of other key drivers currently shaping the industry, as the report examines.