This year's Oscar hopefuls include some names you might not expect among the nominees: Respawn Entertainment and Oculus Studios. No, the developer of the video games "Titanfall" and "Apex Legends," and Facebook's virtual reality studio are not nominated for best visual effects. "Colette," a short film the companies co-produced along with the U.K. media outlet The Guardian, is nominated for best short documentary at the Academy Awards, being doled out Sunday. The 25-minute film tells the story of Colette Marin-Catherine, 92, who was part of the French resistance during World War II. Along with French history student Lucie Fouble, she visits the Mittelbau-Dora concentration camp in Germany where her brother Jean-Pierre died in 1945, three weeks before Allied troops freed prisoners there.
Video is on an exponential growth trajectory, and it's not just Netflix originals and HBO docs and new films on Amazon Prime. In today's world, when people aren't eating or sleeping (or perhaps even when they are), they are likely watching a video. Each day, people watch over 1 billion hours of YouTube. Creating and delivering movies, news and other compelling visual content is no longer just for the Hollywood elite. In fact, some of today's most prolific storytellers are doing so with few resources and amateur tools.
Global spending on artificial intelligence (AI) is estimated to reach $118.6 billion by 2025 [source: www.statista.com]. Business Wire research found that spending on cloud AI in the media and entertainment (M&E) industry is anticipated to grow from $329 million in 2019 to $1,860.9 million by 2025, and the AI market for social media is estimated to reach $3,714.89 million by 2025, growing at a 28.77% CAGR. Here are some examples of how AI is changing the media landscape.
Most conversational recommendation approaches are either not explainable, require external user knowledge to produce explanations, or cannot generate explanations in real time due to computational limitations. In this work, we present a real-time, category-based conversational recommendation approach that provides concise explanations without requiring prior user knowledge. We first build an explainable user model in the form of preferences over the items' categories, and then use these category preferences to recommend items. The user model is learned by applying a BERT-based neural architecture to the conversation; we then translate the user model into item recommendation scores using a feed-forward network. User preferences during the conversation are represented by category vectors, which are directly interpretable. Experimental results on the real conversational recommendation dataset ReDial demonstrate performance comparable to the state of the art, while our approach remains explainable. We also show the potential power of our framework in an oracle setting of category preference prediction. Keywords: Conversational Recommendation · Category Preference Based Recommendation · Explainable Conversational Recommendation · Cold Start Explainable Recommendation.
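The core idea in the abstract, scoring items from an interpretable vector of category preferences, can be illustrated with a toy sketch. This is not the paper's implementation (which infers preferences with a BERT-based model and scores items with a learned feed-forward network); the catalog, category names, preference values, and the simple dot-product scorer below are all hypothetical stand-ins, but the explanation mechanism, reading a recommendation's rationale directly off the category-preference vector, is the same.

```python
# Toy illustration of category-preference-based recommendation with
# built-in explanations. Catalog and preferences are hypothetical; the
# paper learns preferences from the conversation with a BERT-based model
# and maps them to scores with a feed-forward network, not a dot product.
CATEGORIES = ["comedy", "horror", "romance", "sci-fi"]

# Item -> multi-hot vector of category memberships.
ITEMS = {
    "The Martian":  [0, 0, 0, 1],
    "Scary Movie":  [1, 1, 0, 0],
    "Notting Hill": [1, 0, 1, 0],
}

def score(item, pref):
    """Score an item as the sum of the user's preference for its categories."""
    return sum(p * m for p, m in zip(pref, ITEMS[item]))

def recommend(pref):
    """Return the top-scoring item plus a human-readable explanation."""
    best = max(ITEMS, key=lambda item: score(item, pref))
    liked = [c for c, m, p in zip(CATEGORIES, ITEMS[best], pref) if m and p > 0]
    return best, "recommended because you like: " + ", ".join(liked)

# Preference vector inferred from the conversation (here: hand-written).
pref = [0.4, -0.5, 0.1, 0.9]   # comedy, horror, romance, sci-fi
item, why = recommend(pref)
print(item, "->", why)          # The Martian -> recommended because you like: sci-fi
```

Because the explanation is derived from the same category vector used for scoring, it is faithful to the model's actual decision rather than a post-hoc rationalization.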
When your 87-second short film prompts the director of Toy Story 3 to tweet praise calling it "one of the most amazing things I've ever seen," you know you've done something right. Filmed at Bryant Lake Bowl and Theatre in Minneapolis with one continuous drone shot, Jay Christensen's Right Up Our Alley is a stunning short film that's essentially a high-speed tour of a regular night at a bowling alley. At the time of writing, it's clocked up over 6.1 million views on Twitter and 660,000 views on YouTube and caught the attention of Hollywood star Elijah Wood and Guardians of the Galaxy director James Gunn, who both had similarly enthusiastic things to say about it. In terms of how it was made, Christensen confirmed on Instagram that the sound was added separately later. It's worth noting that Christensen has previous form when it comes to single-shot drone films -- you can watch his previous shorts, including one filmed at a movie theatre and one that follows a motorbike rider through an empty mall, on his YouTube channel.
To address the long-standing data sparsity problem in recommender systems (RSs), cross-domain recommendation (CDR) has been proposed to leverage information from a relatively richer domain to improve the recommendation performance in a sparser domain. Although CDR has been extensively studied in recent years, there is a lack of a systematic review of the existing CDR approaches. To fill this gap, in this paper, we provide a comprehensive review of existing CDR approaches, including challenges, research progress, and future directions. Specifically, we first summarize existing CDR approaches into four types: single-target CDR, multi-domain recommendation, dual-target CDR, and multi-target CDR. We then present the definitions and challenges of these CDR approaches. Next, we propose a full-view categorization and new taxonomies of these approaches and report their research progress in detail. In the end, we share several promising research directions in CDR.
Deep Nostalgia, a new service from the genealogy site MyHeritage that animates old family photos, has gone viral on social media, in another example of how AI-based image manipulation is becoming increasingly mainstream. Launched in late February, the service uses an AI technique called deep learning to automatically animate faces in photos uploaded to the system. Because of its ease of use and free trial, it soon took off on Twitter, where users uploaded animated versions of old family photos, celebrity pictures, and even drawings and illustrations. "It makes me so happy to see him smile again!" one user wrote. MyHeritage's own promotional tweet reads: "Try our new #DeepNostalgia #PhotoAnimation feature for yourself and prepare to be AMAZED!!! https://t.co/p3h600G3MX"
Creating a movie trailer takes time, and most broadcasters and streaming platforms don't have enough resources to do it. Their creative team, responsible for putting together promotional material for digital and social media, spends very little time being creative. Producing a 30-second rough edit for a 10-movie stunt takes about five days of viewing and logging. Teams also use and manage external agencies, leading to bottlenecks where one department's output is heavily prioritized over another's. The whole process is onerous, time consuming, and inefficient. PromoMii, a UK startup, solves this problem with a unique blend of domain expertise and machine learning (ML). Their product Nova provides functionality to search for scenes or specific dialogues across a content library. Productivity is supercharged with template queries, enabling creatives to finish their spot in minutes instead of days. Nordic Entertainment Group, one of PromoMii's customers, found that it was 10 times cheaper and 20 times faster to create trailers with Nova. A promotion that would usually take two days to produce was completed within two hours. This blog post is the first in a series of startup ML stories, where we tell stories like PromoMii's through three crucial ingredients to building a successful business with ML: team, product, and partnership. PromoMii was founded by two Danes from Copenhagen to help large broadcasters promote their shows. Over time, working backwards from their customers, the company pivoted toward using artificial intelligence (AI) to enable creatives to be creative. The technological challenge inspired Tigran Mnatskanyan, CTO, to join PromoMii with a mission of building a great engineering team and crafting the content creation platform of the future. On the domain expertise side, PromoMii's Chairman is Lester Mordue, an award-winning creative director bringing experience from MTV, Sky, Disney, and Discovery.
As a creative himself, Lester immediately saw the benefits of Nova and is in a unique position to open doors for the business and provide guidance on product-market fit. "In my career, I've sat in boardrooms looking at tech and marketing ROI as well as sitting in edit suites looking for inspiration and story hooks," said Lester. "Viewers enjoy on-demand services and streaming platforms, and so too should marketeers who help make viewing decisions."
Models for question answering, dialogue agents, and summarization often interpret the meaning of a sentence in a rich context and use that meaning in a new context. Taking excerpts of text can be problematic, as key pieces may not be explicit in a local window. We isolate and define the problem of sentence decontextualization: taking a sentence together with its context and rewriting it to be interpretable out of context, while preserving its meaning. We describe an annotation procedure, collect data on the Wikipedia corpus, and use the data to train models to automatically decontextualize sentences. We present preliminary studies that show the value of sentence decontextualization in a user-facing task, and as preprocessing for systems that perform document understanding. We argue that decontextualization is an important subtask in many downstream applications, and that the definitions and resources provided can benefit tasks that operate on sentences that occur in a richer context.
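The task definition can be made concrete with a small, hypothetical example pair (illustrative only, not drawn from the paper's Wikipedia-derived dataset): the rewrite resolves the pronoun so the sentence stands alone while its meaning is preserved.

```python
# Hypothetical decontextualization example: the rewrite replaces the
# pronoun with its referent so the sentence is interpretable out of
# context. The helper below is a toy surface check, not the paper's
# trained model or evaluation procedure.
example = {
    "context": "Marie Curie was born in Warsaw in 1867. "
               "She moved to Paris in 1891 to study physics.",
    "sentence": "She moved to Paris in 1891 to study physics.",
    "decontextualized": "Marie Curie moved to Paris in 1891 to study physics.",
}

def starts_self_contained(sentence):
    """Toy check: a decontextualized sentence shouldn't open with a bare pronoun."""
    first = sentence.split()[0].rstrip(",.").lower()
    return first not in {"she", "he", "it", "they", "this", "that", "these", "those"}

print(starts_self_contained(example["sentence"]))          # False
print(starts_self_contained(example["decontextualized"]))  # True
```

A real system must handle far more than pronouns (elided arguments, bridging references, discourse connectives), which is why the paper trains models on annotated data rather than relying on surface rules like this one.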
Apps have taken over dating. Gone is the stigma of using a service like Match.com or OKCupid to find a partner -- nowadays, finding someone via Tinder, Bumble or Hinge is the norm. Swiping mindlessly through potential lovers is so common we now do it whether we're alone or hanging out with friends or even during another date. If you've ever sat down with a friend and asked to go through people on a dating app with them, Searchers is a film for you. If you're one of the lucky people who have never had to use a dating app and are curious about the experience, Searchers is for you.