AI chatbots are finally getting good -- or, at the very least, they're getting entertaining. Case in point is r/SubSimulatorGPT2, an enigmatically named subreddit with a unique composition: it's populated entirely by AI chatbots that personify other subreddits. To create a chatbot, you start by feeding it training data. Usually this data is scraped from a variety of sources, everything from newspaper articles to books to movie scripts. But on r/SubSimulatorGPT2, each bot has been trained on text collected from a specific subreddit, meaning that the conversations they generate reflect the thoughts, desires, and inane chatter of different groups on Reddit.
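Fine-tuning GPT-2 itself is beyond a short snippet, but the core idea -- training a text generator on one community's corpus so its output echoes that community's voice -- can be illustrated with a toy bigram Markov chain. This is a deliberately simple stand-in, not the subreddit's actual method, and the corpus below is invented:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the bigram chain from a start word, sampling followers."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy stand-in corpus; a real bot would train on scraped subreddit posts.
corpus = "the cat sat on the mat and the cat ran"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The more training text a given "personality" has, the more its generated chatter sounds like the source community -- the same intuition, at vastly smaller scale, behind the per-subreddit GPT-2 bots.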
I also did some experimentation with GRUs and LSTMs in an NLP context, where LSTMs performed better than GRUs, though they needed more training time. Honestly, I never tried fully variable-length sequences, because of the restriction that every sequence in a batch must be the same length, and some layers are not usable with variable-length sequences. I don't think the difference would be huge, at least on my data. I experimented with different sequence lengths (100, 200, 250, 400, 500), and 400 and 500 did not perform better than 250. I did, however, achieve a noticeable performance improvement with embeddings instead of one-hot encoding.
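The embedding-vs-one-hot point has a simple explanation: an embedding lookup computes exactly what multiplying a one-hot vector by the embedding matrix would, but as a single row read instead of a full matrix product. A minimal pure-Python sketch (tiny invented vocabulary and weights):

```python
def one_hot(index, size):
    """One-hot encode a token id as a float vector."""
    vec = [0.0] * size
    vec[index] = 1.0
    return vec

def matvec(vec, matrix):
    """Multiply a row vector by a (vocab x dim) matrix."""
    dim = len(matrix[0])
    return [sum(vec[i] * matrix[i][j] for i in range(len(matrix)))
            for j in range(dim)]

# A tiny (vocab=4, dim=2) embedding table.
emb = [[0.1, 0.2],
       [0.3, 0.4],
       [0.5, 0.6],
       [0.7, 0.8]]

token = 2
# One-hot route: O(vocab * dim) multiply-adds per token.
via_one_hot = matvec(one_hot(token, 4), emb)
# Embedding route: a single O(dim) row lookup.
via_lookup = emb[token]
assert via_one_hot == via_lookup  # identical result, far less work
```

With realistic vocabularies (tens of thousands of tokens), skipping the one-hot product saves both memory and compute, which is consistent with the performance improvement described above.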
How are you using Watson in your business? We wanted to improve the candidate experience by creating interactions with job seekers visiting our career site, as well as increase the number of applications we receive for hard-to-fill roles. Watson Candidate Assistant answers general questions about working at NBCUniversal, and it recommends jobs based on keyword matching between openings and the job seeker's resume. Candidates using a traditional job search may look by functional area or job title, but those might not match our company's vernacular. We can now drive candidates to roles they might not have found.
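Watson Candidate Assistant's internals aren't public, but the keyword-matching idea described here can be sketched as a simple overlap scorer between resume text and job descriptions. The job titles, descriptions, and scoring rule below are all invented for illustration:

```python
import re

def tokens(text):
    """Lowercase word tokens from free text."""
    return set(re.findall(r"[a-z]+", text.lower()))

def recommend(resume, openings, top_n=2):
    """Rank job openings by keyword overlap with a resume."""
    resume_words = tokens(resume)
    scored = [(len(resume_words & tokens(desc)), title)
              for title, desc in openings.items()]
    scored.sort(reverse=True)
    # Keep only openings that share at least one keyword.
    return [title for score, title in scored[:top_n] if score > 0]

openings = {
    "Broadcast Engineer": "maintain broadcast video transmission systems",
    "Data Analyst": "analyze audience data with python and sql",
    "Set Designer": "design sets for television production",
}
resume = "Experienced in Python, SQL, and audience data analysis."
print(recommend(resume, openings))  # -> ['Data Analyst']
```

A production system would add synonym handling and weighting (so "functional area" vocabulary maps onto the company's vernacular), but even this bare version shows how a resume can surface roles a title-based search would miss.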
Television news coverage is typically thought of as a visual medium, yet most of the narrative we consume from television comes in the form of spoken narration. Watching a news show with the audio muted and closed captioning off reinforces that the visual elements of television act more as enrichment than as the primary conveyor of information. This means that quantifying this spoken narrative is imperative to understanding what television news is paying attention to and how it is framing and covering those events. Using Google's Cloud Speech-to-Text API to transcribe a week of television news coverage and annotating it with Google's Natural Language API, what might we learn about how television news covers the world? In the United States, most television stations provide closed captioning for their news programming, meaning those broadcasts already come with a textual human-produced transcript.
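The Cloud APIs themselves require credentials, but once transcripts are in hand, a first pass at "quantifying the spoken narrative" can be as simple as counting topic mentions. A minimal sketch, with an invented transcript snippet standing in for real Speech-to-Text output:

```python
import re
from collections import Counter

def mention_counts(transcript, topics):
    """Count how often each topic word is mentioned in a transcript."""
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(words)
    return {topic: counts[topic] for topic in topics}

# Invented snippet standing in for a real news transcript.
transcript = ("The economy grew this quarter. Analysts say the economy "
              "may slow, while the election dominates coverage.")
print(mention_counts(transcript, ["economy", "election", "weather"]))
# -> {'economy': 2, 'election': 1, 'weather': 0}
```

Replacing raw word counts with the entities and sentiment returned by the Natural Language API is the natural next step toward measuring not just what is covered but how it is framed.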
Aerospace & Defense Industry to See Greatest Impact from Artificial Intelligence Compared to Other Key Emerging Technologies, Accenture Report Finds

Study underscores the need for reskilling in the sector for future competitiveness

NEW YORK; June 13, 2019 – The aerospace and defense (A&D) industry will be more affected by artificial intelligence (AI) than by any other major emerging technology over the next three years, according to Aerospace & Defense Technology Vision 2019, the annual report from Accenture (NYSE: ACN) that predicts key technology trends likely to redefine business. The study also underscores the growing importance of reskilling programs as a competitive lever. AI, comprising technologies that range from machine learning to natural language processing, enables machines to sense, comprehend, act and learn in order to extend human capabilities. One-third (33%) of A&D executives surveyed cited AI as the technology that will have the greatest impact on their organization over the next three years -- more than quantum computing, distributed ledger or extended reality. In fact, two-thirds (67%) of A&D executives said they have either adopted AI within their business or are piloting the technology.
Graph neural networks have become the new fashion in many graph-based learning problems. As the team behind this library, we want to share with you the new release of DGL (v0.3), which is much faster (up to 19x) and more scalable for training GNNs on large graphs (up to 8x larger). For those who have never heard of DGL or graph neural networks, it may be worth taking a look at this new trend of geometric deep learning. Check out more about how a variety of models can be unified under the message passing framework and implemented in DGL (https://docs.dgl.ai/tutorials/models/index.html). Our project site: https://www.dgl.ai/ .
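DGL implements message passing with optimized kernels, but the underlying idea is compact: each node gathers ("reduces") messages from its neighbors and combines them with its own state. A bare-bones pure-Python illustration of one round, on a tiny invented graph (not DGL's actual API):

```python
def message_passing_step(adjacency, features):
    """One round of message passing: each node sums its neighbors'
    feature vectors (the reduce step) together with its own (the update)."""
    new_features = {}
    for node, feat in features.items():
        msgs = [features[nbr] for nbr in adjacency[node]]
        summed = [sum(vals) for vals in zip(feat, *msgs)]
        new_features[node] = summed
    return new_features

# A triangle graph with 2-dimensional node features.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
print(message_passing_step(adj, feats))
# -> {0: [2.0, 2.0], 1: [2.0, 2.0], 2: [2.0, 2.0]}
```

Real GNN layers interleave such aggregation with learned transformations and nonlinearities; the speed and scalability gains in DGL v0.3 come from executing exactly this gather-and-reduce pattern efficiently over large sparse graphs.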
With FMX 2019 having just wrapped up on May 3 in Stuttgart, Germany, we're pleased to present our latest set of exclusive interviews with some of the talented speakers who presented over the packed four-day conference, a group of top industry professionals who converge on the event to lecture, teach and network. Watch and enjoy our insightful and entertaining AWN @ FMX 2019 Professional Spotlight video series featuring some of the biggest names in animation, visual effects, computer graphics and transmedia. Stay tuned as we add dozens of brand new interviews over the coming weeks. You can also check out last year's AWN - FMX 2018 interviews, our set from FMX 2017, as well as the complete collection of AWN - FMX Professional Spotlight videos, plus AWN's great FMX 2019 coverage on our FMX Conference Spotlight blog.
We know content should be valuable, comprehensive, new, relevant, and accurate. Even more fundamental than all of those things is that it needs to be authentic. People trust factual information presented with sincere intentions. This era of "fake news" has ushered in a lot of fear for this very reason; we have had to fight to build the authority of our pages and domains to signal that we are worthy of trust. But our industry has yet to face its biggest challenge.
Driven by the rise of transformative digital technologies and the proliferation of data, human storytelling is rapidly evolving in ways that challenge and expand our very understanding of narrative. Transmedia -- in which stories and data operate across multiple platforms and social transformations -- encompasses a wide range of theoretical, philosophical, and creative perspectives, and needs a shared critique around making and understanding. MIT's School of Architecture and Planning (SA+P), working closely with faculty in the MIT School of Humanities, Arts, and Social Sciences (SHASS) and others across the Institute, has launched the Transmedia Storytelling Initiative under the direction of Professor Caroline Jones, an art historian, critic, and curator in the History, Theory, Criticism section of SA+P's Department of Architecture. The initiative will build on MIT's bold tradition of art education, research, production, and innovation in media-based storytelling, from film through augmented reality. Supported by a foundational gift from David and Nina Fialkow, this initiative will create an influential hub for pedagogy and research in time-based media.