Keanu Reeves' 'BRZRKR' Being Made Into Live-Action Movie And Anime Series, Netflix Announces

International Business Times

Keanu Reeves' comic book series "BRZRKR" will soon be turned into an anime series and live-action movie. NXOnNetflix, Netflix's "home of all things geek," tweeted Monday to announce that the epic saga will come to life on its streaming platform. "Netflix is developing a live action film AND follow-up anime series based on Keanu Reeves' BRZRKR, a brutally epic saga about an immortal warrior's 80,000 year fight through the ages. Reeves will produce and star in the film, and voice the anime," NXOnNetflix said in its post. Reeves will star in and produce the live-action movie and voice the character in the anime series. "BRZRKR" was created by Reeves and co-written with New York Times bestselling writer Matt Kindt. It is illustrated by Marvel artist Ron Garney, with colors by Bill Crabtree. "The man known only as Berzerker is half-mortal and half-God, cursed and compelled to violence...even at the sacrifice of his sanity," its Google Books description reads. "But after wandering the world for centuries, Berzerker may have finally found a refuge – working for the U.S. government to fight the battles too violent and too dangerous for anyone else."


When Hackers Were Heroes

Communications of the ACM

Forty years ago, the word "hacker" was little known. Its march from obscurity to newspaper headlines owes a great deal to tech journalist Steven Levy, who in 1984 defied the advice of his publisher to call his first book Hackers: Heroes of the Computer Revolution [11]. Hackers were a subculture of computer enthusiasts for whom programming was a vocation and playing around with computers constituted a lifestyle. Hackers was published only three years after Tracy Kidder's The Soul of a New Machine, explored in my last column (January 2021, p. 32–37), but a lot had changed during the interval. Kidder's assumed readers had never seen a minicomputer, still less designed one. By 1984, in contrast, the computer geek was a prominent part of popular culture. Unlike Kidder, Levy had to make people reconsider what they thought they already knew. Computers were suddenly everywhere, but they remained unfamiliar enough to inspire a host of popular books pondering the personal and social transformations triggered by the microchip. The short-lived home computer boom had brought computer programming into the living rooms and basements of millions of middle-class Americans, sparking warnings about the perils of computer addiction. A satirical guide published the same year warned of "micromania" [15]. The year before, the film WarGames had suggested that computer-obsessed youth might accidentally trigger nuclear war.


All NLP Tasks Are Generation Tasks: A General Pretraining Framework

arXiv.org Artificial Intelligence

There have been various types of pretraining architectures, including autoregressive models (e.g., GPT), autoencoding models (e.g., BERT), and encoder-decoder models (e.g., T5). NLP tasks, meanwhile, differ in nature, with three main categories: classification, unconditional generation, and conditional generation. However, no single pretraining framework performs best across all tasks, which is inconvenient for model development and selection. We propose a novel pretraining framework, GLM (General Language Model), to address this challenge. Compared to previous work, our architecture has three major benefits: (1) it performs well on classification, unconditional generation, and conditional generation tasks with one single pretrained model; (2) it outperforms BERT-like models on classification due to improved pretrain-finetune consistency; (3) it naturally handles variable-length blank filling, which is crucial for many downstream tasks. Empirically, GLM substantially outperforms BERT on the SuperGLUE natural language understanding benchmark with the same amount of pretraining data. Moreover, GLM with 1.25× the parameters of BERT-Large achieves the best performance in NLU, conditional generation, and unconditional generation at the same time, which demonstrates its generalizability to different downstream tasks.
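The "variable-length blank filling" mentioned in the abstract can be made concrete with a small sketch. The function below is an illustrative assumption rather than code from the GLM paper: it shows the general shape of autoregressive blank infilling, where each sampled span is collapsed to a single mask token in the input and the removed spans then become the generation target. The token names [MASK], [START], and [END] and the helper make_blank_infilling_example are hypothetical placeholders.

```python
import random

# Hypothetical special tokens; the real GLM vocabulary and masking
# procedure are defined in the paper and its released code.
MASK, START, END = "[MASK]", "[START]", "[END]"

def make_blank_infilling_example(tokens, spans, seed=0):
    """Build one illustrative autoregressive-blank-infilling example.

    tokens : list of tokens
    spans  : list of (start, end) index pairs to blank out (end exclusive)
    Returns (corrupted_input, infill_target): the model reads the corrupted
    input and generates the blanked spans left to right.
    """
    spans = sorted(spans)
    corrupted, removed, cursor = [], [], 0
    for start, end in spans:
        corrupted.extend(tokens[cursor:start])
        corrupted.append(MASK)            # each span collapses to one [MASK]
        removed.append(tokens[start:end])
        cursor = end
    corrupted.extend(tokens[cursor:])

    # Spans are generated in a shuffled order, each framed by [START]/[END],
    # which is what lets one model cover blanks of arbitrary length.
    random.Random(seed).shuffle(removed)
    target = []
    for span in removed:
        target.extend([START, *span, END])
    return corrupted, target

corrupted, target = make_blank_infilling_example(
    ["GLM", "unifies", "understanding", "and", "generation", "tasks"],
    spans=[(1, 2), (4, 6)],
)
print(corrupted)  # ['GLM', '[MASK]', 'understanding', 'and', '[MASK]']
print(target)     # e.g. ['[START]', 'generation', 'tasks', '[END]', '[START]', 'unifies', '[END]']
```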


Coded Bias: New PBS Documentary Explores Gender & Racial Bias in AI

#artificialintelligence

An upcoming PBS documentary dives deep into the controversy surrounding bias in artificial intelligence (AI). Coded Bias explores MIT Media Lab researcher Joy Buolamwini's shocking discovery that facial recognition systems often fail to see women and dark-skinned faces accurately. The 90-minute film covers her push for U.S. government legislation against bias in the algorithms that are becoming increasingly prevalent in modern-day society. Directed by award-winning filmmaker Shalini Kantayya, Coded Bias will premiere on PBS and the PBS video app on March 22. Kantayya tells the story of dynamic women leading the fight for the ethical use of AI. She profiles data scientists, mathematicians, ethicists, and everyday citizens from around the world who have been affected by these disruptive technologies and are fighting to shed light on the impact of unconscious bias in artificial intelligence.


We Live in the World of "WandaVision"

The New Yorker

If, like Wanda Maximoff, you've been living in your own reality, distant from all things in 2021, you may not have heard about "WandaVision," whose first and only season ended on March 5th. The immensely popular show, from Disney and Marvel Studios, follows Wanda, a.k.a. the Scarlet Witch, an Eastern European refugee with "chaos magic" powers, and her husband Vision, a synthezoid (android) who died in the events of the Marvel movie "Avengers: Infinity War." Nearly all nine episodes of "WandaVision" depict the pair in what appears to be domestic suburban bliss. Nearly all take their plots and visual style from one of the sitcoms that Wanda watched for solace during her bleak wartime youth, from the black-and-white of "The Dick Van Dyke Show" to the faux-reality vibe of "The Office." These anachronistic, self-contained sitcom scenarios fall apart as people from the outside world break in.


Topical Language Generation using Transformers

arXiv.org Artificial Intelligence

Large-scale transformer-based language models (LMs) demonstrate impressive capabilities in open text generation. However, controlling properties of the generated text such as the topic, style, and sentiment is challenging and often requires significant changes to the model architecture or retraining and fine-tuning the model on new supervised data. This paper presents a novel approach to Topical Language Generation (TLG) that combines a pre-trained LM with topic modeling information. We cast the problem in a Bayesian formulation, with topic probabilities as the prior, LM probabilities as the likelihood, and the topical language generation probability as the posterior. In learning the model, we derive the topic probability distribution from the user-provided document's natural structure. Furthermore, we extend our model by introducing new parameters and functions that influence the quantity of topical features presented in the generated text. This allows us to easily control the topical properties of the generated text. Our experimental results demonstrate that our model outperforms state-of-the-art results on coherency, diversity, and fluency while being faster in decoding.
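The Bayesian reading in the abstract (posterior proportional to likelihood times prior) can be illustrated with a short sketch: the LM's next-token distribution supplies the likelihood, and a topic's token probabilities reweight it as a prior. The code below is an assumed illustration, not the paper's implementation; lm_logits, topic_token_probs, and the strength parameter gamma are hypothetical placeholders.

```python
import numpy as np

def topical_next_token_probs(lm_logits, topic_token_probs, gamma=1.0):
    """Combine an LM's next-token distribution with a topic prior.

    lm_logits         : (vocab_size,) raw next-token logits from a pretrained LM
                        (the likelihood term in the Bayesian reading above)
    topic_token_probs : (vocab_size,) probability of each token under the chosen
                        topic, e.g. one row of a topic-word matrix (the prior)
    gamma             : how strongly the topic reweights the LM distribution
    Returns a normalized distribution over the vocabulary.
    """
    log_lm = lm_logits - np.logaddexp.reduce(lm_logits)    # log P(token | context)
    log_topic = np.log(topic_token_probs + 1e-12)          # log P(token | topic)
    log_posterior = log_lm + gamma * log_topic             # posterior ∝ likelihood * prior^gamma
    log_posterior -= np.logaddexp.reduce(log_posterior)    # renormalize
    return np.exp(log_posterior)

# Toy example over a 5-token vocabulary: the topic puts most of its mass on
# tokens 3 and 4, so the combined distribution shifts toward them.
lm_logits = np.array([2.0, 1.5, 1.0, 0.5, 0.2])
topic_token_probs = np.array([0.05, 0.05, 0.10, 0.40, 0.40])
print(topical_next_token_probs(lm_logits, topic_token_probs, gamma=1.0))
```

Sampling from this adjusted distribution at each decoding step steers generation toward the chosen topic without retraining the underlying LM, which is the kind of control the abstract describes.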


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Here are 5 Mars-themed movies worth watching tonight

USATODAY - Tech Top Stories

Countless Americans watched with awe as NASA's latest robotic explorer, the Perseverance rover, landed safely on Mars earlier today. While you might not be on a mission to Mars personally, you can still make a sojourn to the Red Planet without ever ditching your sweats (or, more importantly, leaving the couch), courtesy of these five movies streaming now on major platforms like Hulu and Disney+. From green-suited space invaders firing ray guns to survival sagas pitting man against an inhospitable wilderness, these movies about Mars are an absolute must-watch for anyone who's still got space travel on the mind tonight. If Martians actually landed on Earth, would they be friends... or foes?


Patterns, predictions, and actions: A story about machine learning

arXiv.org Machine Learning

This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions. Starting with the foundations of decision making, we cover representation, optimization, and generalization as the constituents of supervised learning. A chapter on datasets as benchmarks examines their histories and scientific bases. Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences. Throughout, the text discusses historical context and societal impact. We invite readers from all backgrounds; some experience with probability, calculus, and linear algebra suffices.


Natural Language Processing

#artificialintelligence

Chapter 5 of this free 15-chapter AI handbook provides an overview of natural language processing.