Amazon's Echo Show 10 is available for preorder

USATODAY - Tech Top Stories

Amazon's new third-generation Echo Show 10, the first Alexa device with a motorized swiveling display, is now available for preorder, with deliveries starting February 25. The new model, announced in September 2020, retails for $249.99 and comes in two colors: charcoal and glacier white. With the Echo Show 10, you can stream your favorite shows, follow along with recipes, call your friends and family, and more. The main draw is its smart motion: the touch-enabled display and embedded camera rotate atop a round base, allowing the built-in smart technology to keep the camera and screen in your line of sight automatically.

A Distributional Approach to Controlled Text Generation Artificial Intelligence

We propose a Distributional Approach to address Controlled Text Generation from pre-trained Language Models (LMs). This view makes it possible to define, in a single formal framework, "pointwise" and "distributional" constraints over the target LM -- to our knowledge, this is the first approach with such generality -- while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation we then train the target controlled autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints, showing the advantages of our approach over a set of baselines in terms of obtaining a controlled LM that balances constraint satisfaction with divergence from the initial LM (GPT-2). We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of bias in Language Models. Through an ablation study we show the effectiveness of our adaptive technique in obtaining faster convergence.
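As a toy numeric illustration of the distributional idea (not the paper's actual method or code; the distribution, feature, and target below are invented), one can tilt a base distribution p(x) by an exponential factor exp(λ·φ(x)) -- the EBM form the abstract refers to -- and solve for λ so that the expected feature value hits a distributional target:

```python
import math

# Toy base LM distribution over four "tokens" (invented numbers).
p = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
phi = {"a": 1.0, "b": 0.0, "c": 1.0, "d": 0.0}  # binary feature of interest
target = 0.3  # desired E[phi] under the controlled distribution

def tilted(lam):
    """EBM: P(x) proportional to p(x) * exp(lam * phi(x)), normalized."""
    w = {x: p[x] * math.exp(lam * phi[x]) for x in p}
    z = sum(w.values())
    return {x: w[x] / z for x in w}

def expected_phi(lam):
    q = tilted(lam)
    return sum(q[x] * phi[x] for x in q)

# expected_phi is monotone increasing in lam, so bisection finds lam.
lo, hi = -20.0, 20.0
for _ in range(100):
    mid = (lo + hi) / 2
    if expected_phi(mid) < target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
q = tilted(lam)  # controlled distribution satisfying the constraint
```

Training an autoregressive LM to approximate such a target (the paper's adaptive Policy Gradient step) is a separate matter; this only shows how a distributional constraint plus KL-minimality pins down an exponentially tilted distribution.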

What is machine learning? Here's what you need to know about the branch of artificial intelligence and its common applications


Machine learning is a fast-growing and successful branch of artificial intelligence. In essence, machine learning is the process of allowing a computer system to teach itself how to perform complex tasks by analyzing large sets of data, rather than being explicitly programmed with a particular algorithm or solution. In this way, machine learning enables a computer to learn how to perform a task on its own and to continue to optimize its approach over time, without direct human input. In other words, it's the computer that is creating the algorithm, not the programmers, and often these algorithms are sufficiently complicated that programmers can't explain how the computer is solving the problem. Humans can't trace the computer's logic from beginning to end; they can only determine if it's finding the right solution to the assigned problem, which is output as a "prediction."
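A minimal sketch of that idea (the data and rule are invented for illustration): instead of programming the rule y = 2x + 1 directly, we let gradient descent recover the coefficients from examples, so the "algorithm" comes from the data.

```python
# Learn y = w*x + b from examples rather than hard-coding the rule.
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0          # the model starts knowing nothing
lr = 0.01                # learning rate
for _ in range(2000):    # repeatedly nudge w and b to reduce prediction error
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += err * x
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

prediction = w * 10 + b  # the model's "prediction" for an unseen input
```

The loop never sees the formula, only input/output pairs, yet w and b converge to the hidden values 2 and 1 -- a small-scale version of a computer optimizing its own solution from data.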

Today on Technology: Your Online Guidebook on Digital Transformation


"Today each organization must know how to build its digital capability. Because now every company is a software company, every organization is a digital organization." Recently, an article published by the Harvard Business Review gave holistic advice that, amid a technology renaissance, we ought not to forget our humanistic side. That is an unconventional opening for a write-up devoted entirely to the whole nine yards of tech, but since digital transformation services are about bringing change to an existing reality, they cannot exist without a touch of humanism. The latter half of the 20th century was the genesis of the 'Age of Information', when orthodox industrial techniques gave way to ever-evolving information technology. From analogue, everything turned digital. Let's understand it layer by layer. In simple terms, digital transformation is the impact and influence of technology on each and every business vertical. And when we say technology, we mean digital. But it doesn't restrict itself to that: it is equally a colossal cultural change that thrives on experimentation, brainstorming, challenging metacognitive skills, and coping with failure.

GPT-3 Creative Fiction


"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
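A sketch of that prompt-constraining technique (plain string construction; no particular API is assumed, and the passage and seed text are invented): frame the task in the prompt, then write the first words of the desired output yourself so the model's only natural continuation stays in the target mode.

```python
def build_prompt(passage, question, answer_start):
    """Constrain a text-completion model by imitating a correct output:
    frame the task, then seed the first words of the answer ourselves."""
    return (
        f'My second grader asked me what this passage means:\n"{passage}"\n'
        f"{question}\n"
        f"I rephrased it for him, in plain language a second grader can understand:\n"
        f"{answer_start}"
    )

prompt = build_prompt(
    passage="The mitochondria is the powerhouse of the cell.",
    question="What does this mean?",
    answer_start='"It means',  # seed so the completion stays in explanation mode
)
```

Because the prompt ends mid-answer, a completion model is pushed to finish the explanation rather than pivot into some other mode, which is the constraint the passage describes.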

Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning Machine Learning

Model-based reinforcement learning algorithms with probabilistic dynamical models are amongst the most data-efficient learning methods. This is often attributed to their ability to distinguish between epistemic and aleatoric uncertainty. However, while most algorithms distinguish these two uncertainties when learning the model, they ignore them when optimizing the policy. In this paper, we show that ignoring the epistemic uncertainty leads to greedy algorithms that do not explore sufficiently. In turn, we propose a practical optimistic-exploration algorithm, which enlarges the input space with hallucinated inputs that can exert as much control as the epistemic uncertainty in the model affords. We analyze this setting and construct a general regret bound for well-calibrated models, which is provably sublinear in the case of Gaussian Process models. Based on this theoretical foundation, we show how optimistic exploration can easily be combined with state-of-the-art reinforcement learning algorithms and different probabilistic models. Our experiments demonstrate that optimistic exploration significantly speeds up learning when there are penalties on actions, a setting that is notoriously difficult for existing model-based reinforcement learning algorithms.
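A toy numeric sketch of the hallucinated-input idea (not the paper's algorithm; all numbers are invented): alongside the real action, the planner also picks an auxiliary input η ∈ [-1, 1] that can move the predicted outcome anywhere within the model's epistemic confidence interval mean ± β·σ, and then optimizes over both.

```python
# Model prediction per candidate action: (mean predicted value, epistemic std).
# Invented numbers for illustration.
model = {
    "left":  (1.0, 0.1),   # well-explored: good mean, low uncertainty
    "right": (0.8, 0.5),   # poorly explored: lower mean, high uncertainty
}
beta = 2.0  # confidence-interval width

def optimistic_value(action):
    mean, std = model[action]
    # eta in [-1, 1] is the hallucinated input; maximizing over it yields
    # the upper confidence bound mean + beta * std (eta = +1).
    return max(mean + beta * eta * std for eta in (-1.0, 1.0))

greedy = max(model, key=lambda a: model[a][0])     # ignores epistemic uncertainty
optimist = max(model, key=optimistic_value)        # optimistic exploration
```

The greedy choice exploits the well-explored action, while the optimistic one is drawn to the action whose epistemic uncertainty still permits a higher outcome -- exactly the extra exploration the abstract argues the greedy policy lacks.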

[Online] AI/Machine Learning for beginners


This is a 1-week, 10-hour, part-time, instructor-led training offered in the evening (New York time zone) by 6FS, a San Francisco-based technology company. The program is built on the 6FS team's years of experience building large-scale solutions with various Big Data and AI/ML technologies. This is not a book-based training but a hands-on, interactive experience building apps with AI/ML, delivered by experienced startup CTOs. While learning basic concepts such as Python, Jupyter notebooks, model training, and human-powered labeling, you'll also work through practical problems and solutions based on how Dean and Adrian built the technology stacks at their previous startups. Let's build a project to gather data from a human-labeling service such as AWS SageMaker Ground Truth.
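A small sketch in the spirit of that project (the job name and records below are invented; the JSON Lines layout follows the shape of a Ground Truth augmented output manifest, where each line carries the data reference plus the label and its metadata under the job's name):

```python
import json

# Invented sample of a labeling job's output manifest ("my-job" is hypothetical).
manifest = """\
{"source-ref": "s3://bucket/img1.jpg", "my-job": 0, "my-job-metadata": {"class-name": "cat", "confidence": 0.94}}
{"source-ref": "s3://bucket/img2.jpg", "my-job": 1, "my-job-metadata": {"class-name": "dog", "confidence": 0.81}}
"""

def collect_labels(manifest_text, job_name):
    """Return (source, class name, confidence) for each human-labeled item."""
    rows = []
    for line in manifest_text.splitlines():
        rec = json.loads(line)
        meta = rec[f"{job_name}-metadata"]
        rows.append((rec["source-ref"], meta["class-name"], meta["confidence"]))
    return rows

labels = collect_labels(manifest, "my-job")
```

In a real project the manifest would be downloaded from the labeling job's S3 output location; parsing it into (input, label) pairs like this is the step that turns human labeling into training data.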

What Is Artificial Intelligence? Crash Course: Artificial Intelligence, Season 1, Episode 1 (08/13/2019)



Microsoft's latest breakthrough could make DNA-based data centers possible

Using Deep Reinforcement Learning for the Continuous Control of Robotic Arms Machine Learning

Deep reinforcement learning enables algorithms to learn complex behavior, deal with continuous action spaces, and find good strategies in environments with high-dimensional state spaces. With deep reinforcement learning being an active area of research with many concurrent inventions, we decided to focus on a relatively simple robotic task to evaluate a set of ideas that might help solve current reinforcement learning problems. We test a newly created combination of two commonly used reinforcement learning methods to see whether it learns more effectively than a baseline. We also compare different ideas for preprocessing information before it is fed to the reinforcement learning algorithm, with the goal of reducing training time and ultimately helping the algorithm converge. The concluding evaluation demonstrates the general applicability of the described concepts in a simulated environment, and these concepts may be reused in future experiments.
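One common preprocessing idea of the kind the abstract alludes to (a generic sketch, not the paper's specific method; the readings are invented) is normalizing observations with running statistics, so the learner always sees inputs near zero mean and unit variance:

```python
import math

class RunningNormalizer:
    """Normalize observations with running mean/variance (Welford's method)."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        std = math.sqrt(self.m2 / self.n) if self.n > 1 and self.m2 > 0 else 1.0
        return (x - self.mean) / std

norm = RunningNormalizer()
for obs in [10.0, 12.0, 8.0, 11.0, 9.0]:  # invented raw joint-angle readings
    norm.update(obs)
scaled = norm.normalize(10.0)  # centered and scaled observation
```

Feeding scaled rather than raw sensor readings to the policy network is a standard way to shorten training time, which is the stated goal of the preprocessing comparison.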