Facing Off World Model Backbones: RNNs, Transformers, and S4

Neural Information Processing Systems

World models are a fundamental component in model-based reinforcement learning (MBRL). To perform temporally extended and consistent simulations of the future in partially observable environments, world models need to possess long-term memory. However, state-of-the-art MBRL agents, such as Dreamer, predominantly employ recurrent neural networks (RNNs) as their world model backbone, which have limited memory capacity. In this paper, we seek to explore alternative world model backbones for improving long-term memory. In particular, we investigate the effectiveness of Transformers and Structured State Space Sequence (S4) models, motivated by their remarkable ability to capture long-range dependencies in low-dimensional sequences and their complementary strengths.


Facing a Changing Industry, AI Activists Rethink Their Strategy

WIRED

In the spring of 2018, thousands of Google employees pressured the company into dropping a major artificial intelligence contract with the Pentagon. The tech giant even pledged to not use its AI for weapons or certain surveillance systems in the future. The victory, which came amid a wave of unprecedented employee-led protests, helped inspire a new generation of tech activists in Silicon Valley. But seven years later, the legacy of that moment is more complicated. Google recently revised its AI ethics principles to allow some of the use cases it previously banned, and companies across the industry are releasing powerful new AI tools at breakneck speed.


AI Startup Buzz Is Facing a Reality Check

WSJ.com: WSJD - Technology

Venture investors are realizing that generative artificial intelligence might not be enough to stem the yearslong startup downturn.


Elon Musk's Humanoid Robot Is Facing Some Serious Skepticism

#artificialintelligence

Elon Musk confirmed in a tweet that the team working on Tesla's Autopilot is also developing the Tesla Bot, and the company is hiring a substantial number of people to work on the robotics project. Does that mean we're nearing a future in which human-like robots can be found in every pocket of society, performing tasks typically only humans can do? Not quite. In fact, Reuters reports that analysts and experts alike are "skeptical" that Tesla will be able to show off technology impressive enough to justify the project's expense. When it comes to AI-powered vehicles, we have a long way to go before we achieve full autonomy as defined by SAE International: under that definition, Tesla's Autopilot sits at level 2, although the company is on course to introduce level 3.


Facing a Classification Project in Machine Learning - WebSystemer.no

#artificialintelligence

After modeling, the next stage is always analyzing how our model is performing and why it is doing what it's doing. However, if you've had the chance to work with ensemble methods, you probably already know that these algorithms are commonly called "black-box models". Such models lack explainability and interpretability, since the way they work typically involves one or more layers of a machine making decisions without human supervision, beyond an initial set of rules or parameters. More often than not, not even the most expert professionals in the field can understand the function actually produced by, for example, training a neural network. In this respect, some of the most classical machine learning models fare better. That's why, for the purposes of this post, we'll be analyzing the feature importance of our project using a classic Logistic Regression.
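The idea above can be sketched with a minimal example: on standardized features, the magnitude of each logistic regression coefficient serves as a rough importance score. This is a generic illustration, not the post's actual project; the breast-cancer dataset here stands in for whatever tabular data the author used.

```python
# Sketch: feature importance from logistic regression coefficients.
# The dataset is a placeholder, assumed for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Standardize first so coefficient magnitudes are comparable across features.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(X.columns, coefs), key=lambda t: abs(t[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name}: {weight:+.3f}")
```

Unlike an ensemble's opaque decision path, each coefficient's sign and size have a direct reading: a positive weight pushes the prediction toward the positive class, and larger magnitude means larger influence per standard deviation of that feature.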


Artificial intelligence and ethics: Facing the challenges for WA businesses

#artificialintelligence

How does technology challenge WA businesses, and how can governance professionals proactively manage the risks? I work predominantly in the technology industry, and I can't afford to stand still. This is an industry that is continuously reinventing itself, and I've had to constantly re-educate myself to keep up with every new development. Widening this scope and reflecting on WA's dominant industries, there has been a shift from back-office technology (traditional IT) to more operational technology (OT). And the integration of IT and OT means technology is becoming an enterprise asset.


Facing the Future

#artificialintelligence

Most people believe that lifelong learning is critical for staying prepared as artificial intelligence transforms the job market worldwide. Yet, higher education, business and government are all failing to provide solutions.


Facing The Unknown Future Of Work As AI Changes The Rules Of Business

#artificialintelligence

Even as we read about the first layoffs blamed at least in part on automation, there is still cause for optimism. While easily automated jobs may fall by the wayside, it's important to remember that new jobs managing and leveraging artificial intelligence (AI) technology are being created. Titles like edge computing release manager, edge stream researcher and AI analytics executive did not exist until very recently. Earlier this year, I hired a vice president of AI and robotic process automation. How many of us thought even ten years ago that a role like this would be so central for business software development?


Science Is Facing A Brain Drain

#artificialintelligence

Science is facing a major generational issue in the coming years, and it's all our fault. For decades it's been the case that there are way more people interested in careers in science than there are permanent positions for them. There are a lot of undergrads, with some of them moving on to grad school. And some of those grad students will go on to take postdoctoral research positions, and some of the best and brightest postdocs will end up as distinguished members of faculty at respectable institutions. It's set up almost like a competition.