AI Is a Black Box. Anthropic Figured Out a Way to Look Inside


For the past decade, AI researcher Chris Olah has been obsessed with artificial neural networks. One question in particular has engaged him and has been at the center of his work, first at Google Brain, then at OpenAI, and today at AI startup Anthropic, where he is a cofounder. "What's going on inside of them?" he says. "We have these systems, we don't know what's going on."

That question has become a core concern now that generative AI is ubiquitous. Large language models like ChatGPT, Gemini, and Anthropic's own Claude have dazzled people with their language prowess and infuriated people with their tendency to make things up. Their potential to solve previously intractable problems enchants techno-optimists. But LLMs are strangers in our midst. Even the people who build them don't know exactly how they work, and massive effort is required to create guardrails that prevent them from churning out bias, misinformation, and even blueprints for deadly chemical weapons. If the people building the models knew what happened inside these "black boxes," it would be easier to make them safer.