A new way to build neural networks could make AI more understandable

MIT Technology Review 

The simplification, studied in detail by a group led by researchers at MIT, could make it easier to understand why neural networks produce certain outputs, help verify their decisions, and even probe for bias. Preliminary evidence also suggests that as KANs are made bigger, their accuracy increases faster than that of networks built from traditional neurons.

"It's interesting work," says Andrew Wilson, who studies the foundations of machine learning at New York University. "It's nice that people are trying to fundamentally rethink the design of these [networks]."

The basic elements of KANs were actually proposed in the 1990s, and researchers have kept building simple versions of such networks since.