The Flaw That Could Ruin Generative AI
And because an LLM doesn't "know" when it's quoting from training data, there's no obvious way to prevent the behavior. I spoke with Florian Tramèr, a prominent AI-security researcher and co-author of some of the above studies. It's "an extremely tricky problem to study," he told me. "It's very, very hard to pin down a good definition of memorization." One way to understand the concept is to think of an LLM as an enormous decision tree in which each node is an English word. From a given starting word, an LLM chooses the next word from, in principle, the entire English vocabulary.
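The decision-tree intuition can be sketched in a few lines of Python. This is a toy next-word model, not how a real LLM works internally: the vocabulary, words, and probabilities below are invented purely for illustration. At each step, the model looks at the current word and samples the next one from a probability distribution over possible continuations, which is the branching the tree metaphor describes.

```python
import random

# Hypothetical toy model: each word maps to a probability
# distribution over possible next words. Real LLMs condition on the
# whole preceding context, not just the last word, and their
# "vocabulary" is tokens, not English words.
NEXT_WORD = {
    "the": {"cat": 0.4, "dog": 0.35, "law": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "law": {"firm": 1.0},
    "sat": {}, "ran": {}, "firm": {},
}

def generate(start, max_words=5, seed=0):
    """Walk the decision tree: at each node, sample the next word
    from the current word's distribution over the vocabulary."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD.get(words[-1], {})
        if not dist:  # leaf node: no continuations, stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

If the training data pushed one branch's probability close to 1.0 at every step along a path ("law" → "firm" above), the model would reproduce that path verbatim, which is one way to picture memorization in this metaphor.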
Jan-11-2024, 18:49:53 GMT