AI trained on AI garbage spits out AI garbage
Current AI models aren't going to collapse outright, says Shumailov, but there may still be substantive effects: improvements will slow down, and performance might suffer. To gauge the effect on performance, Shumailov and his colleagues fine-tuned a large language model (LLM) on a set of data from Wikipedia, then fine-tuned each successive model on the previous generation's output, repeating the process for nine generations. The team measured how nonsensical the output became using a "perplexity score," which reflects how well an AI model can predict the next part of a sequence; a higher score translates to a less accurate model. The models trained on other models' outputs had higher perplexity scores.

For example, the human-written input text used in the study, a passage about the construction of English parish church towers, read in part:

"some started before 1360--was typically accomplished by a master mason and a small team of itinerant masons, supplemented by local parish labourers, according to Poyntz Wright. But other authors reject this model, suggesting instead that leading architects designed the parish church towers based on early examples of Perpendicular."
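For readers curious about the mechanics, perplexity is straightforward to compute from a model's per-token log-probabilities: it is the exponential of the average negative log-probability over a sequence. Here is a minimal sketch in Python; the function name and the probability values are illustrative, not taken from the study:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-probability.
    Lower means the model finds the text more predictable."""
    avg_neg_log_prob = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_neg_log_prob)

# Illustrative: a confident model vs. an uncertain one on a 4-token sequence.
confident = [math.log(p) for p in (0.8, 0.7, 0.9, 0.6)]
uncertain = [math.log(p) for p in (0.2, 0.1, 0.3, 0.15)]

print(perplexity(confident))  # ~1.35 (low score: accurate model)
print(perplexity(uncertain))  # ~5.77 (high score: degraded model)
```

A model that assigns low probability to the text it is asked to predict scores high on this metric, which is exactly the pattern the later generations showed in the experiment.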
July 24, 2024