TrackStar
Scalable Influence and Fact Tracing for Large Language Model Pretraining
Tyler A. Chang, Dheeraj Rajagopal, Tolga Bolukbasi, Lucas Dixon, Ian Tenney
Training data attribution (TDA) methods aim to attribute model outputs back to specific training examples, and the application of these methods to large language model (LLM) outputs could significantly advance model transparency and data curation. However, it has been challenging to date to apply these methods to the full scale of LLM pretraining. In this paper, we refine existing gradient-based methods to work effectively at scale, allowing us to retrieve influential examples for an 8B-parameter language model from a pretraining corpus of over 160B tokens with no need for subsampling or pre-filtering. Our method combines several techniques, including optimizer state correction, a task-specific Hessian approximation, and normalized encodings, which we find to be critical for performance at scale. In quantitative evaluations on a fact tracing task, our method performs best at identifying examples that influence model predictions, but classical, model-agnostic retrieval methods such as BM25 still perform better at finding passages which explicitly contain relevant facts. These results demonstrate a misalignment between factual *attribution* and causal *influence*. With increasing model size and training tokens, we find that influence more closely aligns with factual attribution. Finally, we examine different types of examples identified as influential by our method, finding that while many directly entail a particular fact, others support the same output by reinforcing priors on relation types, common entities, and names. We release our prompt set and model outputs, along with a web-based visualization tool to explore influential examples for factual predictions, commonsense reasoning, arithmetic, and open-ended generation for an 8B-parameter LLM.
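The core retrieval step in gradient-based TDA can be illustrated with a minimal sketch: score each training example by the similarity between its loss gradient and the query's loss gradient, after unit-normalizing both. This is a simplified illustration only; TrackStar's actual method additionally applies optimizer state correction and a task-specific Hessian approximation before normalization, which are omitted here. All function and variable names below are hypothetical.

```python
import numpy as np

def influence_scores(query_grad, train_grads):
    """Rank training examples by gradient similarity to a query.

    A minimal sketch of gradient-based TDA with normalized
    encodings: each gradient is unit-normalized, so the influence
    score reduces to cosine similarity. (The paper's full method
    also corrects for optimizer state and applies a Hessian
    approximation before normalizing; omitted here.)
    """
    q = query_grad / np.linalg.norm(query_grad)
    g = train_grads / np.linalg.norm(train_grads, axis=1, keepdims=True)
    return g @ q  # one cosine score per training example

# Toy usage: 4 training-example gradients in a 3-dim parameter space.
rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 3))
# A query whose gradient nearly matches training example 2.
query = grads[2] + 0.01 * rng.normal(size=3)
scores = influence_scores(query, grads)
top = int(np.argmax(scores))  # example 2 should rank highest
```

In practice the gradients are high-dimensional and are compressed with random projections so that scoring can run over billions of training tokens; the ranking principle is the same.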