From n-grams to Attention: How Model Architectures Learn and Propagate Bias in Language Modeling