On the Role of Pretrained Language Models in General-Purpose Text Embeddings: A Survey