Token Prepending: A Training-Free Approach for Eliciting Better Sentence Embeddings from LLMs