Lu, Qingyi
Optimization and Scalability of Collaborative Filtering Algorithms in Large Language Models
Yang, Haowei, Yun, Longfei, Cao, Jinghan, Lu, Qingyi, Tu, Yuming
Collaborative filtering (CF) is one of the most widely adopted algorithms in recommendation systems due to its ability to generate personalized recommendations from user behavior data. However, the rapid growth in data volume and model complexity poses significant challenges to traditional collaborative filtering algorithms [2], including high computational overhead, data sparsity, the cold start problem, and difficulty in scaling. In LLM-based recommendation systems, these challenges are further amplified by the intricate interactions between users, content, and language model parameters. This research explores the optimization and scalability of collaborative filtering algorithms within large language models. We propose several optimization strategies, including matrix factorization, approximate nearest neighbor search, and parallel computing, to reduce computational complexity and improve accuracy [3]. This work builds on insights from [4], particularly its integration of neural matrix factorization with large language models to address cold start issues and improve recommendation accuracy through multimodal data. The multimodal fusion strategies and transformer-based methods in [5] offer valuable guidance for improving data integration and scalability in collaborative filtering algorithms. The handling of data imbalance and scalability in [6] is highly relevant for optimizing collaborative filtering in LLM-based recommendation systems, and the use of CNNs and LSTMs in [7] to capture nonlinear patterns further informs improvements in efficiency and accuracy.
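To make the matrix factorization strategy mentioned above concrete, the following is a minimal illustrative sketch of factorizing a sparse user-item rating matrix with stochastic gradient descent. It is not the paper's implementation; the rank, learning rate, regularization strength, and toy data are assumptions chosen for illustration.

```python
import numpy as np

def matrix_factorization(ratings, rank=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Learn low-rank user/item factors from observed ratings via SGD.

    ratings: list of (user_idx, item_idx, rating) triples (observed entries only).
    Returns (U, V) such that the predicted rating is approximately U[u] @ V[i].
    """
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                    # error on this observed entry
            u_row = U[u].copy()                      # keep the pre-update user row
            U[u] += lr * (err * V[i] - reg * U[u])   # gradient step with L2 regularization
            V[i] += lr * (err * u_row - reg * V[i])
    return U, V

# Toy usage: three users, three items, a handful of observed ratings (hypothetical data).
triples = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 5.0)]
U, V = matrix_factorization(triples)
print(U[0] @ V[2])  # predicted rating of user 0 for item 2
```

Because only observed entries are iterated, the cost per epoch scales with the number of ratings rather than the full user-item matrix, which is the main reason factorization helps with the computational overhead and sparsity issues noted above.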
Enhanced Recommendation Combining Collaborative Filtering and Large Language Models
Lin, Xueting, Cheng, Zhan, Yun, Longfei, Lu, Qingyi, Luo, Yuanshuai
In the era of information explosion, recommendation systems have become increasingly important across a wide range of applications. Traditional collaborative filtering algorithms are widely used because they effectively capture user behavior patterns, but they encounter limitations when dealing with cold start problems and data sparsity. Large Language Models (LLMs), with their strong natural language understanding and generation capabilities, provide a new breakthrough for recommendation systems. This study proposes an enhanced recommendation method that combines collaborative filtering and LLMs, aiming to leverage collaborative filtering's strength in modeling user preferences while using LLMs to deepen the understanding of textual information about users and items, thereby improving recommendation accuracy and diversity. This paper first introduces the fundamental theories of collaborative filtering and LLMs, then designs a recommendation system architecture that integrates both, and validates the system's effectiveness through experiments. The results show that the hybrid model based on collaborative filtering and LLMs significantly improves precision, recall, and user satisfaction, demonstrating its potential in complex recommendation scenarios.
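As a rough sketch of how such a hybrid might score items, the snippet below blends a collaborative-filtering prediction with a text-similarity signal from language-model embeddings. The function names, the mixing weight alpha, and the assumption that both signals are normalized to [0, 1] are illustrative choices, not details taken from the paper.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def hybrid_score(cf_score, user_text_emb, item_text_emb, alpha=0.7):
    """Blend a CF prediction with an LLM text-similarity score.

    cf_score: collaborative-filtering prediction scaled to [0, 1].
    user_text_emb / item_text_emb: embeddings of the user profile text and the
    item description produced by a language-model encoder (assumed available).
    alpha: weight on the behavioral signal versus the textual signal (assumed).
    """
    text_score = 0.5 * (cosine(user_text_emb, item_text_emb) + 1.0)  # map [-1, 1] to [0, 1]
    return alpha * cf_score + (1.0 - alpha) * text_score

def rank_items(cf_scores, user_emb, item_embs, top_k=10):
    """Rank candidate items by the blended score and return the top-k indices."""
    scores = [hybrid_score(cf_scores[i], user_emb, item_embs[i])
              for i in range(len(cf_scores))]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
```

A late-fusion weight like alpha lets the textual signal dominate for cold-start items with little interaction history, which is one simple way a hybrid can address the limitations described in the abstract.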
Multi-modal clothing recommendation model based on large model and VAE enhancement
Huang, Bingjie, Lu, Qingyi, Huang, Shuaishuai, Wang, Xue-she, Yang, Haowei
Unlike traditional models that process text in a single direction, BERT encodes text bidirectionally, and it has been widely demonstrated that BERT effectively captures contextual and semantic relationships in text, providing a more comprehensive understanding of context. The embedding components of BERT are word embeddings, segment embeddings, and position embeddings. Word embeddings map each token into a vector in a high-dimensional space. Segment embeddings allow BERT to distinguish and process single texts or pairs of texts, enabling sentence-level understanding of semantic information. Position embeddings encode word order, allowing the model to track each token's position in the sequence for processing at higher layers. Finally, the final hidden state corresponding to the CLS token at the beginning of the input sequence is commonly used as the representation of the entire input sequence.
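As a brief sketch of extracting the CLS representation described above, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (neither of which is specified by the paper), the hidden state at position 0 can serve as a fixed-length embedding of an item description.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Hypothetical clothing-item description used purely for illustration.
text = "lightweight cotton summer dress with floral print"
inputs = tokenizer(text, return_tensors="pt")  # adds [CLS] ... [SEP]; builds token, segment, and position ids

with torch.no_grad():
    outputs = model(**inputs)

# The hidden state at position 0 corresponds to the [CLS] token and is commonly
# used as the representation of the whole input sequence.
cls_embedding = outputs.last_hidden_state[:, 0, :]  # shape: (1, 768) for bert-base
print(cls_embedding.shape)
```

Such a CLS vector can then be fed to downstream components (for example, a VAE or a recommendation head) as a compact text representation of the item.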