DeServe: Towards Affordable Offline LLM Inference via Decentralization