Efficient LLM inference solution on Intel GPU
