Optimizing Mixture-of-Experts Inference Time by Combining Model Deployment and Communication Scheduling