Constraint-aware and Ranking-distilled Token Pruning for Efficient Transformer Inference