We thank R2 for pointing out this important issue. Accordingly, we will elaborate on the details of the model architectures, including the matrices Q and M, in Section 3.2.2 of the main text. We deeply acknowledge the valuable suggestion. B2: Whereas including intra-particle attention in B2 already notably improves the performance compared to B1, including population-based features and inter-particle attention in B3 presents the largest performance boost. This confirms that our method does majorly "benefit from the attention mechanisms"; iii) Proposed vs.