B.1 QA architecture and pre-training

QA architecture. The QA architecture is based on the Episodic Transformer architecture [6], depicted in Figure 1.

QA training data set. We train the QA on a mix of 4 tasks: Open-Large, PickUp-Large, PutNextTo-Local, and Sequence-Medium. We use a mix of tasks to push the QA to leverage the compositionality of language. Indeed, the Sequence task is created by putting in sequence two tasks from Open, PickUp, and PutNextTo. The probability of keeping a question depends on the number of words in common between the goal corresponding to the trajectories and the random goal.
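The word-overlap-based filtering described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact mapping from overlap count to keep probability is not specified in the text, so `keep_probability` and its `base` parameter are assumptions introduced here for clarity.

```python
import random

def word_overlap(goal_a: str, goal_b: str) -> int:
    # Number of distinct words shared by the two goal strings.
    return len(set(goal_a.lower().split()) & set(goal_b.lower().split()))

def keep_probability(traj_goal: str, random_goal: str, base: float = 0.1) -> float:
    # Hypothetical mapping (assumption): more shared words between the
    # trajectory's goal and the random goal -> higher chance of keeping
    # the question. The paper only states the dependence, not the form.
    overlap = word_overlap(traj_goal, random_goal)
    return min(1.0, base * (1 + overlap))

def keep_question(traj_goal: str, random_goal: str, rng: random.Random) -> bool:
    # Stochastically decide whether the generated question is kept
    # in the QA training set.
    return rng.random() < keep_probability(traj_goal, random_goal)
```

Biasing the kept questions toward goals that share vocabulary with the trajectory makes the QA distinguish near-miss goals, which is harder than rejecting unrelated ones.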