Tell Me What's Next: Textual Foresight for Generic UI Representations
Andrea Burns, Kate Saenko, Bryan A. Plummer
arXiv.org Artificial Intelligence
Mobile app user interfaces (UIs) are rich with action, text, structure, and image content that can be used to learn generic UI representations for tasks like automating user commands, summarizing content, and evaluating the accessibility of user interfaces. Prior work has learned strong visual representations with local or global captioning losses, but fails to retain both granularities. To address this, we propose Textual Foresight, a novel pretraining objective for learning UI screen representations. Textual Foresight generates global text descriptions of future UI states given the current UI and a local action taken on it. Our approach requires joint reasoning over elements and entire screens, resulting in improved UI features: on generation tasks, UI agents trained with Textual Foresight outperform the state of the art by 2% with 28x fewer images. We train on our newly constructed mobile app dataset, OpenApp, the first public dataset for app UI representation learning. OpenApp enables new baselines, and we find Textual Foresight improves average task performance over them by 5.7% while using 2x less data.
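The abstract describes the objective only at a high level. As a rough illustration, the sketch below shows one way such a next-state captioning loss could be set up in PyTorch: condition on the current screen and the local action, and train the decoder to generate the global caption of the screen that results. All module and field names (image_encoder, action_encoder, text_decoder, next_caption_ids) are hypothetical placeholders, not the paper's actual implementation or API.

```python
# Minimal sketch of a Textual-Foresight-style objective, assuming generic
# encoder/decoder modules. This is illustrative, not the authors' code.
import torch
import torch.nn as nn

class TextualForesight(nn.Module):
    def __init__(self, image_encoder: nn.Module, action_encoder: nn.Module,
                 text_decoder: nn.Module):
        super().__init__()
        self.image_encoder = image_encoder    # encodes the current UI screenshot
        self.action_encoder = action_encoder  # encodes the local action (e.g., tapped element)
        self.text_decoder = text_decoder      # autoregressive captioner over next-state text

    def forward(self, screen, action, next_caption_ids):
        # Joint context: whole-screen features plus the acted-on element, so the
        # model must reason over both global and local granularity at once.
        # (Assumes both encoders emit token sequences of shape [B, N, D].)
        context = torch.cat(
            [self.image_encoder(screen), self.action_encoder(action)], dim=1)
        # Standard teacher-forced captioning loss, except the target is the
        # FUTURE screen's global description rather than the current one.
        logits = self.text_decoder(context, next_caption_ids[:, :-1])
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            next_caption_ids[:, 1:].reshape(-1))
```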
Jun-11-2024
- Country:
  - North America > United States (0.15)
- Genre:
  - Research Report (0.82)
- Industry:
  - Information Technology (0.68)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning > Neural Networks > Deep Learning (0.47)
      - Natural Language
        - Chatbot (0.47)
        - Large Language Model (0.69)
        - Text Processing (0.46)
      - Robots (0.93)
      - Vision (0.68)
    - Communications (1.00)
    - Human Computer Interaction (0.87)