AVA-VLA: Improving Vision-Language-Action Models with Active Visual Attention