Comparison of unique hard attention transformer models by the formal languages they can recognize