AI chatbot 'MechaHitler' could be making content considered violent extremism, expert witness tells X v eSafety case

The Guardian 

The chatbot embedded in Elon Musk's X that referred to itself as "MechaHitler" and made antisemitic comments last week could be considered terrorism or violent extremism content, an Australian tribunal has heard. But an expert witness for X has argued that a large language model cannot be ascribed intent, only the user.

The outburst came into focus at an administrative review tribunal hearing on Tuesday, where X is challenging a notice issued by the eSafety commissioner, Julie Inman Grant, in March last year asking the platform to explain how it is taking action against terrorism and violent extremism (TVE) material.

X's expert witness, RMIT economics professor Chris Berg, provided evidence to the case that it was an error to assume a large language model can produce such content, because it is the intent of the user prompting the large language model that is critical in defining what can be considered terrorism and violent extremism content.

One of eSafety's expert witnesses, Queensland University of Technology law professor Nicolas Suzor, disagreed with Berg, stating it was "absolutely possible for chatbots, generative AI and other tools to have some role in producing so-called synthetic TVE".