AIhub coffee corner: Responsible and trustworthy AI
This month, our trustees tackle the topic of trustworthy AI. Joining the conversation this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), and Sarit Kraus (Bar-Ilan University).

Sabine Hauert: There was a big trustworthy autonomous systems conference a few weeks back in London, and on the back of that they've launched a big responsible AI portfolio. I know Europe has been focusing on trustworthiness and how responsible these algorithms are. Deploying these systems in a responsible way is something that people are thinking about more and more. It was interesting at that conference because, while a lot of it had to do with ethics, interfacing with humans and thinking holistically about these algorithms, there was also a strong military track discussing how you make military tools trustworthy. I always find it quite interesting that trustworthiness and responsible AI mean completely different things to different communities.
May-7-2024, 13:50:21 GMT