Commentary: Who should we hold responsible when AI goes wrong? - CNA
SINGAPORE: Who do you think should be responsible when artificial intelligence or algorithms malfunction: the programmer, the manufacturer or the user?

Singapore plans to be a global leader in artificial intelligence (AI) by 2030. This involves, on the one hand, widespread deployment of AI in a variety of settings and, on the other, widespread trust in these AI solutions.

Clearly that trust needs to be well-placed, but what does it mean for trust to be well-placed? One part, certainly, is that the AI reliably gets things right. But that alone is not enough.
Feb-3-2023, 17:25:29 GMT