If Your Company Uses AI, It Needs an Institutional Review Board
Conversations around AI and ethics may have started as a preoccupation of activists and academics, but now -- prompted by a steady stream of headlines about biased algorithms, black-box models, and privacy violations -- boards, C-suites, and data and AI leaders have realized it's an issue that demands a strategic approach.

A solution is hiding in plain sight. Other industries have already found ways to work through complex ethical quandaries quickly, effectively, and in a way that can be easily replicated. Instead of trying to reinvent this process, companies should adopt and customize one of health care's greatest inventions: the Institutional Review Board, or IRB.

Most discussions of AI ethics follow the same flawed formula, consisting of three moves, each of which is problematic from the perspective of an organization that wants to mitigate the ethical risks associated with AI. Here's how these conversations tend to go. First, companies move to identify AI ethics with "fairness" in AI, or sometimes more generally, "fairness, equity, and inclusion."