Ethical AI isn't just how you build it; it's how you use it
Lapses such as racially biased facial recognition or apparently sexist credit card approval algorithms have thankfully left companies asking how to build AI ethically. Many companies have released "ethical AI" guidelines, such as Microsoft's Responsible AI principles, which require that AI systems be fair, inclusive, reliable and safe, transparent, respectful of privacy and security, and accountable. These are laudable, and will help prevent the harms listed above. But harm can also result from what a system is used for, not just from unfairness, black-boxiness, or other implementation details. Consider an autonomous Uber: if it recognizes people using wheelchairs less accurately than people walking, this can be fixed by using training data reflective of the many ways people traverse a city to build a fairer system.
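The wheelchair example above hinges on measuring accuracy separately for each group of pedestrians, a step an aggregate metric hides. A minimal sketch of that disaggregated evaluation, using invented group labels and toy detection records (none of this reflects any real system):

```python
# Hypothetical disaggregated evaluation: compare pedestrian-detection
# recall across groups. Group names and records are illustrative only.
from collections import defaultdict

def recall_by_group(records):
    """records: iterable of (group, is_pedestrian, was_detected) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, actual, predicted in records:
        if actual:  # only real pedestrians count toward recall
            totals[group] += 1
            if predicted:
                hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy data: the detector misses wheelchair users far more often.
records = [
    ("walking", True, True), ("walking", True, True),
    ("walking", True, True), ("walking", True, False),
    ("wheelchair", True, True), ("wheelchair", True, False),
    ("wheelchair", True, False), ("wheelchair", True, False),
]
print(recall_by_group(records))
# {'walking': 0.75, 'wheelchair': 0.25}
```

A gap like the one this prints is exactly the kind of disparity that more representative training data is meant to close; the overall recall (0.5 here) would never reveal it.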
Nov-5-2022, 08:58:28 GMT