Microsoft seeks to restrict abuse of its facial recognition AI
Microsoft is planning to implement self-designed ethical principles for its facial recognition technology by the end of March, as it urges governments to push ahead with matching regulation in the field. In December, the company called for new legislation to govern artificial intelligence software for recognising faces, advocating human review and oversight of the technology in some critical cases as a way to mitigate the risks of biased outcomes and of intrusions into privacy and democratic freedoms.

"We do need to lead by example and we're working to do that," Microsoft President and Chief Legal Officer Brad Smith said in an interview, adding that some other companies are also putting similar principles into place.

Smith said the company plans by the end of March to "operationalise" its principles, which involves drafting policies, building governance systems, engineering tools, and testing to make sure the technology stays in line with the company's goals. It also involves setting controls for the company's global sales and consulting teams to prevent the technology from being sold in cases where it risks being used for an unwanted purpose.
Jan-25-2019, 09:09:45 GMT