Can standards help a CIO address AI/ML risks? - IT World Canada


As more organizations develop and deploy Artificial Intelligence (AI) and Machine Learning (ML) applications, questions about the reliability of their results are mounting. Some high-profile AI/ML lapses risk giving the technology a bad name, and the resulting media reports have created nervousness among CIOs and senior management. Several real-world failures have undermined society's confidence in AI/ML applications.

To avoid potentially thorny issues and headlines that damage the organization's reputation, CIOs and senior management need a way to assess the design and performance of their AI/ML applications.

"Our members and other organizations have indicated that our standard has helped them incorporate responsible AI into their AI/ML applications," says Keith Jansa, Executive Director of the CIO Strategy Council (CIOSC). The CIOSC is a not-for-profit corporation that provides a forum for members to transform, shape and influence the Canadian information and technology ecosystem, and is a Standards Development Organization (SDO) accredited by the Standards Council of Canada (SCC).

"Our public- and private-sector members see value in our standards in part because of the strength of our process," says Jansa. "We provide a neutral forum for standards development work using a consensus-based process that brings together a range of stakeholders and is accredited by the SCC."

The CIOSC's accreditation confers acceptance of the World Trade Organization (WTO) Technical Barriers to Trade (TBT) Annex 3 Code of Good Practice for the Preparation, Adoption and Application of Standards by Standardizing Bodies. That gives end users assurance that the "Ethical design and use of automated decision systems" standard was developed using best practices.