Developers have a moral duty to create ethical AI


Developers of artificial intelligence (AI), machine learning (ML) and biometric-related technologies have "a moral and ethical duty" to ensure the technologies are only used as a force for good, according to a report written by the UK's former surveillance camera commissioner.

Developers must be cognizant of both the social benefits and risks of the AI-based technologies they produce, and have a responsibility to ensure they are used only for the benefit of society, said the whitepaper, which was published by facial-recognition supplier Corsight AI in response to the European Commission's (EC) proposed Artificial Intelligence Act (AIA).

"Organisational values and principles must irreversibly commit to only producing technology as a force for good," it said. "The philosophy must surely be that we put the preservation of internationally recognised standards of human rights, our respect for the rule of law, the security of democratic institutions and the safety of citizens at the heart of what we do."

It added that a 'human in the loop' development strategy is key to assuaging any public concerns over the use of AI and related technologies, in particular facial-recognition technology.
