IBMVoice: Learning To Trust Artificial Intelligence Systems In The Age Of Smart Machines


The term "artificial intelligence" historically refers to systems that attempt to mimic or replicate human thought. This is not an accurate description of the actual science of artificial intelligence, and it implies a false choice between artificial and natural intelligences. That is why IBM and others have chosen to use different language to describe our work in this field. We feel that "cognitive computing" or "augmented intelligence" -- which describes systems designed to augment human thought, not replicate it -- are more representative of our approach. There is little commercial or societal imperative for creating "artificial intelligence."

Ethical Standards for Artificial Intelligence are Important. Here's Why


Artificial intelligence (AI) relies on big data and machine learning for myriad applications, from autonomous vehicles to algorithmic trading, and from clinical decision support systems to data mining. The availability of large amounts of data is essential to the development of AI. Given China's large population and business sector, both of which use digitized platforms and tools to an unparalleled extent, it may enjoy an advantage in AI. In addition, it has fewer constraints on the use of information gathered through the digital footprint left by people and companies. India has also taken a series of similar steps to digitize its economy, including biometric identity tokens, demonetization and an integrated goods and services tax.

Translation: Excerpts from China's 'White Paper on Artificial Intelligence Standardization'


This translation by Jeffrey Ding, edited by Paul Triolo, covers some of the most interesting parts of the Standards Administration of China's 2018 White Paper on Artificial Intelligence Standardization, a joint effort by more than 30 academic and industry organizations overseen by the Chinese Electronics Standards Institute. Ding, Triolo, and Samm Sacks describe the importance of this white paper and other Chinese government efforts to influence global AI development and policy formulation in their companion piece, "Chinese Interests Take a Big Seat at the AI Governance Table." Historical experience demonstrates that new technologies can often improve productivity and promote societal progress. At the same time, because artificial intelligence (AI) is still in the early phase of development, the policies, laws, and standards governing safety, ethics, and privacy in this area deserve attention. For AI technology, issues of safety, ethics, and privacy directly shape people's trust in AI as they interact with AI tools.

How to Make Artificial Intelligence Fair, Transparent and Accountable


AI systems are becoming more sophisticated, useful, and pervasive. Owing in part to the rapid advancement of powerful algorithms, AI has created not only new business opportunities worldwide but also concerns among consumers, policymakers, and developers of the technology. These concerns need to be addressed. In fact, practitioners of data science, big data, and machine learning have been actively addressing social and ethical concerns that pertain to our increasingly algorithmic society. Can learning algorithms be designed to be fair?
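One way to make the fairness question above concrete is to measure a property such as demographic parity: whether a model's rate of positive decisions is similar across groups. The sketch below is illustrative only; the function names and the toy data are assumptions, not part of any standard the excerpts describe.

```python
# Minimal sketch of a "demographic parity" fairness check: compare the
# rate of positive predictions across groups. Names and data are toy
# illustrations, not a production fairness toolkit.

def positive_rate(predictions, groups, group):
    """Fraction of positive (1) predictions among members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy example: 1 = positive decision (e.g. loan approved), 0 = negative.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, grps))  # 0.5: group "a" at 0.75, "b" at 0.25
```

A small gap does not by itself make a system fair; demographic parity is one of several competing criteria (equalized odds, calibration), and which one applies depends on the application.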