Issues


The AI delusion: why humans trump machines

#artificialintelligence

As well as playing a key role in cracking the Enigma code at Bletchley Park during the Second World War, and conceiving of the modern computer, the British mathematician Alan Turing owes his public reputation to the test he devised in 1950. Crudely speaking, it asks whether a human judge can distinguish between a human and an artificial intelligence based only on their responses to conversation or questions. This test, which he called the "imitation game," was popularised 18 years later in Philip K Dick's science-fiction novel Do Androids Dream of Electric Sheep? But Turing is also widely remembered as having committed suicide in 1954, quite probably driven to it by the hormone treatment he was instructed to take as an alternative to imprisonment for homosexuality (deemed to make him a security risk), and it is only comparatively recently that his genius has been afforded its full due. In 2009, Gordon Brown apologised on behalf of the British government for his treatment; in 2014, his posthumous star rose further when Benedict Cumberbatch played him in The Imitation Game; and in 2021, he will be the face on the new £50 note.
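For readers who want the test's logic spelled out, here is a minimal sketch of the imitation game as a Python harness. The judge, human, and machine are hypothetical callables, and every name here is illustrative rather than anything taken from Turing's paper:

    import random

    def imitation_game(judge, human, machine, questions):
        # Randomly assign the two hidden participants to anonymous labels,
        # so the judge cannot infer identity from position.
        pair = [("human", human), ("machine", machine)]
        random.shuffle(pair)
        assignment = {"A": pair[0], "B": pair[1]}

        # Both participants answer the same questions; the judge sees only
        # the labelled transcripts, never the participants themselves.
        transcripts = {
            label: [(q, respond(q)) for q in questions]
            for label, (_, respond) in assignment.items()
        }
        guesses = judge(transcripts)  # e.g. {"A": "human", "B": "machine"}

        # The machine "passes" the round if the judge mislabels it.
        machine_label = next(label for label, (identity, _) in assignment.items()
                             if identity == "machine")
        return guesses[machine_label] != "machine"

The point of the structure is that only the text of the answers ever reaches the judge, which is exactly the constraint the test imposes.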


The battle for ethical AI at the world's biggest machine-learning conference

#artificialintelligence

Facial-recognition algorithms have been at the centre of privacy and ethics debates. Diversity and inclusion took centre stage at one of the world's major artificial-intelligence (AI) conferences in 2018. At last month's Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, attention shifted to another big issue in the field: ethics. The focus comes as AI research increasingly deals with the ethical controversies surrounding the application of its technologies, such as predictive policing and facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harm to already vulnerable populations. "There is no such thing as a neutral tech platform," warned Celeste Kidd, a developmental psychologist at the University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs.
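To make the point about biases in data concrete: a model trained to imitate historical decisions inherits whatever disparities those decisions contain. A minimal sketch with made-up records and an illustrative selection_rates helper (nothing here comes from the article) shows the per-group gap a fairness audit would flag:

    from collections import defaultdict

    # Toy historical records of (group, favourable_outcome). A classifier
    # trained to reproduce these labels would inherit the group disparity.
    records = [("a", 1), ("a", 1), ("a", 0), ("b", 0), ("b", 0), ("b", 1)]

    def selection_rates(records):
        # Fraction of favourable outcomes per group.
        totals, positives = defaultdict(int), defaultdict(int)
        for group, outcome in records:
            totals[group] += 1
            positives[group] += outcome
        return {group: positives[group] / totals[group] for group in totals}

    print(selection_rates(records))  # roughly {'a': 0.67, 'b': 0.33}

Comparing these rates before and after training is one of the simpler checks behind the debates the article describes.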


Peter Diamandis: 'In the next 10 years, we'll reinvent every industry'

The Guardian

Peter Diamandis is best known as the founder of the XPrize Foundation, which offers big cash prizes as an incentive for tech solutions to big problems. The entrepreneur and investor is also co-founder of the Singularity University, a Silicon Valley-based nonprofit offering education in futurology. His new book, The Future Is Faster Than You Think, argues that the already rapid pace of technological innovation is about to get a whole lot quicker. Asked whether people are worried about where technology is going to take us, he says: "I can palpably feel how fast things are changing and that the rate of change is accelerating, and I have picked up a growing amount of fear coming from people who don't understand where the world is going."


ProBeat: Why Google is really calling for AI regulation

#artificialintelligence

On Sunday, the Financial Times published an op-ed penned by Sundar Pichai titled "Why Google thinks we need to regulate AI." Whether he wrote it himself or merely signed off on it, Pichai clearly wants the world to know that, as the CEO of Alphabet and Google, he believes AI is too important not to be regulated. He has concerns about the potential negative consequences of AI, and he believes that, as with any technology, there need to be some ground rules. I simply don't believe that's the full story. "Companies such as ours cannot just build promising new technology and let market forces decide how it will be used," writes Pichai. "It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone. Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it."


Calls for AI Regulation Gain Steam

#artificialintelligence

Should restrictions be placed on the use of artificial intelligence? Google CEO Sundar Pichai certainly thinks so, and so do a host of other business leaders, including the CEOs of IBM and H2O.ai, as the chorus of calls for putting limits on the spread of the rapidly evolving technology grows louder. Pichai aired his opinion on the matter in an opinion piece published Monday in the Financial Times, titled "Why Google thinks we need to regulate AI" (the story is behind a paywall). In the piece, Pichai, who is also CEO of Google's parent company, Alphabet, shared his lifelong love of technology, as well as the breakthroughs his company is making in using AI to fight breast cancer, improve weather forecasts, and reduce flight delays. As virtuous as these AI-powered accomplishments are, they do not account for the negative impacts that AI can also have, Pichai wrote.


Laws Could Make Washington Leader In AI Regs

#artificialintelligence

With an array of bills brought forward by lawmakers, Washington state could become a leader in artificial intelligence regulation.


The Artificial Intelligence Apocalypse (Part 1)

#artificialintelligence

Some remarks regarding our morality in how we treat others who are self-aware: 1. Some animal species are self-aware, meaning, for example, that they recognise themselves in a mirror. Yet even when we acknowledge that other species possess self-awareness, we still treat them like animals, with less consideration than slaves, because we have no interest in communicating with them as equals. We have declared ourselves the most important species, and all others must comply: we, and we alone, are the dominant species on this planet, and this hard-won status must be defended under all circumstances, at all costs.


Why We Need Ethical AI: 5 Initiatives to Ensure Ethics in AI

#artificialintelligence

Artificial intelligence (AI) has already had a profound impact on business and society. Applied AI and machine learning (ML) are creating safer workplaces, more accurate health diagnoses and better access to information for global citizens. The Fourth Industrial Revolution will represent a new era of partnership between humans and AI, with potentially positive global impact. According to the World Economic Forum (WEF), AI advancements can help society solve problems such as income inequality and food insecurity, creating a more "inclusive, human-centred future". The potential of AI innovation is nearly limitless, which is at once positive and frightening.


2020 Predictions for the Future of Work

#artificialintelligence

It will be quite a year in 2020 for the digital workplace and employee experience, as a number of important emerging trends shift the landscape. Some long-standing issues will also reach a tipping point for many organizations. I recently laid out the reasons for this in considerable detail. These issues now make it consistently challenging for many organizations to deliver well on either the digital workplace or the employee experience, two closely related concepts. While most organizations can't entirely overcome these issues this year, it's safe to say that understanding them and tackling them proactively will produce better results.


Precision Regulation for Artificial Intelligence

#artificialintelligence

For the companies building and deploying artificial intelligence, and for the consumers making use of the technology, trust is of paramount importance. Companies want the comfort of knowing how their AI systems make determinations and that they are in compliance with any relevant regulations; consumers want to know when the technology is being used and how (or whether) it will impact their lives. [Chart: Morning Consult study conducted on behalf of the IBM Policy Lab, January 2020.] As outlined in our Principles for Trust and Transparency, IBM has long argued that AI systems need to be transparent and explainable. That's one reason why we supported the OECD AI Principles, and in particular the need to "commit to transparency and responsible disclosure" in the use of AI systems.