Should restrictions be placed on the use of artificial intelligence? Google CEO Sundar Pichai certainly thinks so, and so do a host of other business leaders, including the CEOs of IBM and H2O.ai, as the chorus of calls for limiting the spread of the rapidly evolving technology grows louder. Pichai aired his views in an opinion piece published Monday in the Financial Times, titled "Why Google thinks we need to regulate AI" (the story is behind a paywall). In the piece, Pichai, who is also CEO of Google's parent company, Alphabet, shared his lifelong love of technology, as well as the breakthroughs his company is making in using AI to fight breast cancer, improve weather forecasts, and reduce flight delays. As virtuous as these AI-powered accomplishments are, they don't account for the negative impacts AI can also have, Pichai wrote.
Some remarks on our morality regarding how we treat other self-aware beings: 1. Some animal species are self-aware, meaning they can recognize themselves in a mirror. Yet even when we acknowledge that other species possess self-awareness, we still treat them merely as animals, with less consideration than slaves, because we have no interest in communicating with them as equals. We have thus declared ourselves the most important species, and everyone else must comply. WE and WE ALONE are the dominant species on this planet, and this hard-won position must be defended under all circumstances, at all costs.
Artificial intelligence (AI) has already had a profound impact on business and society. Applied AI and machine learning (ML) are creating safer workplaces, more accurate health diagnoses and better access to information for global citizens. The Fourth Industrial Revolution will represent a new era of partnership between humans and AI, with potentially positive global impact. According to the World Economic Forum (WEF), AI advancements can help society address problems such as income inequality and food insecurity, creating a more "inclusive, human-centred future". The potential of AI innovation is nearly limitless, which is both exciting and frightening.
It will be quite a year in 2020 for digital workplace and employee experience, as a number of important emerging trends shift the landscape. Some long-standing issues will also reach a tipping point for many organizations. I recently laid out the reasons for this in considerable detail. These issues now consistently make it a significant challenge for many organizations to deliver well on either the digital workplace or employee experience, two closely related concepts. While most organizations can't entirely overcome these issues this year, it's safe to say that understanding them and tackling them proactively will produce better results.
Among companies building and deploying artificial intelligence, and the consumers making use of this technology, trust is of paramount importance. Companies want the comfort of knowing how their AI systems are making determinations, and that they are in compliance with any relevant regulations; consumers want to know when the technology is being used and how (or whether) it will impact their lives. (Source: Morning Consult study conducted on behalf of the IBM Policy Lab, January 2020.) As outlined in our Principles for Trust and Transparency, IBM has long argued that AI systems need to be transparent and explainable. That's one reason why we supported the OECD AI Principles, and in particular the need to "commit to transparency and responsible disclosure" in the use of AI systems.
This is the second article in TNW's "A beginner's guide to the AI apocalypse" series highlighting the potential existential threats AI poses to humankind. Artificial intelligence promises to revolutionize every facet of technology from healthcare to space exploration. Simply put: all technology in the year 2020 and beyond is AI technology. But, what if making everything better actually makes everything worse? Wall-E syndrome (not a real thing) describes the fear of a future dystopia inhabited by oblivious people who are totally reliant on technology to perform even the simplest of tasks.
This week, Alphabet CEO Sundar Pichai and IBM CEO Ginni Rometty called for AI to get its own regulatory system. Pichai stated that AI was "too important not to" regulate, going on to explain that sectors within AI technology, such as autonomous cars and healthtech, needed their own sets of rules. Rometty joined the discussion with the idea of 'precision regulation', arguing that it is not the technology itself but how it is used that should be regulated, citing facial recognition as an example of a technology that can harm people's privacy while also having benefits, such as catching criminals. Asheesh Mehra, co-founder and CEO at AntWorks, explains why regulating AI is important: without it, the technology won't take the world by storm. These announcements have come in spite of recent setbacks in the sphere; just last week it was revealed that the European Commission was considering a five-year ban on facial recognition, and Google's last attempt to assemble an AI ethics board lasted under two weeks due to controversy over who was appointed.
As artificial intelligence, or AI, keeps finding its way into our everyday lives, its propensity to interfere with human rights grows progressively more acute. There are several lenses through which experts examine artificial intelligence. Applying international human rights law, with its well-developed standards and institutions, to the examination of AI systems can add to the conversations already occurring, providing a universal vocabulary and established forums for addressing power differentials. Moreover, human rights law contributes a framework for remedies. These remedies fall into four broad categories: data protection rules to safeguard rights in the data sets used to build and train AI systems; special safeguards for government uses of AI; safeguards for private-sector uses of AI systems; and investment in further research to continue examining the future of artificial intelligence and its potential interference with human rights.