The Vatican's Pontifical Academy for Life, which began the year by urging the ethical development and application of artificial intelligence (AI), has announced an effort to use technology to fight world hunger, which has worsened during the pandemic. The Vatican institution, in collaboration with IBM, Microsoft and the UN Food and Agriculture Organization (FAO), is encouraging governments, nonprofits and corporations to ensure that technology is used to feed everyone and to make farming more efficient and productive. In its quest to ensure the transparent, responsible and inclusive use of AI, the Vatican and the FAO are pushing for agricultural solutions that will benefit not just the well off but also the poor. "We need to face the biggest challenges on the planet," said John E. Kelly III, executive vice president of IBM. Kelly, who participated in the FAO and Pontifical Academy's Sept. 24 virtual conference announcing the anti-hunger effort, was one of the signatories of the Vatican's call for AI ethics in February. The Vatican's effort to promote ethical AI for social good includes a new program to use digital technology to ensure a more sustainable and efficient global food supply.
It was reported that venture capital investment in AI-related startups rose significantly in 2018, jumping 72% compared to 2017, even as the number of startups funded fell to 466 from 533 in 2017. PwC's MoneyTree report stated that seed-stage deal activity in the US among AI-related companies rose to 28% in the fourth quarter of 2018, from 24% in the three months prior, while expansion-stage deal activity jumped to 32% from 23%. International rivalry over global leadership in AI is expected to intensify. President Putin of Russia was quoted as saying that "the nation that leads in AI will be the ruler of the world," and billionaire Mark Cuban was reported by CNBC as stating that "the world's first trillionaire would be an AI entrepreneur."
How are AI ethics and responsible AI currently being taught in computer science and engineering curricula across Africa? What issues related to this topic are relevant to students and faculty? And what roadblocks or challenges do instructors face in bringing more discussion of AI ethics into classrooms? The goal of this workshop is to foster a discussion on how to effectively integrate AI ethics into computer science and engineering programs at African universities. This is an initial step to gather perspectives on the current situation at representative universities in different African countries, and to initiate a discussion on how we can better support each other with lessons learned and share materials and curricula to further develop AI ethics programs in higher education. After identifying the current state, the interests of students and faculty, and the needs of departments in this workshop session, the goal is to continue the series with more in-depth workshops on specific topics.
We have found that AI is a common tool among businesses in the digital TV and over-the-top (OTT) streaming media space. With high customer-acquisition costs and escalating content costs, operators are looking for a rich content platform that uses data to drive user engagement and value. By providing richer, data-backed insights into audience preferences and behaviours, AI has given marketers in this space the opportunity to cut through the noise of local and regional content and create customised offerings for different customer segments through storytelling. We have been able to apply these learnings to other clients, who now use large-scale AI and data analytics to predict and understand customer behaviour and create more meaningful engagement along the customer journey. For example, a client of ours in the quick-service restaurant (QSR) segment was seeking a solution to trigger repeat customer orders via their app.
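The article does not describe the client's actual model, but a repeat-order trigger of this kind is often built on a recency-frequency-monetary (RFM) style propensity score. The sketch below is a minimal, hypothetical illustration of that idea; the function names, weights, and thresholds are all assumptions for the example, not the client's implementation, and a production system would typically learn them from order data instead of hard-coding them.

```python
from datetime import date

def repeat_order_score(last_order: date, today: date,
                       orders_90d: int, avg_basket: float) -> float:
    """Hypothetical propensity score combining recency, frequency,
    and monetary value (a simple RFM-style heuristic).

    Each component is normalised to [0, 1], then blended with
    illustrative weights favouring recency.
    """
    recency_days = (today - last_order).days
    recency = max(0.0, 1.0 - recency_days / 30.0)   # decays to 0 over 30 days
    frequency = min(orders_90d / 12.0, 1.0)         # caps at ~1 order/week
    monetary = min(avg_basket / 25.0, 1.0)          # caps at a $25 basket
    return 0.5 * recency + 0.3 * frequency + 0.2 * monetary

def should_trigger_push(score: float, threshold: float = 0.4) -> bool:
    """Send an in-app repeat-order prompt only above a tuned threshold."""
    return score >= threshold

# A customer who ordered a week ago, six times in 90 days, ~$20 baskets:
score = repeat_order_score(date(2024, 1, 1), date(2024, 1, 8), 6, 20.0)
print(should_trigger_push(score))   # a recent, frequent customer is prompted
```

In practice the threshold would be tuned against campaign results (for example, push-notification conversion rates per segment) rather than fixed up front.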
In February of this year, the Department of Defense (DoD) issued five Ethical Principles for Artificial Intelligence (AI): Responsible, Equitable, Traceable, Reliable and Governable. The DoD principles build on 2019 recommendations from the Defense Innovation Board and the interim report of the National Security Commission on AI (NSCAI). The defense industry and others in the private sector have also been considering ethical issues regarding AI, including whether businesses should have an AI code of ethics. When cyber first became an issue about 22 years ago, the trend was to raise awareness and think through the consequences. Similarly, we are now developing awareness of the issues and beginning to think through the consequences of AI.
Artificial intelligence has become a technological buzzword, often referred to simply as AI rather than in terms of the nearly limitless range of practical applications it can provide, or the intricacies involved from industry to industry and region to region. To discuss some of the many applications of artificial intelligence, as well as some of the considerations involved in creating more accurate and less biased machine learning systems, I had the pleasure of speaking with Nitendra Rajput, Vice President and Head of Mastercard's AI Garage. Nitendra set up the centre to solve problems across various business verticals globally with machine learning, increasing efficiencies across the business as well as mitigating fraud. He has over 20 years' experience in artificial intelligence, machine learning, and mobile interactions, having recognised a gap in the market for speech recognition systems for vocally led countries such as India. Prior to Mastercard's AI Garage, he spent 18 years at IBM Research, working on different aspects of machine learning, human-computer interaction, software engineering and mobile sensing.
Last week, on September 15 and 16, the Pentagon's Joint Artificial Intelligence Center (JAIC) held a meeting with officials from 13 countries, not all of them U.S. allies, on the ethical military uses of artificial intelligence, the first of its kind. Breaking Defense quotes Mark Beall, the JAIC's head of strategy and policy, who called the meeting "historic," as saying, "This group of … countries, to my knowledge, has never been brought together under one banner before." Earlier this year, the Pentagon adopted a set of ethics guidelines around AI use. At a time when China's and Russia's pursuit of military AI has raised considerable alarm in Western capitals, Beall noted that the meeting was not about creating a coalition against specific countries. Rather, "we're really focused on, right now, rallying around [shared] core values like digital liberty and human rights… international humanitarian law," Beall said.
Why kids need special protection from AI's influence: algorithms can change the course of children's lives. Kids are interacting with Alexas that can record their voice data and influence their speech and social development. They're bingeing videos on TikTok and YouTube pushed to them by recommendation systems that end up shaping their worldviews. Algorithms are also increasingly used to determine what their education is like, whether they'll receive health care, and even whether their parents are deemed fit to care for them. Sometimes this can have devastating effects: this past summer, for example, thousands of students lost their university admissions after algorithms, used in lieu of pandemic-canceled standardized tests, inaccurately predicted their academic performance.
Artificial intelligence (AI) is no longer a thing of science fiction; it exists in the world all around us, automating simple tasks and dramatically improving our lives. But as AI and automation become increasingly capable, how will this alternative labor source affect your future workforce? In this article, we'll take a look at both optimistic and pessimistic views of the future of our jobs amid increasing AI capabilities. A two-year study from the McKinsey Global Institute suggests that by 2030, intelligent agents and robots could replace as much as 30 percent of the world's current human labor. McKinsey suggests that, in terms of scale, the automation revolution could rival the move away from agricultural labor during the 1900s in the United States and Europe, and more recently, the explosion of the Chinese labor economy.
Welcome to part 4 of my AI and GeoAI series, which will cover the more technical aspects of GeoAI and ArcGIS. Part 1 of this series, the Future Impacts of AI on Mapping and Modernization, introduced the concept of GeoAI and why you should care about having an AI as a future coworker. Part 2, GIS, Artificial Intelligence, and Automation in the Workplace, covered specific geospatial professions that will be drastically affected by the introduction of GeoAI technology in the workplace. Part 3, Teaming with the Machine - AI in the Workplace, addressed the emergence of the new geospatial working relationship among information, humans, and artificial intelligence needed to succeed in an organization's mission. In part 4, we will address 3 specific GeoAI areas in ArcGIS that will help you on your journey to developing your deep learning workflows.