If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The AI Youth Lab is an initiative by 1M1B in collaboration with the United Nations Sustainable Development Goals (SDGs) for the year 2030. The organisation, because of its association with various UN bodies, has also been able to take the students it works with to UN headquarters in New York to attend, as well as address, international intergovernmental sessions. Manav Subodh, the co-founder of 1M1B, who is currently out in the countryside working with the program's rural schools, says, "We've been working with youth around the country for the last four years, and while previously we encouraged our student participants to work on projects which can positively impact the lives of at least 10 people (no small thing in itself), we realised that by using Artificial Intelligence, we could scale up that impact by multiple factors." Subodh intends to set up over 50 labs in schools across India and abroad over the course of 2020, saying, "We provide the infrastructure, training, and all other facilities for our partner schools, so that it doesn't cost them anything. This is especially an important factor for rural schools. In fact, we're also organising mobile AI labs, which can travel between hard-to-reach villages, so as to have maximum reach."
The numbers inside the colored squares represent each of the SDGs (see Supplementary Data 1). The percentages at the top indicate the proportion of all targets potentially affected by AI, and the ones in the inner circle of the figure correspond to proportions within each SDG. The results corresponding to the three main groups, namely Society, Economy, and Environment, are also shown in the outer circle of the figure. The results obtained when the type of evidence is taken into account are shown by the inner shaded area and the values in brackets.
Artificial Intelligence (AI) has been put forward as a potential solution for many of the gravest problems facing society, from the opioid crisis to poverty and famine. Technology clearly has the potential to do a great deal of good, but there is also a sound business reason that tech companies often pour large amounts of resources into social projects that don't seem to align with their core business of selling software and services: tackling social issues often involves developing solutions to problems very similar to those faced by businesses. Additionally, working with governments or NGOs on building these solutions can often mean access to new datasets. Learning derived from those datasets can later be developed into products and services to offer to clients (even if the data itself isn't).
While it is difficult for people to agree on a vision of utopia, it is relatively easy to agree on what a "better world" might look like. The United Nations "Sustainable Development Goals," for example, are an important set of agreed-upon global priorities for the near term: these objectives (alleviation of poverty, food for all, etc.) are important for keeping society from crumbling and keeping large swaths of humanity out of misery, and they serve as common reference points for combined governmental or nonprofit initiatives. However, they don't help inform humanity as to which future scenarios we want to move toward or away from as the human condition is radically altered by technology. As artificial intelligence and neurotechnologies become more and more a part of our lives in the coming two decades, humanity will need a shared set of goals about what kinds of intelligence we develop and unleash in the world, and I suspect that failure to agree on such goals will lead to massive conflict. Given these hypotheses, I've argued that there are only two major questions that humanity must ultimately be concerned with. In the rest of this article, I'll argue that current united human efforts at prioritization are important, but incomplete in preventing conflict and maximizing the likelihood of a beneficial long-term (40-year) outcome for humanity.
The third Tallinn Digital Summit was recently held in Tallinn with a focus on AI for public value, and to mark the occasion we spoke to Dr Julien Cornebise, who leads AI for Good at Element AI. An award-winning scientist, he has worked with Amnesty International and was an early employee at DeepMind. We talked about the hype around AI, but also about all the good it could be used for given the right incentives. On working with governments, he says: "We have a government team within Element AI and we've had some requests, but with every contact we get – whether it's from NGOs, agencies or governments – we make very sure to help the people who reach out separate the hype from reality. In some cases, we've said that it's not feasible now, but maybe after a few more years of research. More generally, yes, we want to work with governments around AI for good, because the sustainable development goals are not just for NGOs."
Artificial Intelligence (AI) is on target to have a profound impact on humanity. PwC's Sizing the Prize report shows just how big a game changer AI is likely to become, potentially contributing US$15.7 trillion to the global economy by 2030, the same deadline the United Nations (UN) has set for the Sustainable Development Goals (SDGs). Now is the time to lay the foundations to harness AI's potential and mitigate its risks. Now is the time to use this technology to benefit our society and planet. At PwC, we have comprehensive expertise in the development and application of AI solutions for specific business problems and a deep understanding of the SDGs in a business context.
I. Is AI doing any good at all? Researchers, entrepreneurs, and policy-makers are increasingly using AI to tackle development challenges. In other words, using AI for a greater good is a real thing. However, it is becoming clear that AI poses as many threats as benefits, though the former are usually neglected. I do not want to get into trust, accountability, or safety issues in this short piece (if you want, there is more here), but avoiding the negative effects of AI is why incorporating a set of ethical principles into our technology development process is so paramount. Ethics plays a key role by ensuring that regulations of AI harness its potential while mitigating its risks (Taddeo and Floridi, 2018), and it would help us understand how to use the power coming from this technology responsibly.
The Ministry of Climate Change and Environment, MOCCAE, organised a special event on Monday to highlight the role of Artificial Intelligence, AI, in tackling food waste. The event saw the launch of a new AI-enabled product, 'Vision', that allows kitchens to automatically track food wastage. Food tech company Winnow launched the product, which aims to help chefs easily identify which items are routinely wasted so they can adjust their purchasing lists and menus to cut down on costs. Headed by Dr. Thani bin Ahmed Al Zeyoudi, Minister of Climate Change and Environment, and Omar bin Sultan Al Olama, Minister of State for Artificial Intelligence, the event brought together leading government and private sector entities to sign the UAE Food Waste Pledge. The ministry, in cooperation with Winnow, launched the pledge initiative in mid-2018.
We consider Social Distance Games (SDGs), that is, cluster formation games in which the utility of each agent depends only on the composition of the cluster she belongs to, being proportional to her harmonic centrality, i.e., to the average inverse distance from the other agents in the cluster. Under a non-cooperative perspective, we adopt Nash stable outcomes, in which no agent can improve her utility by unilaterally changing her coalition, as the target solution concept. Although a Nash equilibrium for an SDG can always be computed in polynomial time, we obtain a negative result concerning the game convergence, and we prove that computing a Nash equilibrium that maximizes the social welfare is NP-hard by a polynomial-time reduction from the NP-complete Restricted Exact Cover by 3-Sets problem. We then focus on the performance of Nash equilibria and provide matching upper and lower bounds of Θ(n) on the price of anarchy, where n is the number of nodes of the underlying graph. Moreover, we show that there exists a class of SDGs having a lower bound on the price of stability of 6/5 − ε, for any ε > 0. Finally, we characterize the price of stability of SDGs for graphs with girth 4 and with girth at least 5, the girth being the length of the shortest cycle in the graph.
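The utility notion and the Nash-stability concept above can be made concrete with a short sketch. The following is a minimal Python illustration, under two assumptions not fixed by the abstract: distances are measured within the subgraph induced by the cluster, and the utility is normalized by the number of other cluster members (the paper's exact normalization may differ). It computes the harmonic-centrality utility and brute-force-checks Nash stability, where a deviation is a unilateral move into an existing cluster:

```python
from collections import deque

def bfs_distances(adj, cluster, src):
    """Hop distances from src, restricted to the subgraph induced by cluster."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in cluster and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def utility(adj, cluster, agent):
    """Average inverse distance to the other cluster members
    (unreachable members contribute 0, i.e. inverse of an infinite distance)."""
    others = cluster - {agent}
    if not others:
        return 0.0
    dist = bfs_distances(adj, cluster, agent)
    return sum(1.0 / dist[v] for v in others if v in dist) / len(others)

def is_nash_stable(adj, partition):
    """True iff no agent can strictly gain by unilaterally joining another
    cluster (a singleton deviation gives utility 0 and so never beats a
    nonnegative current utility)."""
    for cluster in partition:
        for agent in cluster:
            current = utility(adj, cluster, agent)
            for other in partition:
                if other is not cluster and utility(adj, other | {agent}, agent) > current:
                    return False
    return True

# Path graph 1 - 2 - 3
adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(utility(adj, {1, 2, 3}, 1))          # 0.75  (= (1/1 + 1/2) / 2)
print(is_nash_stable(adj, [{1, 2, 3}]))    # True
print(is_nash_stable(adj, [{1}, {2, 3}]))  # False: agent 1 gains by joining {2, 3}
```

The grand coalition on the path is Nash stable because every agent's utility is positive and no other cluster exists to deviate to; splitting agent 1 off makes her utility 0, and joining {2, 3} would raise it to 0.75, so that partition is not stable.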