If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Yandex is basically the Google of Russia. The Russian technology company has been working on self-driving vehicles since 2017. It also partnered with American firm Uber to form a ridesharing and food-delivery joint venture. On Friday, the two companies announced they're spinning off the autonomous-vehicle portion of the business as a separate entity. Once the financial dust settles, the unimaginatively named Yandex Self Driving Group, or SDG, will be directly owned by both businesses, with Yandex holding about 73% of SDG and Uber around 19%.
Generative adversarial networks (GANs) have attracted intense interest in the field of generative models. However, few investigations have been reported that focus on either the theoretical analysis or the algorithm design for the approximation ability of the generator in GANs. This paper first theoretically analyzes GANs' approximation property. Analogous to the universal approximation property of fully connected neural networks with one hidden layer, we prove that the generator with the input latent variable in GANs can universally approximate the underlying data distribution as the number of hidden neurons increases. Furthermore, we propose an approach named stochastic data generation (SDG) to enhance GANs' approximation ability. Our approach is based on the simple idea of imposing randomness on the data generation in GANs through a prior distribution on the conditional probability between the layers, and it can be easily implemented using the reparameterization trick. Experimental results on a synthetic dataset verify the improved approximation ability obtained by the SDG approach. On practical datasets, NSGAN/WGAN-GP with SDG also outperforms traditional GANs with little change to the model architecture.
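The reparameterization trick mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the Gaussian prior and function names below are illustrative assumptions, showing only how randomness is isolated in a noise variable so the sample stays differentiable with respect to the distribution parameters:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, with eps ~ N(0, I).

    Because the randomness lives entirely in eps, z is a deterministic
    (hence differentiable) function of mu and log_var -- this is the
    reparameterization trick used to train through a stochastic layer.
    """
    eps = rng.standard_normal(mu.shape)
    sigma = np.exp(0.5 * log_var)
    return mu + sigma * eps

rng = np.random.default_rng(0)
mu = np.zeros(3)
log_var = np.log(np.full(3, 4.0))  # i.e. sigma = 2 in every dimension
samples = np.stack([reparameterize(mu, log_var, rng) for _ in range(10_000)])
# The empirical mean and std of the samples should approach 0 and 2.
```

In a GAN layer this would let gradients flow from the discriminator loss back through the sampled activations to the parameters that produced `mu` and `log_var`.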
The main goal of this research is to produce useful software for the United Nations (UN) that could help speed up the process of classifying UN documents according to the Sustainable Development Goals (SDGs), in order to monitor progress at the world level in the fight against poverty, discrimination, and climate change. Human labeling of UN documents would be a daunting task given the size of the corpus involved, so automatic labeling must be adopted, at least as the first step of a multi-phase process, to reduce the overall effort of cataloguing and classifying. Deep Learning (DL) is nowadays one of the most powerful state-of-the-art (SOTA) AI tools for this task, but it very often comes at the cost of an expensive and error-prone preparation of a training set. For multi-label classification of domain-specific text, it seems that we cannot effectively adopt DL without a large enough domain-specific training set. In this paper, we show that this is not always true. We propose a novel method that is able, through statistics like TF-IDF, to exploit pre-trained SOTA DL models (such as the Universal Sentence Encoder) without any need for traditional transfer learning or any other expensive training procedure. We show the effectiveness of our method in a legal context by classifying UN Resolutions according to their most related SDGs.
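The core idea of training-free labeling, matching each document against SDG descriptions by TF-IDF similarity rather than fitting a classifier, can be sketched in plain Python. The SDG descriptions, tokenizer, and threshold below are illustrative assumptions (the paper additionally leverages pre-trained sentence encoders), so treat this only as a sketch of the TF-IDF matching step:

```python
import math
from collections import Counter

# Toy SDG label descriptions (illustrative, not the official UN texts).
SDG_DESCRIPTIONS = {
    "SDG 1": "end poverty in all its forms everywhere",
    "SDG 13": "take urgent action to combat climate change and its impacts",
}

def tf_idf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a list of token lists."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def label_resolution(text, threshold=0.1):
    """Assign every SDG whose description is similar enough to the text."""
    corpus = [d.split() for d in SDG_DESCRIPTIONS.values()]
    corpus.append(text.lower().split())
    vecs = tf_idf_vectors(corpus)
    doc_vec = vecs[-1]
    return [sdg for sdg, v in zip(SDG_DESCRIPTIONS, vecs)
            if cosine(doc_vec, v) >= threshold]

labels = label_resolution("resolution on action to combat climate change")
```

Because labels are assigned by thresholded similarity rather than a trained decision boundary, a document can receive several SDGs at once, which is exactly the multi-label behavior the task requires.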
The emergence of artificial intelligence (AI) is shaping an increasing range of sectors. For instance, AI is expected to affect global productivity [1], equality and inclusion [2], environmental outcomes [3], and several other areas, both in the short and long term [4]. Reported potential impacts of AI indicate both positive [5] and negative [6] effects on sustainable development. However, to date, there is no published study systematically assessing the extent to which AI might impact all aspects of sustainable development, defined in this study as the 17 Sustainable Development Goals (SDGs) and 169 targets internationally agreed in the 2030 Agenda for Sustainable Development [7]. This is a critical research gap, as we find that AI may influence the ability to meet all SDGs.
The AI Youth Lab is an initiative by 1M1B in collaboration with the United Nations Sustainable Development Goals (SDGs) for the year 2030. The organisation, because of its association with various UN bodies, has also been able to take the students it works with to UN headquarters in New York to attend, as well as address, international intergovernmental sessions. Manav Subodh, the co-founder of 1M1B, who is currently out in the countryside working with the program's rural schools, says, "We've been working with youth around the country for the last four years, and while previously we encouraged our student participants to work on projects which can positively impact the lives of at least 10 people (no small thing in itself), we realised that by using Artificial Intelligence, we could scale up that impact by multiple factors." Subodh intends to set up over 50 labs in schools across India and abroad over the course of 2020, saying, "We provide the infrastructure, training, and all other facilities for our partner schools, so that it doesn't cost them anything. This is an especially important factor for rural schools. In fact, we're also organising mobile AI labs, which can travel between hard-to-reach villages, so as to have maximum reach."
The numbers inside the colored squares represent each of the SDGs (see Supplementary Data 1). The percentages at the top indicate the proportion of all targets potentially affected by AI, and those in the inner circle of the figure correspond to proportions within each SDG. The results for the three main groups, namely Society, Economy, and Environment, are also shown in the outer circle of the figure. The results obtained when the type of evidence is taken into account are shown by the inner shaded area and the values in brackets.
Researchers, entrepreneurs, and policy-makers are increasingly using AI to tackle development challenges. In other words, using AI for a greater good is a real thing. However, it is becoming clear that AI poses as many threats as benefits, although the threats are usually neglected. I do not want to get into trust, accountability, or safety issues in this short piece (if you want, there is more here), but avoiding the negative effects of AI is why incorporating a set of ethical principles into our technology development process is so paramount. Ethics plays a key role by ensuring that regulations of AI harness its potential while mitigating its risks (Taddeo and Floridi, 2018), and it would help us understand how to responsibly use the power coming from this technology.
Artificial Intelligence (AI) has been put forward as a potential solution for many of the gravest problems facing society, from the opioid crisis to poverty and famine. But although technology clearly has the potential to do a great deal of good, there's a sound business reason that tech companies often pour large amounts of resources into social projects that don't seem to align with their core business of selling software and services. This is down to the fact that tackling social issues often involves developing solutions to problems very similar to those faced by businesses. Additionally, working with governments or NGOs on building these solutions can often mean access to new datasets. Learning derived from these datasets can later be developed into products and services to offer to clients (even if the data itself isn't).
While it is difficult for people to agree on a vision of utopia, it is relatively easy to agree on what a "better world" might look like. The United Nations "Sustainable Development Goals," for example, are an important set of agreed-upon global priorities in the near-term: These objectives (alleviation of poverty, food for all, etc.) are important to keep society from crumbling and to lift large swaths of humanity out of misery, and they serve as common reference points for combined governmental or nonprofit initiatives. However, they don't help inform humanity as to which future scenarios we want to move toward or away from as the human condition is radically altered by technology. As artificial intelligence and neurotechnologies become more and more a part of our lives in the coming two decades, humanity will need a shared set of goals about what kinds of intelligence we develop and unleash in the world, and I suspect that failure to do so will lead to massive conflict. Given these hypotheses, I've argued that there are only two major questions that humanity must ultimately be concerned with. In the rest of this article, I'll argue that current united human efforts at prioritization are important, but incomplete in preventing conflict and maximizing the likelihood of a beneficial long-term (40 year) outcome for humanity.