DHS Seeks Public Perception of Facial Recognition, AI Use

#artificialintelligence

The Department of Homeland Security (DHS) is collecting feedback and opinions on its use of artificial intelligence (AI) and facial recognition between now and December 6. DHS has used and piloted AI-enabled technologies across several functions, such as customs and border protection, transportation security, and investigations. Earlier this year, DHS launched new shoe-scanning imaging technology to be deployed at TSA security checkpoints, aiming to improve the efficiency of airport screening and potentially eliminate the need to remove shoes and outerwear when passing through checkpoints. However, AI and facial recognition raise public controversy over bias, security, and privacy. "Understanding how the public perceives these technologies and then designing and deploying them in a manner responsive to the public's concerns is critical in gaining public support for DHS's use of these technologies," an information collection request posted to the Federal Register stated.


Empowering Local Communities Using Artificial Intelligence

arXiv.org Artificial Intelligence

Many powerful Artificial Intelligence (AI) techniques have been engineered with the goals of high performance and accuracy. Recently, AI algorithms have been integrated into diverse, real-world applications, and exploring the impact of AI on society from a people-centered perspective has become an important topic. Previous work in citizen science has identified methods of using AI to engage the public in research, such as sustaining participation, verifying data quality, classifying and labeling objects, predicting user interests, and explaining data patterns. These works investigated the challenges of how scientists design AI systems for citizens to participate in research projects at a large geographic scale in a generalizable way, such as building applications for citizens globally to complete tasks. In contrast, we are interested in an area that receives significantly less attention: how scientists co-design AI systems "with" local communities to influence a particular geographical region, as in community-based participatory projects. Specifically, this article discusses the challenges of applying AI in Community Citizen Science, a framework for creating social impact through community empowerment at an intensely place-based local scale. We provide insights into this under-explored area to connect scientific research closely to social issues and citizen needs.


Heterogeneous Ensemble for ESG Ratings Prediction

arXiv.org Artificial Intelligence

Over the past years, topics ranging from climate change to human rights have gained importance for investment decisions. Hence, investors (asset managers and asset owners) who want to incorporate these issues have started to assess companies based on how they handle such topics. For this assessment, investors rely on specialized rating agencies that issue ratings along the environmental, social, and governance (ESG) dimensions. Such ratings allow them to make investment decisions in favor of sustainability. However, rating agencies base their analysis on subjective assessments of sustainability reports, which not every company provides. Furthermore, owing to the human labor involved, rating agencies currently face the challenge of scaling up coverage in a timely manner. To alleviate these challenges and contribute to the overall goal of supporting sustainability, we propose a heterogeneous ensemble model that predicts ESG ratings from fundamental data. The model combines feedforward neural network, CatBoost, and XGBoost ensemble members. Given the public availability of fundamental data, the proposed method would allow cost-efficient and scalable creation of initial ESG ratings, including for companies without sustainability reporting. Using our approach, we explain 54% of the variation in ratings (R²) from fundamental data and outperform prior work in this area.
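
As a concrete illustration, below is a minimal sketch of such a heterogeneous ensemble for rating regression, assuming fundamental data arrives as a numeric feature matrix with numeric rating targets. The synthetic data, hyperparameters, and simple averaging of member predictions are illustrative assumptions, not the paper's published configuration.

```python
# Hypothetical sketch: heterogeneous ensemble (MLP + XGBoost + CatBoost)
# for predicting numeric ESG-style ratings from tabular fundamental data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                              # stand-in for fundamental data
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1000)  # stand-in ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

members = [
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    XGBRegressor(n_estimators=300, max_depth=4, random_state=0),
    CatBoostRegressor(iterations=300, depth=4, verbose=0, random_state=0),
]
for m in members:
    m.fit(X_tr, y_tr)

# Average the heterogeneous members' predictions into one ensemble estimate.
ensemble_pred = np.mean([m.predict(X_te) for m in members], axis=0)
print("ensemble R^2:", r2_score(y_te, ensemble_pred))
```

Averaging is the simplest combination rule; a weighted or stacked combiner fit on held-out member predictions is a common refinement.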


WHO warns against applying AI models using data from rich countries to everyone else

ZDNet

Healthcare AI systems trained on data collected from individuals in high-income countries may not perform well when deployed in low- and middle-income settings, according to new guidance published by the World Health Organization (WHO). "AI systems must be carefully designed to reflect the diversity of socioeconomic and health-care settings and be accompanied by training in digital skills, community engagement, and awareness-raising. Systems based primarily on data of individuals in high-income countries may not perform well for individuals in low- and middle-income settings," WHO wrote in its Ethics and governance of artificial intelligence for health (PDF) report. "Country investments in AI and the supporting infrastructure should therefore help to build effective health-care systems by avoiding AI that encodes biases that are detrimental to equitable provision of and access to healthcare services." WHO added that if "appropriate measures" are not taken when developing AI-based healthcare solutions, the result could be "situations where decisions that should be made by providers and patients are transferred to machines, which would undermine human autonomy", with healthcare services delivered in "unregulated contexts and by unregulated providers, which might create challenges for government oversight of health care".


Causal Learning for Socially Responsible AI

arXiv.org Artificial Intelligence

There have been increasing concerns about Artificial Intelligence (AI) due to its unfathomable potential power. To make AI address ethical challenges and shun undesirable outcomes, researchers proposed to develop socially responsible AI (SRAI). One of these approaches is causal learning (CL). Causal inference is the key to uncovering the real-world DGPs [Pearl, 2009]. In the era of big data, especially, it is possible to learn causality by leveraging both causal knowledge and the copious real-world data, i.e., causal learning (CL) [Guo et al., 2020a]. There have been growing interests seeking to improve AI's social responsibility from a CL perspective.


TechMarketView

#artificialintelligence

Atos has teamed up with French startup DreamQuark to launch a digital solution for banks and insurers dedicated to socially responsible investing (SRI). The launch of the new Sustainable Investment Brain platform coincides with the publication of proposed new European rules around transparent artificial intelligence, with which it complies. DreamQuark is a financial-services-focused AI technology specialist and a member of the Atos Scaler accelerator program. Sustainable Investment Brain leverages a variety of financial data, including ESG (environmental, social, and governance) information provided by Atos, to provide insights on potential customers and the most suitable assets and investment products. There is growing interest in responsible investing, and ESG is also an area of scrutiny for financial services institutions.


4 Artificial Intelligence Use Cases for Global Health from USAID - ICTworks

#artificialintelligence

Artificial intelligence (AI) has the potential to drive game-changing improvements for underserved communities in global health. In response, The Rockefeller Foundation and USAID partnered with the Bill and Melinda Gates Foundation to develop AI in Global Health: Defining a Collective Path Forward. Research began with a broad scan of instances where artificial intelligence is being used, tested, or considered in healthcare, resulting in a catalogue of over 240 examples. One such grouping involves tools that leverage AI to monitor and assess population health and to select and target public health interventions based on AI-enabled predictive analytics. It includes AI-driven data processing methods that map the spread and burden of disease, with AI predictive analytics then used to project the future spread of existing and potential outbreaks.


Crowdsourcing Parallel Corpus for English-Oromo Neural Machine Translation using Community Engagement Platform

arXiv.org Artificial Intelligence

Even though Afaan Oromo, with more than fifty million speakers in the Horn and East Africa, is the most widely spoken language in the Cushitic family, it is surprisingly resource-scarce from a technological point of view. The growing number of useful documents written in English motivates building machine translation that can make those documents easily accessible in the local language. This paper implements translation between English and Afaan Oromo, in both directions, using Neural Machine Translation. Such translation has not been well explored to date because of the limited size and diversity of available corpora. However, using a bilingual corpus of just over 40k sentence pairs that we have collected, this study shows promising results. About a quarter of this corpus was collected via a Community Engagement Platform (CEP) implemented to enrich the parallel corpus through crowdsourced translations.
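
For intuition about the modeling side, here is a minimal sketch of a Transformer-based NMT training loop over a toy list of English-Afaan Oromo sentence pairs. The sample pairs, word-level tokenization, and hyperparameters are hypothetical, and positional encodings are omitted for brevity; the study's actual architecture and preprocessing may differ.

```python
# Hypothetical sketch: tiny Transformer seq2seq trained on toy sentence pairs.
import torch
import torch.nn as nn

PAD, BOS, EOS = 0, 1, 2
special = {"<pad>": PAD, "<bos>": BOS, "<eos>": EOS}

def encode(sentence, vocab):
    # Word-level token ids with BOS/EOS; unseen words get fresh ids.
    return [BOS] + [vocab.setdefault(w, len(vocab)) for w in sentence.split()] + [EOS]

# Toy stand-in for the crowdsourced parallel corpus (hypothetical pairs).
pairs = [("thank you", "galatoomaa"), ("come here", "as kottu")]
src_vocab, tgt_vocab = dict(special), dict(special)
data = [(torch.tensor(encode(s, src_vocab)), torch.tensor(encode(t, tgt_vocab)))
        for s, t in pairs]

d_model = 64
model = nn.Transformer(d_model=d_model, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, dim_feedforward=128, batch_first=True)
src_emb = nn.Embedding(len(src_vocab), d_model)
tgt_emb = nn.Embedding(len(tgt_vocab), d_model)
out_proj = nn.Linear(d_model, len(tgt_vocab))
params = (list(model.parameters()) + list(src_emb.parameters())
          + list(tgt_emb.parameters()) + list(out_proj.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

for epoch in range(10):
    for src, tgt in data:
        src, tgt = src.unsqueeze(0), tgt.unsqueeze(0)  # batch of one
        tgt_in, tgt_out = tgt[:, :-1], tgt[:, 1:]      # teacher forcing
        causal = model.generate_square_subsequent_mask(tgt_in.size(1))
        h = model(src_emb(src), tgt_emb(tgt_in), tgt_mask=causal)
        loss = loss_fn(out_proj(h).transpose(1, 2), tgt_out)
        opt.zero_grad(); loss.backward(); opt.step()
```

With only ~40k pairs, subword tokenization and regularization matter considerably more than model depth; this sketch shows the training-loop shape only.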


Microsoft's sustainability report is a lot more interesting as a 'Minecraft' map

Engadget

Let's face it: sustainability reports are important, but they're usually quite dry reads. Microsoft might have a way to reel you in, however. According to The Verge, Microsoft has released a free Minecraft map that brings the goals of its latest sustainability report to life. "Sustainability City" lets you walk through eco-friendly food production, tour an energy-efficient home and explore concepts ranging from alternative energy to water outflow. You can find the map in the Minecraft Marketplace's "Education Collection," and six lessons are available through Minecraft: Education Edition for teachers who want to share those environmental goals.


What Big Tech and Big Tobacco research funding have in common

#artificialintelligence

Amid declining sales and mounting evidence that smoking causes lung cancer, tobacco companies in the 1950s undertook PR campaigns to reinvent themselves as socially responsible and to shape public opinion. They also started funding research into the relationship between health and tobacco. Now, Big Tech companies like Amazon, Facebook, and Google are following the same playbook to fund AI ethics research in academia, according to a recently published paper by University of Toronto Centre for Ethics PhD student Mohamed Abdalla and Harvard Medical School student Moustafa Abdalla. The coauthors conclude that effective solutions to the problem will need to come from institutional or governmental policy changes. The Abdalla brothers argue that Big Tech companies aren't just involved in, but are leading, ethics discussions in academic settings.