
Alessandro Ferrari on LinkedIn: #AI #artificialintelligence #machinelearning

#artificialintelligence

This work exploits a large source domain for pretraining and transfers the diversity information from source to target. Highlights: an anchor-based strategy for realism over regions of the latent space; a novel cross-domain distance consistency loss; existing models can be leveraged to model new distributions with less data. Extensive results demonstrate, qualitatively and quantitatively, that this few-shot model automatically discovers correspondences between source and target domains and generates more diverse and realistic images than previous methods.
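The post names a cross-domain distance consistency loss as the key ingredient. One plausible reading of such a loss, sketched below in PyTorch purely as an illustration (this is not the paper's code), is to match the pairwise similarity structure of features produced by the pretrained source generator and the adapted target generator for the same latent codes.

```python
# Minimal sketch of a cross-domain distance consistency loss, assuming we have
# feature batches from a pretrained source generator and an adapted target
# generator for the *same* noise vectors. Not the paper's implementation.
import torch
import torch.nn.functional as F

def distance_consistency_loss(src_feats: torch.Tensor,
                              tgt_feats: torch.Tensor) -> torch.Tensor:
    """src_feats, tgt_feats: (batch, dim) features for identical latents."""
    # Pairwise cosine similarities within each batch.
    src_sim = F.cosine_similarity(src_feats.unsqueeze(1), src_feats.unsqueeze(0), dim=-1)
    tgt_sim = F.cosine_similarity(tgt_feats.unsqueeze(1), tgt_feats.unsqueeze(0), dim=-1)
    # Mask the diagonal; self-similarity carries no information.
    n = src_feats.size(0)
    mask = ~torch.eye(n, dtype=torch.bool)
    src_sim = src_sim[mask].view(n, n - 1)
    tgt_sim = tgt_sim[mask].view(n, n - 1)
    # Encourage the target's similarity distribution to match the source's.
    return F.kl_div(F.log_softmax(tgt_sim, dim=-1),
                    F.softmax(src_sim, dim=-1),
                    reduction="batchmean")

# Hypothetical usage, assuming generator wrappers that expose features:
#   loss = distance_consistency_loss(source_G.features(z), target_G.features(z))
```

The intuition is that relative distances between generated samples, which encode the source domain's diversity, are preserved while the few target examples shape the appearance.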


Using artificial intelligence to manage extreme weather events

#artificialintelligence

McGill study aims to make social media contributions more useful to crisis managers

Can combining deep learning (DL), a subfield of artificial intelligence, with social network analysis (SNA) make social media contributions about extreme weather events a useful tool for crisis managers, first responders and government scientists? An interdisciplinary team of McGill researchers has brought these tools to the forefront in an effort to understand and manage extreme weather events. The researchers found that by using a noise reduction mechanism, valuable information could be filtered from social media to better assess trouble spots and gauge users’ reactions to extreme weather events. The results of the study are published in the Journal of Contingencies and Crisis Management.

Diving into a sea of information

“We reduced the noise by finding out who was being listened to, and which were authoritative sources,” explains Renee Sieber, Associate Professor in McGill’s Department of Geography and lead author of this study. “This ability is important because it is quite difficult to assess the validity of the information shared by Twitter users.” The team based their study on Twitter data from the March 2019 Nebraska floods in the United States, which caused over $1 billion in damage and widespread evacuations of residents. In total, over 1,200 tweets were analyzed and classified.

“Social network analysis can identify where people get their information during an extreme weather event. Deep learning allows us to better understand the content of this information by classifying thousands of tweets into fixed categories, for example, ‘infrastructure and utilities damage’ or ‘sympathy and emotional support’,” says Sieber. The researchers then introduced a two-tiered DL classification model, a first in terms of integrating these methods in a way that could be useful to crisis managers.

The study highlighted some issues with using social media analysis for this purpose, notably that events are far more contextual than labelled datasets such as CrisisNLP expect, and that there is no universal language for categorizing terms related to crisis management. The researchers’ preliminary exploration also found that a celebrity call-out featured prominently: in the 2019 Nebraska floods, a tweet from pop singer Justin Timberlake was shared by a large number of users, though it did not prove to be of use for crisis managers. “Our findings tell us that information content varies between different types of events, contrary to the belief that there is a universal language to categorize crisis management; this limits the use of labelled datasets to just a few types of events, as search terms may change from one event to another.”

“The vast amount of social media data the public contributes about weather suggests it can provide critical information in crises, such as snowstorms, floods, and ice storms. We are currently exploring transferring this model to different types of weather crises and addressing the shortcomings of existing supervised approaches by combining these with other methods,” says Sieber.

About this study

“Using deep learning and social network analysis to understand and manage extreme flooding” by Renee Sieber et al. was published in the Journal of Contingencies and Crisis Management. This study was funded by Environment Canada.
About McGill University

Founded in Montreal, Quebec, in 1821, McGill University is Canada’s top-ranked medical doctoral university. McGill is consistently ranked as one of the top universities, both nationally and internationally. It is a world-renowned institution of higher learning with research activities spanning two campuses, 11 faculties, 13 professional schools, 300 programs of study and over 40,000 students, including more than 10,200 graduate students. McGill attracts students from over 150 countries around the world, its 12,800 international students making up 31 per cent of the student body. Over half of McGill students claim a first language other than English, including approximately 19% of our students who say French is their mother tongue.
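To make the two-tiered classification idea concrete, here is a toy sketch, not the study's model: a first classifier filters crisis-relevant tweets and a second assigns one of the fixed categories. The example tweets and labels are invented; only the two quoted category names come from the article.

```python
# Illustrative two-tiered tweet classifier (not the McGill study's model).
# Tier 1 filters crisis-relevant tweets; tier 2 assigns a fixed category.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Bridge on Highway 75 washed out, road closed",
    "Sending thoughts and prayers to everyone in Nebraska",
    "Check out my new song!",
    "Power outage reported across three counties after levee breach",
]
relevant = [1, 1, 0, 1]  # tier 1 labels: crisis-relevant or not
category = ["infrastructure and utilities damage",
            "sympathy and emotional support",
            "infrastructure and utilities damage"]  # tier 2, relevant tweets only

tier1 = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(tweets, relevant)
tier2 = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(
    [t for t, r in zip(tweets, relevant) if r], category)

new_tweet = "Flood water entering substations, crews dispatched"
if tier1.predict([new_tweet])[0]:
    print(tier2.predict([new_tweet])[0])
```

The study used deep learning models rather than this linear baseline; the sketch only shows how the two tiers compose.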


Deep Sensing of Urban Waterlogging

arXiv.org Artificial Intelligence

In the monsoon season, sudden flood events occur frequently in urban areas, hampering social and economic activities and threatening infrastructure and lives. An efficient large-scale waterlogging sensing and information system can provide valuable real-time disaster information to facilitate disaster management and enhance public awareness, alleviating losses during and after flood disasters. Therefore, in this study, a visual sensing approach driven by deep neural networks and information and communication technology was developed to provide an end-to-end mechanism for waterlogging sensing and event-location mapping. The system was demonstrated during the monsoon season in Taiwan, where waterlogging events were predicted at the island-wide scale. It could sense approximately 2,379 vision sources through an internet-of-video-things framework and transmit the event-location information within 5 min. The proposed approach can sense waterlogging events at a national scale and provides an efficient and highly scalable alternative to conventional waterlogging sensing methods.
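As a rough illustration of what such a visual sensing loop involves (this is not the authors' system; the camera registry and the fetch_frame helper are assumptions), a binary waterlogging classifier can be run over frames from many public camera feeds and positive detections mapped to event locations:

```python
# Minimal sketch, not the authors' system: score frames from many camera
# feeds with a binary waterlogging / no-waterlogging CNN and emit
# event-location records. The camera registry and fetch_frame() are assumed.
import torch
import torchvision.models as models
import torchvision.transforms as T
from datetime import datetime, timezone

# Binary classifier: ResNet-18 backbone with a 2-class head (weights would
# come from training on labelled flooded / dry street images).
model = models.resnet18(num_classes=2)
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def sense(cameras, fetch_frame):
    """cameras: iterable of dicts with 'id', 'lat', 'lon'.
    fetch_frame: callable returning a PIL image for a camera id (assumed)."""
    events = []
    for cam in cameras:
        x = preprocess(fetch_frame(cam["id"])).unsqueeze(0)
        with torch.no_grad():
            flooded = model(x).softmax(dim=-1)[0, 1].item() > 0.5
        if flooded:
            events.append({"camera": cam["id"], "lat": cam["lat"],
                           "lon": cam["lon"],
                           "time": datetime.now(timezone.utc).isoformat()})
    return events
```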


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


"Short is the Road that Leads from Fear to Hate": Fear Speech in Indian WhatsApp Groups

arXiv.org Artificial Intelligence

WhatsApp is the most popular messaging app in the world. Owing to its popularity, WhatsApp has become a powerful and cheap tool for political campaigning; it was widely used during the 2019 Indian general election to connect with voters on a large scale. Alongside the campaigning, there have been reports that WhatsApp has also become a breeding ground for harmful speech against various protected groups and religious minorities. Many such messages attempt to instil fear among the population about a specific (minority) community. According to research on inter-group conflict, such `fear speech' messages could have a lasting impact and might lead to real offline violence. In this paper, we perform the first large-scale study of fear speech across thousands of public WhatsApp groups discussing politics in India. We curate a new dataset and characterize fear speech from it. We observe that users writing fear speech messages use various events and symbols to create fear in the reader about a target community. We build models to classify fear speech and observe that current state-of-the-art NLP models do not perform well at this task. Fear speech messages tend to spread faster and could go undetected by classifiers built to detect traditional toxic speech because they are less overtly toxic. Finally, using a novel methodology to target users with Facebook ads, we conduct a survey among the users of these WhatsApp groups to understand the types of users who consume and share fear speech. We believe this work opens up research questions that are very different from tackling hate speech, which the research community has traditionally focused on.
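The claim that fear speech spreads faster rests on measuring how quickly a message propagates across groups. A hedged sketch of that kind of analysis is shown below; the DataFrame columns and the 24-hour window are assumptions for illustration, not the paper's actual schema or metric.

```python
# Sketch of a spread analysis: for each deduplicated message, count how many
# distinct groups it reaches within 24 h of its first appearance, then compare
# fear speech against other messages. Column names are invented.
import pandas as pd

msgs = pd.DataFrame({
    "message_hash": ["a", "a", "a", "b", "b"],
    "group_id":     ["g1", "g2", "g3", "g1", "g1"],
    "timestamp": pd.to_datetime(["2019-01-01 10:00", "2019-01-01 12:00",
                                 "2019-01-02 09:00", "2019-01-01 11:00",
                                 "2019-01-05 08:00"]),
    "is_fear_speech": [True, True, True, False, False],
})

def groups_reached_in_24h(df):
    first = df["timestamp"].min()
    within = df[df["timestamp"] <= first + pd.Timedelta(hours=24)]
    return within["group_id"].nunique()

spread = (msgs.groupby(["message_hash", "is_fear_speech"])
              .apply(groups_reached_in_24h)
              .groupby("is_fear_speech").mean())
print(spread)  # mean groups reached within 24 h, fear vs non-fear speech
```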


Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits

arXiv.org Artificial Intelligence

While algorithm audits are growing rapidly in commonality and public importance, relatively little scholarly work has gone toward synthesizing prior work and strategizing future research in the area. This systematic literature review aims to do just that, following PRISMA guidelines in a review of over 500 English articles that yielded 62 algorithm audit studies. The studies are synthesized and organized primarily by behavior (discrimination, distortion, exploitation, and misjudgement), with codes also provided for domain (e.g. search, vision, advertising, etc.), organization (e.g. Google, Facebook, Amazon, etc.), and audit method (e.g. sock puppet, direct scrape, crowdsourcing, etc.). The review shows how previous audit studies have exposed public-facing algorithms exhibiting problematic behavior, such as search algorithms culpable of distortion and advertising algorithms culpable of discrimination. Based on the studies reviewed, it also suggests some behaviors (e.g. discrimination on the basis of intersectional identities), domains (e.g. advertising algorithms), methods (e.g. code auditing), and organizations (e.g. Twitter, TikTok, LinkedIn) that call for future audit attention. The paper concludes by offering the common ingredients of successful audits, and discussing algorithm auditing in the context of broader research working toward algorithmic justice.
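One of the audit methods catalogued above is the sock puppet audit, in which the same queries are issued from differently configured profiles and the returned results are compared. The skeleton below is a generic illustration, not drawn from any reviewed study; get_ranked_results is assumed to wrap whatever platform access the auditor has (browser automation, an official API, a direct scrape).

```python
# Generic skeleton of a sock-puppet style audit: compare the top-k results
# that differently configured profiles receive for the same queries.
from itertools import combinations

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def audit(profiles, queries, get_ranked_results, k=10):
    """Return, per query, the pairwise top-k overlap between profiles."""
    report = {}
    for q in queries:
        results = {p: get_ranked_results(p, q)[:k] for p in profiles}
        report[q] = {
            (p1, p2): jaccard(results[p1], results[p2])
            for p1, p2 in combinations(profiles, 2)
        }
    return report

# Low overlap between profiles that differ only in a sensitive attribute is a
# signal worth investigating for personalization-driven distortion.
```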


Deriving the Traveler Behavior Information from Social Media: A Case Study in Manhattan with Twitter

arXiv.org Machine Learning

Social media platforms such as Twitter provide a new perspective on traffic problems and are anticipated to complement traditional methods. Geo-tagged tweets provide Twitter users' location information and are being applied in traveler behavior analysis. This paper explores the full potential of Twitter for deriving travel behavior information and conducts a case study in the Manhattan area. A systematic method is proposed to extract displacement information from Twitter locations. Our study shows that Twitter has a unique demographic that combines not only local residents but also tourists and passengers. For an individual user, Twitter can uncover his or her travel behavior features, including time-of-day and location distributions on both weekdays and weekends. For all Twitter users, the aggregated travel behavior results also show that time-of-day travel patterns on Manhattan Island resemble those of the traffic flow; identification of origin-destination (OD) patterns is also promising when compared with travel survey results.
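A minimal sketch of the displacement-extraction step described above is given below. It is not the paper's pipeline; the (user_id, timestamp, lat, lon) tuple schema and the 100 m noise threshold are assumptions.

```python
# Sketch: sort each user's geo-tagged tweets by time and take the great-circle
# distance between consecutive locations as a displacement.
from math import radians, sin, cos, asin, sqrt
from collections import defaultdict

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def displacements(tweets):
    """tweets: iterable of (user_id, timestamp, lat, lon) tuples (assumed schema).
    Returns (user_id, t_start, t_end, distance_km) for consecutive tweet pairs."""
    by_user = defaultdict(list)
    for user, ts, lat, lon in tweets:
        by_user[user].append((ts, lat, lon))
    trips = []
    for user, pts in by_user.items():
        pts.sort()
        for (t0, la0, lo0), (t1, la1, lo1) in zip(pts, pts[1:]):
            d = haversine_km(la0, lo0, la1, lo1)
            if d > 0.1:  # ignore GPS jitter below roughly 100 m
                trips.append((user, t0, t1, d))
    return trips
```

Aggregating the resulting trips by hour of day, or by origin and destination zone, yields the time-of-day and OD patterns discussed in the abstract.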


Privacy Information Classification: A Hybrid Approach

arXiv.org Artificial Intelligence

A large amount of information is published to online social networks (OSNs) every day, and individuals' privacy-related information may be disclosed unintentionally by end-users. Identifying privacy-related data and protecting online social network users from privacy leakage are therefore significant. Under such a motivation, this study aims to propose and develop a hybrid privacy classification approach to detect and classify privacy information from OSNs. The proposed hybrid approach employs both deep learning models and ontology-based models for privacy-related information extraction. Extensive experiments are conducted to validate the proposed hybrid approach, and the empirical results demonstrate its superiority in assisting online social network users against privacy leakage.
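To illustrate how an ontology-based matcher and a learned classifier can be combined, consider the toy sketch below. It is not the authors' implementation: the categories, indicative terms, and training posts are invented, and a linear model stands in for the deep learning component. Ontology hits take priority; the classifier handles posts the ontology misses.

```python
# Toy hybrid privacy classifier: ontology-style keyword matcher plus a
# learned text classifier as fallback.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-written "ontology": privacy category -> indicative terms.
ONTOLOGY = {
    "health": {"diagnosis", "prescription", "hospital"},
    "location": {"home address", "live at", "gps"},
    "finance": {"salary", "credit card", "iban"},
}

def ontology_match(post: str):
    post = post.lower()
    return [cat for cat, terms in ONTOLOGY.items()
            if any(t in post for t in terms)]

# Learned fallback on toy data (real training data would be labelled OSN posts).
posts = ["just got my prescription refilled", "bought a new phone today",
         "my salary finally came through", "we live at the corner of 5th"]
labels = ["health", "none", "finance", "location"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(posts, labels)

def classify(post: str):
    hits = ontology_match(post)
    return hits if hits else [clf.predict([post])[0]]

print(classify("sharing my gps track from this morning"))  # ['location']
```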


New study finds tech elites view the world as more meritocratic

ZDNet

A new study has revealed that while the top 100 richest people in tech share similar views to other wealthy people, they are also more focused on meritocracy. The research, published in PLOS One, used data sets based on tweets by these individuals who were named by Forbes as the top 100 richest people in the tech world, plus their statements on websites about their philanthropic endeavours. As part of the study, the researchers analysed 49,790 tweets from 30 verified Twitter account holders within the tech elite subject group and 60 mission statements from tech elite-run philanthropic websites, plus 17 statements from tech elites and other wealthy individuals not associated with the tech world for comparison purposes. The Twitter text analyses, according to the research, revealed tech elites used Twitter to tweet about subjects that placed emphasis on disruption, positivity, and temporality compared with the average user. Their most frequently used words were 'new' and 'great', and referred mostly to their peers and other tech firms. At the same time, the authors found that while tweets showed the tech elites did not see a significant difference between power and money or power and democracy, they did note the tech elites denied a connection between democracy and money, a view that was not shared by ordinary Twitter users.
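For reference, the word-frequency part of such a text analysis is straightforward; the snippet below is a trivial, illustrative version (not the study's code) that counts the most common content words across a set of tweets.

```python
# Count the most common content words across a set of tweets.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "to", "of", "and", "is", "in", "for", "on"}

def top_words(tweets, n=10):
    words = []
    for t in tweets:
        words += [w for w in re.findall(r"[a-z']+", t.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

print(top_words(["Great new launch from our team!",
                 "New milestone, great work everyone"]))
```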


Machine Learning Towards Intelligent Systems: Applications, Challenges, and Opportunities

arXiv.org Artificial Intelligence

The emergence of, and continued reliance on, the Internet and related technologies have resulted in the generation of large amounts of data that can be made available for analysis. However, humans do not possess the cognitive capability to understand such large amounts of data. Machine learning (ML) provides a mechanism for humans to process large amounts of data, gain insights about the behavior of the data, and make more informed decisions based on the resulting analysis. ML has applications in various fields. This review focuses on some of those fields and applications, such as education, healthcare, network security, banking and finance, and social media. Within these fields there are multiple unique challenges, and ML can provide solutions to them as well as create further research opportunities. Accordingly, this work surveys some of the challenges facing the aforementioned fields and presents some of the previous works that tackled them. Moreover, it suggests several research opportunities that would benefit from the use of ML to address these challenges.