
Collaborating Authors

 Sharma, Naveen


Community Needs and Assets: A Computational Analysis of Community Conversations

arXiv.org Artificial Intelligence

A community needs assessment is a tool used by non-profits and government agencies to quantify the strengths and issues of a community, allowing them to allocate their resources better. Such approaches are transitioning towards leveraging social media conversations to analyze the needs of communities and the assets already present within them. However, manual analysis of exponentially increasing social media conversations is challenging. There is a gap in the present literature in computationally analyzing how community members discuss the strengths and needs of the community. To address this gap, we introduce the task of identifying, extracting, and categorizing community needs and assets from conversational data using sophisticated natural language processing methods. To facilitate this task, we introduce the first dataset about community needs and assets, consisting of 3,511 conversations from Reddit annotated by crowdsourced workers. Using this dataset, we evaluate an utterance-level classification model against a sentiment-classification baseline and a popular large language model (in a zero-shot setting), finding that our model outperforms both, achieving an F1 score of 94% versus 49% and 61%, respectively. Furthermore, we observe through our study that conversations about needs carry negative sentiments and emotions, while conversations about assets focus on locations and entities. The dataset is available at https://github.com/towhidabsar/CommunityNeeds.
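The comparison above rests on per-class F1 over utterance labels. A minimal self-contained sketch of that metric, using hypothetical labels (the label set and examples are illustrative, not taken from the paper's dataset):

```python
# Hypothetical utterance labels: "need", "asset", or "other".
# Per-class F1 as used to compare a classifier against baselines.

def f1_per_class(gold, pred, label):
    """Compute F1 for one label from parallel gold/predicted label lists."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = ["need", "asset", "other", "need", "asset"]
pred = ["need", "asset", "need", "need", "other"]
print(f1_per_class(gold, pred, "need"))  # ~0.8 (precision 2/3, recall 1.0)
```

Libraries such as scikit-learn provide the same metric (`sklearn.metrics.f1_score`); the manual version above just makes the precision/recall trade-off explicit.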


Infrastructure Ombudsman: Mining Future Failure Concerns from Structural Disaster Response

arXiv.org Artificial Intelligence

On January 28, 2022, at 6:39 a.m. EST, the Fern Hollow Bridge in Pittsburgh, Pennsylvania collapsed. Due to the timing of the failure, thankfully, few vehicles were on the bridge, and only ten people were injured, with no fatalities. Pittsburgh, also known as the City of Bridges, was preparing for a visit from President Biden that day. Biden visited the collapse site and, on the spot, assured federal assistance to rebuild the bridge. This infrastructural failure, coinciding with a high-profile political visit and a push towards passing the Build Back Better infrastructure bill, attracted considerable media attention to the failing infrastructural health of the US. As we sifted through the social web discussions surrounding this issue, broad themes emerged, such as words of compassion for the victims and responses typical of social web political discourse: political name-calling, conspiracy theories, and partisan mud-slinging. However, apart from these expected social web reactions, we noticed a small minority of interactions that discussed anticipated failures of other bridges in the US.


Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback

arXiv.org Artificial Intelligence

Conversational assistive robots can aid people, especially those with cognitive impairments, in accomplishing various tasks such as cooking meals, performing exercises, or operating machines. However, to interact with people effectively, robots must recognize human plans and goals from noisy observations of human actions, even when the user acts sub-optimally. Previous works on Plan and Goal Recognition (PGR) as planning have used hierarchical task networks (HTN) to model the actor/human. However, these techniques are insufficient, as they do not support user engagement via natural modes of interaction such as language. Moreover, they have no mechanism to let users, especially those with cognitive impairments, know of a deviation from their original plan or of any sub-optimal actions taken towards their goal. We propose a novel framework for plan and goal recognition in partially observable domains -- Dialogue for Goal Recognition (D4GR) -- enabling a robot to rectify its belief about human progress by asking clarification questions about noisy sensor data and sub-optimal human actions. We evaluate the performance of D4GR over two simulated domains -- kitchen and blocks. With language feedback and the world state information in a hierarchical task model, we show that at the highest sensor noise, the D4GR framework performs 1% better than HTN in goal accuracy in both domains. For plan accuracy, D4GR outperforms HTN by 4% in the kitchen domain and 2% in the blocks domain. The ALWAYS-ASK oracle outperforms our policy by 3% in goal recognition and 7% in plan recognition, but D4GR achieves this while asking 68% fewer questions than the oracle baseline. We also demonstrate a real-world robot scenario in the kitchen domain, validating the improved plan and goal recognition of D4GR in a realistic setting.
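The trade-off between the ALWAYS-ASK oracle and a selective questioning policy can be illustrated with a simple uncertainty-gated rule: ask a clarification question only when the belief over goals is sufficiently uncertain. This is an illustrative sketch, not the paper's actual D4GR policy; the goal names and entropy threshold are assumptions:

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a belief distribution over goals."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def should_ask(belief, threshold=0.9):
    # Ask a clarification question only when uncertainty over goals is high,
    # trading question count against recognition accuracy
    # (ALWAYS-ASK corresponds to threshold = 0).
    return entropy(belief) > threshold

confident = {"make_tea": 0.9, "make_coffee": 0.1}   # entropy ~0.47 bits
uncertain = {"make_tea": 0.5, "make_coffee": 0.5}   # entropy = 1.0 bit
print(should_ask(confident), should_ask(uncertain))  # prints: False True
```

A policy like this asks far fewer questions than an always-ask baseline while still intervening when observations (noisy sensors, sub-optimal actions) leave the goal ambiguous.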