Godbole, Shantanu
Can a Multichoice Dataset be Repurposed for Extractive Question Answering?
Lynn, Teresa, Altakrori, Malik H., Magdy, Samar Mohamed, Das, Rocktim Jyoti, Lyu, Chenyang, Nasr, Mohamed, Samih, Younes, Aji, Alham Fikri, Nakov, Preslav, Godbole, Shantanu, Roukos, Salim, Florian, Radu, Habash, Nizar
The rapid evolution of Natural Language Processing (NLP) has favored major languages such as English, leaving a significant gap for many others due to limited resources. This is especially evident in the context of data annotation, a task whose importance cannot be overstated, but which is time-consuming and costly. Thus, any dataset for resource-poor languages is precious, in particular when it is task-specific. Here, we explore the feasibility of repurposing existing datasets for a new NLP task: we repurposed the Belebele dataset (Bandarkar et al., 2023), which was designed for multiple-choice question answering (MCQA), to enable extractive QA (EQA) in the style of machine reading comprehension. We present annotation guidelines and a parallel EQA dataset for English and Modern Standard Arabic (MSA). We also present QA evaluation results for several monolingual and cross-lingual QA pairs including English, MSA, and five Arabic dialects. Our aim is to enable others to adapt our approach for the 120+ other language variants in Belebele, many of which are deemed under-resourced. We also conduct a thorough analysis and share our insights from the process, which we hope will contribute to a deeper understanding of the challenges and opportunities associated with task reformulation in NLP research.
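The core of such a repurposing step is locating the correct multiple-choice answer as a span inside the passage, producing a SQuAD-style extractive record. A minimal sketch, assuming hypothetical field names ("passage", "question", "choices", "answer_idx") rather than Belebele's actual schema, and using a simple verbatim match where the paper relies on human annotation guidelines:

```python
def mcqa_to_eqa(item):
    """Convert one multiple-choice QA item to an extractive QA record.

    Returns a SQuAD-style dict, or None when the correct choice does
    not appear verbatim in the passage (those cases would need manual
    annotation of an equivalent answer span).
    """
    answer_text = item["choices"][item["answer_idx"]]
    start = item["passage"].find(answer_text)  # first verbatim occurrence
    if start == -1:
        return None
    return {
        "context": item["passage"],
        "question": item["question"],
        "answers": {"text": [answer_text], "answer_start": [start]},
    }

# Illustrative item (invented example, not from Belebele):
example = {
    "passage": "The Nile flows north through eleven countries.",
    "question": "In which direction does the Nile flow?",
    "choices": ["south", "north", "east", "west"],
    "answer_idx": 1,
}
record = mcqa_to_eqa(example)
```

Automatic matching like this only covers choices quoted verbatim from the passage; paraphrased choices are one reason the paper's annotation guidelines and manual effort are needed.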
Supply chain emission estimation using large language models
Jain, Ayush, Padmanaban, Manikandan, Hazra, Jagabondhu, Godbole, Shantanu, Weldemariam, Kommy
Large enterprises face a crucial imperative to achieve the Sustainable Development Goals (SDGs), especially goal 13, which focuses on combating climate change and its impacts. To mitigate the effects of climate change, reducing enterprise Scope 3 (supply chain emissions) is vital, as it accounts for more than 90% of total emission inventories. However, tracking Scope 3 emissions proves challenging, as data must be collected from thousands of upstream and downstream suppliers. To address the above mentioned challenges, we propose a first-of-a-kind framework that uses domain-adapted NLP foundation models to estimate Scope 3 emissions, by utilizing financial transactions as a proxy for purchased goods and services.