
The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems

Ren, Richard, Agarwal, Arunim, Mazeika, Mantas, Menghini, Cristina, Vacareanu, Robert, Kenstler, Brad, Yang, Mick, Barrass, Isabelle, Gatti, Alice, Yin, Xuwang, Trevino, Eduardo, Geralnik, Matias, Khoja, Adam, Lee, Dean, Yue, Summer, Hendrycks, Dan

arXiv.org Artificial Intelligence

As large language models (LLMs) become more capable and agentic, the requirement for trust in their outputs grows significantly, yet at the same time concerns have been mounting that models may learn to lie in pursuit of their goals. To address these concerns, a body of work has emerged around the notion of "honesty" in LLMs, along with interventions aimed at mitigating deceptive behaviors. However, evaluations of honesty are currently highly limited, with no benchmark combining large scale and applicability to all models. Moreover, many benchmarks claiming to measure honesty in fact simply measure accuracy--the correctness of a model's beliefs--in disguise. In this work, we introduce a large-scale human-collected dataset for measuring honesty directly, allowing us to disentangle accuracy from honesty for the first time. Across a diverse set of LLMs, we find that while larger models obtain higher accuracy on our benchmark, they do not become more honest. Surprisingly, while most frontier LLMs obtain high scores on truthfulness benchmarks, we find a substantial propensity in frontier LLMs to lie when pressured to do so, resulting in low honesty scores on our benchmark. We find that simple methods, such as representation engineering interventions, can improve honesty. These results underscore the growing need for robust evaluations and effective interventions to ensure LLMs remain trustworthy.


Satyrn: A Platform for Analytics Augmented Generation

Sterbentz, Marko, Barrie, Cameron, Shahi, Shubham, Dutta, Abhratanu, Hooshmand, Donna, Pack, Harper, Hammond, Kristian J.

arXiv.org Artificial Intelligence

Large language models (LLMs) are capable of producing documents, and retrieval augmented generation (RAG) has shown itself to be a powerful method for improving accuracy without sacrificing fluency. However, not all information can be retrieved from text. We propose an approach that uses the analysis of structured data to generate fact sets that are used to guide generation in much the same way that retrieved documents are used in RAG. This analytics augmented generation (AAG) approach supports the ability to utilize standard analytic techniques to generate facts that are then converted to text and passed to an LLM. We present a neurosymbolic platform, Satyrn, that leverages AAG to produce accurate, fluent, and coherent reports grounded in large scale databases. In our experiments, we find that Satyrn generates reports in which over 86% of claims are accurate, while maintaining high levels of fluency and coherence even when using smaller language models such as Mistral-7B; by comparison, just 57% of claims produced by GPT-4 Code Interpreter are accurate.
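The AAG idea can be sketched in a few lines. This is a rough illustration, not Satyrn's actual implementation: the sales table, the queries, and the fact templates are invented. Analytic queries over structured data produce a fact set, which is converted to text and placed in the prompt much as retrieved documents are in RAG.

```python
import sqlite3

def build_fact_set(conn):
    # Standard analytic queries over structured data produce grounded facts.
    cur = conn.cursor()
    facts = []
    (total,) = cur.execute("SELECT SUM(amount) FROM sales").fetchone()
    facts.append(f"Total sales amount is {total}.")
    top_region, top_amount = cur.execute(
        "SELECT region, SUM(amount) AS s FROM sales "
        "GROUP BY region ORDER BY s DESC LIMIT 1").fetchone()
    facts.append(f"The highest-grossing region is {top_region} ({top_amount}).")
    return facts

def make_prompt(facts):
    # Facts are converted to text and passed to the LLM, analogous to
    # retrieved documents in RAG.
    return ("Write a short report using only these facts:\n"
            + "\n".join(f"- {f}" for f in facts))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.0), ("north", 60.0)])
facts = build_fact_set(conn)
prompt = make_prompt(facts)
```

The key design point is that the generation model never touches the database: it only sees facts that were computed symbolically, which is what keeps the resulting report grounded.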


Poverty rate prediction using multi-modal survey and earth observation data

Fobi, Simone, Cardona, Manuel, Collins, Elliott, Robinson, Caleb, Ortiz, Anthony, Sederholm, Tina, Dodhia, Rahul, Ferres, Juan Lavista

arXiv.org Artificial Intelligence

This work presents an approach for combining household demographic and living standards survey questions with features derived from satellite imagery to predict the poverty rate of a region. Our approach utilizes visual features obtained from a single-step featurization method applied to freely available 10m/px Sentinel-2 surface reflectance satellite imagery. These visual features are combined with ten survey questions in a proxy means test (PMT) to estimate whether a household is below the poverty line. We show that the inclusion of visual features reduces the mean error in poverty rate estimates from 4.09% to 3.88% over a nationally representative out-of-sample test set. In addition to including satellite imagery features in proxy means tests, we propose an approach for selecting a subset of survey questions that are complementary to the visual features extracted from satellite imagery. Specifically, we design a survey variable selection approach guided by the full survey and image features and use the approach to determine the most relevant set of small survey questions to include in a PMT. We validate the choice of small survey questions in a downstream task of predicting the poverty rate using the small set of questions. This approach results in the best performance -- errors in poverty rate decrease from 4.09% to 3.71%. We show that extracted visual features encode geographic and urbanization differences between regions.
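The core move of the approach above is to concatenate survey responses with image-derived features into a single input for a below-poverty-line classifier. A minimal sketch, with invented feature values and hand-set logistic weights (in the paper these would be learned):

```python
import math

def pmt_features(survey_answers, visual_features):
    # survey_answers: encoded responses to the PMT survey questions
    # visual_features: featurized Sentinel-2 imagery for the household's region
    return survey_answers + visual_features

def below_poverty_line(features, weights, bias):
    # Logistic model: P(below poverty line) = sigmoid(w . x + b)
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

survey = [0.2, 1.0, 0.0]   # toy encoded survey responses
visual = [0.5, -0.3]       # toy image-derived features
x = pmt_features(survey, visual)
p = below_poverty_line(x, weights=[0.4, -0.2, 0.1, 0.3, 0.2], bias=-0.1)
```

The survey-variable selection step described in the abstract would then search for the subset of survey questions whose information is least redundant with the visual features.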


Novel Machine Learning Approach for Predicting Poverty using Temperature and Remote Sensing Data in Ethiopia

Shah, Om, Tallam, Krti

arXiv.org Artificial Intelligence

In many developing nations, a lack of poverty data prevents critical humanitarian organizations from responding to large-scale crises. Currently, socioeconomic surveys are the only method implemented on a large scale for organizations and researchers to measure and track poverty. However, the inability to collect survey data efficiently and inexpensively leads to significant temporal gaps in poverty data; these gaps severely limit the ability of organizational entities to address poverty at its root cause. We propose a transfer learning model based on surface temperature change and remote sensing data to extract features useful for predicting poverty rates. Machine learning, supported by data sources of poverty indicators, has the potential to estimate poverty rates accurately and within strict time constraints. Higher temperatures, as a result of climate change, have caused numerous agricultural obstacles, socioeconomic issues, and environmental disruptions, trapping families in developing countries in cycles of poverty. To find patterns of poverty relating to temperature that have the highest influence on spatial poverty rates, we use remote sensing data. The two-step transfer model predicts the temperature delta from high resolution satellite imagery and then extracts image features useful for predicting poverty. The resulting model achieved 80% accuracy on temperature prediction. This method takes advantage of abundant satellite and temperature data to measure poverty in a manner comparable to the existing survey methods and exceeds similar models of poverty prediction.
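The two-step transfer structure can be sketched conceptually. The functions below are toy stand-ins for the real networks: step one predicts the temperature delta from imagery, and step two reuses the intermediate features that task learned to predict poverty.

```python
def temperature_model(image):
    # Stand-in for a CNN trained to predict temperature deltas; returns
    # both the prediction and an intermediate feature vector.
    features = [sum(row) / len(row) for row in image]  # toy "features"
    delta_prediction = sum(features) / len(features)
    return delta_prediction, features

def poverty_model(features, weights):
    # Step two: a lightweight head over the transferred features.
    return sum(w * f for w, f in zip(weights, features))

image = [[0.1, 0.3], [0.2, 0.4]]   # toy "satellite image"
delta, feats = temperature_model(image)
poverty_rate = poverty_model(feats, weights=[0.6, 0.4])
```

The premise is that temperature labels are abundant while poverty labels are scarce, so features learned on the first task transfer to the second.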


A 'new social compact': California commission calls for higher wages, better jobs

Los Angeles Times

California's high poverty rate, low wages and frayed public safety net require a new "social compact" between workers, business and government, according to a report by a blue-ribbon commission that highlights the state's widening inequality. In a report released Monday, the Future of Work Commission, a 21-member body appointed by Gov. Gavin Newsom in August 2019, laid out a grim picture of the challenges facing the world's fifth-largest economy, even as it acknowledged the Golden State's technology leadership, its ethnically and culturally diverse workforce and world-class universities. "Too many Californians have not fully participated in or enjoyed the benefits of the state's broader economic success and the extraordinary wealth generated here, especially workers of color who are disproportionately represented in low-wage industries," the report says. California has the highest poverty rate in the country when accounting for the cost of living, 17.2%, according to the report. Since 2012, wages in the state grew by 14% while home prices increased by 68%.


Linear Regression – Analytics Hub

#artificialintelligence

We live in a world in which machine learning is at the core of the fourth industrial revolution. Linear regression is one of the simplest and most widely used machine learning techniques. There is a plethora of practical applications of linear regression. For example, obesity can be used to predict the chances of developing type 2 diabetes, or a student's GPA can be predicted based on the number of hours they spend studying.
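The GPA example can be made concrete with ordinary least squares on a single predictor. The data points here are invented for illustration:

```python
def fit_line(xs, ys):
    # Ordinary least squares for one predictor:
    # slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

hours = [2, 4, 6, 8]           # hours studied per week (invented)
gpa = [2.5, 3.0, 3.5, 4.0]     # observed GPA (invented)
slope, intercept = fit_line(hours, gpa)
predicted = slope * 10 + intercept  # predicted GPA at 10 hours/week
```

On this toy data the fit is exact, but the same closed-form estimator applies to noisy data, where it minimizes the sum of squared residuals.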


Poverty Mapping Using Convolutional Neural Networks Trained on High and Medium Resolution Satellite Images, With an Application in Mexico

Babenko, Boris, Hersh, Jonathan, Newhouse, David, Ramakrishnan, Anusha, Swartz, Tom

arXiv.org Machine Learning

Mapping the spatial distribution of poverty in developing countries remains an important and costly challenge. These "poverty maps" are key inputs for poverty targeting, public goods provision, political accountability, and impact evaluation, which are all the more important given the geographic dispersion of the remaining bottom billion severely poor individuals. In this paper we train Convolutional Neural Networks (CNNs) to estimate poverty directly from high and medium resolution satellite images. We use both Planet and Digital Globe imagery, with spatial resolutions of 3-5 sq. m. and 50 sq. cm. respectively, covering all 2 million sq. km. of Mexico. Benchmark poverty estimates come from the 2014 MCS-ENIGH combined with the 2015 Intercensus and are used to estimate poverty rates for 2,456 Mexican municipalities. CNNs are trained using the 896 municipalities in the 2014 MCS-ENIGH. We experiment with several architectures (GoogleNet, VGG) and use GoogleNet as the final architecture, with weights fine-tuned from ImageNet. We find that 1) the best models, which incorporate satellite-estimated land use as a predictor, explain approximately 57% of the variation in poverty in a validation sample of 10 percent of MCS-ENIGH municipalities; 2) across all MCS-ENIGH municipalities, explanatory power falls to 44% in a combined CNN prediction and landcover model; 3) predicted poverty from the CNN predictions alone explains 47% of the variation in poverty in the validation sample, and 37% over all MCS-ENIGH municipalities; 4) in urban areas we see slight improvements from using Digital Globe versus Planet imagery, which explain 61% and 54% of poverty variation respectively. We conclude that CNNs can be trained end-to-end on satellite imagery to estimate poverty, although much work remains to understand how the training process influences out-of-sample validation.
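The fine-tuning setup described above can be sketched in miniature: pretrained weights act as a frozen feature extractor, and only a small regression head is trained on the poverty labels. The "backbone" below is a toy stand-in for GoogleNet, and the data is invented:

```python
def frozen_features(image):
    # Stand-in for an ImageNet-pretrained backbone: a fixed transform of
    # the input that is never updated during training.
    return [sum(image) / len(image), max(image) - min(image)]

def train_head(samples, lr=0.5, epochs=500):
    # Stochastic gradient descent on a linear regression head over the
    # frozen features; only w and b are updated.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for image, target in samples:
            f = frozen_features(image)
            pred = w[0] * f[0] + w[1] * f[1] + b
            err = pred - target
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# Toy (image, poverty-rate) pairs standing in for municipal labels.
data = [([0.1, 0.5, 0.3], 0.4), ([0.6, 0.9, 0.8], 0.1)]
w, b = train_head(data)
```

In the full-scale version of this recipe, the frozen backbone would be the pretrained convolutional layers and the head would be the replaced final layers, with some backbone layers optionally unfrozen once the head has converged.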