large-scale survey
Towards Mitigating Systematics in Large-Scale Surveys via Few-Shot Optimal Transport-Based Feature Alignment
Hassan, Sultan, Andrianomena, Sambatra, Wandelt, Benjamin D.
Systematics contaminate observables, leading to distribution shifts relative to theoretically simulated signals, posing a major challenge for using pre-trained models to label such observables. Since systematics are often poorly understood and difficult to model, removing them directly and entirely may not be feasible. To address this challenge, we propose a novel method that aligns learned features between in-distribution (ID) and out-of-distribution (OOD) samples by optimizing a feature-alignment loss on the representations extracted from a pre-trained ID model. We first experimentally validate the method on the MNIST dataset using several candidate alignment losses, including mean squared error and optimal transport, and subsequently apply it to large-scale maps of neutral hydrogen. Our results show that optimal transport is particularly effective at aligning OOD features when parity between ID and OOD samples is unknown, even with limited data, mimicking real-world conditions when extracting information from large-scale surveys. Our code is available at https://github.com/sultan-hassan/feature-alignment-for-OOD-generalization.
- Research Report > New Finding (0.54)
- Research Report > Promising Solution (0.34)
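The paper's actual loss and training setup are in the linked repository; as a minimal NumPy sketch of the optimal-transport alignment idea, the entropy-regularized (Sinkhorn) OT cost between a batch of ID features and a batch of OOD features could be computed as below. The function name, batch shapes, and the shifted-copy "systematic" are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sinkhorn_ot_loss(x_id, x_ood, eps=1.0, n_iters=200):
    """Entropy-regularized OT cost between two feature batches.

    x_id: (n, d) ID features; x_ood: (m, d) OOD features.
    Returns <P, C>, the transport cost under the Sinkhorn plan P,
    which serves as the feature-alignment loss to minimize.
    """
    n, m = len(x_id), len(x_ood)
    # Pairwise squared-Euclidean cost matrix between the two batches.
    C = ((x_id[:, None, :] - x_ood[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                              # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                          # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                   # transport plan
    return float((P * C).sum())

rng = np.random.default_rng(0)
id_feats = rng.normal(size=(32, 4))
ood_feats = id_feats + 2.0   # a rigid shift mimicking a systematic
print(sinkhorn_ot_loss(id_feats, id_feats))   # near zero: aligned
print(sinkhorn_ot_loss(id_feats, ood_feats))  # large: misaligned
```

Because OT compares whole distributions rather than paired samples, it needs no correspondence (parity) between individual ID and OOD examples, which is the property the abstract highlights.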
Why is the User Interface a Dark Pattern? : Explainable Auto-Detection and its Analysis
Yada, Yuki, Matsumoto, Tsuneo, Kido, Fuyuko, Yamana, Hayato
Dark patterns are deceptive user interface designs for online services that make users behave in unintended ways. They can harm users through privacy invasion, financial loss, and emotional distress, and these issues have been the subject of considerable debate in recent years. In this paper, we study interpretable dark pattern auto-detection, that is, why a particular user interface is detected as having dark patterns. First, we trained a model on a text-based dataset for the automatic detection of dark patterns in e-commerce, using a transformer-based pre-trained language model, BERT. Then, we applied post-hoc explanation techniques, including local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), to the trained model, revealing which terms influence each prediction as a dark pattern. In addition, we extracted and analyzed the terms that affected the dark patterns. Our findings may prevent users from being manipulated by dark patterns and aid in the construction of more equitable internet services. Our code is available at https://github.com/yamanalab/why-darkpattern.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > California (0.05)
- North America > United States > Minnesota (0.04)
- (3 more...)
- Law (0.95)
- Government (0.95)
- Media > News (0.68)
- Information Technology > Security & Privacy (0.48)
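The paper's pipeline fine-tunes BERT and then applies LIME/SHAP; as a dependency-free stand-in for that idea, the sketch below trains a toy bag-of-words logistic regression and reads the per-term weights as a rough analogue of "which terms push a text toward the dark-pattern label". The corpus, labels, and hyperparameters are invented for illustration and are not from the paper:

```python
import math

# Tiny invented corpus: 1 = dark-pattern wording (urgency/scarcity), 0 = neutral.
docs = [
    ("only 2 left hurry before the deal expires", 1),
    ("offer expires soon act now limited stock", 1),
    ("hurry limited time only act now", 1),
    ("free shipping on orders over fifty dollars", 0),
    ("browse our full catalog of products", 0),
    ("read customer reviews before you buy", 0),
]

vocab = sorted({w for text, _ in docs for w in text.split()})
idx = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    """Bag-of-words count vector over the toy vocabulary."""
    x = [0.0] * len(vocab)
    for word in text.split():
        if word in idx:
            x[idx[word]] += 1.0
    return x

# Plain logistic regression trained by stochastic gradient descent.
w = [0.0] * len(vocab)
b = 0.0
for _ in range(500):
    for text, y in docs:
        x = featurize(text)
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y  # gradient of the log loss w.r.t. z
        b -= 0.1 * g
        w = [wi - 0.1 * g * xi for wi, xi in zip(w, x)]

# Terms with the largest positive weights play the role of LIME/SHAP's
# "terms pushing the prediction toward dark pattern".
top_terms = sorted(vocab, key=lambda t: w[idx[t]], reverse=True)[:3]
print(top_terms)
```

Unlike this linear stand-in, LIME and SHAP attribute importance per prediction for a black-box model (here, BERT), which is what lets the paper explain individual detections rather than only global term statistics.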
A Large-Scale Survey on the Usability of AI Programming Assistants: Successes and Challenges
Liang, Jenny T., Yang, Chenyang, Myers, Brad A.
The software engineering community has recently witnessed the widespread deployment of AI programming assistants, such as GitHub Copilot. However, in practice, developers do not accept these assistants' initial suggestions at a high rate, which leaves a number of open questions about the usability of these tools. To understand developers' practices while using these tools and the important usability challenges they face, we administered a survey to a large population of developers and received responses from a diverse set of 410 developers. Through a mix of qualitative and quantitative analyses, we found that developers are most motivated to use AI programming assistants because they help reduce keystrokes, finish programming tasks quickly, and recall syntax, but resonate less with using them to brainstorm potential solutions. We also found that the most important reasons developers do not use these tools are that the tools do not output code addressing certain functional or non-functional requirements and that developers have trouble controlling the tools to generate the desired output. Our findings have implications for both the creators and users of AI programming assistants, such as designing minimal-cognitive-effort interactions with these tools to reduce distractions for users while they are programming.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- Europe > Portugal > Lisbon > Lisbon (0.05)
- South America (0.04)
- (5 more...)
AI in society and culture: decision making and values
Feher, Katalin, Zelenkauskaite, Asta
With the increased expectations surrounding artificial intelligence, academic research faces complex questions about human-centred, responsible, and trustworthy technology embedded in society and culture. Several academic debates, social consultations, and impact studies are available that reveal key aspects of the changing human-machine ecosystem. To contribute to these studies, hundreds of related academic sources are summarized below with regard to AI-driven decisions and valuable AI. In detail, sociocultural filters, a taxonomy of human-machine decisions, and perspectives on value-based AI are the focus of this literature review. For better understanding, we propose inviting stakeholders to the prepared large-scale survey on next-generation AI, which investigates issues that go beyond the technology itself.
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.05)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.05)
- Europe > Hungary > Budapest > Budapest (0.05)
- Research Report (0.40)
- Overview (0.35)
Artificial Intelligence Is Not The Future Of Work; It's Already Here
Business pundits trumpet AI as the future for U.S. employment, but a large-scale survey of U.S. workers indicates that more than 32% are already exposed to some form of AI in their jobs. An additional 6% of workers will begin using AI tools for the first time in 2019. Optimized Workforce – a crowd-sourced think tank that studies the intersection of technology and employment – surveyed more than 10,000 U.S. workers to understand the time they spend on specific tasks, the technologies they work with, and the technologies they will deploy next year to help with those tasks. The survey sampled workers from 19 of the 20 Census Bureau NAICS codes and all of the Bureau of Labor Statistics' top-level occupational codes. The findings, released in a report available on the think tank's Web site, titled "AI Opportunity Report 2018: Which Industries Are Investing in AI? Which Ones Should Be?" reveal that AI-enabled document classification and document creation technologies lead all AI penetration and will continue to see strong investment in 2019.
Can a robot pass a university entrance exam?, Noriko Arai @TEDx
Why you should listen Noriko Arai is the program director of an AI challenge, the Todai Robot Project, which asks the question: Can AI get into the University of Tokyo? The project aims to visualize both the possibilities and the limitations of current AI by setting a concrete goal: a software system that can pass university entrance exams. In 2015 and 2016, the Todai Robot scored in the top 20 percent on the exams and achieved passing marks for more than 70 percent of universities in Japan. The inventor of the Reading Skill Test, Arai conducted a large-scale survey in 2017 with Japan's Ministry of Education on the reading skills of high school and junior high school students. The results revealed that more than half of junior high school students fail to comprehend sentences sampled from their own textbooks.