

The Architecture of AI Transformation: Four Strategic Patterns and an Emerging Frontier

Wolfe, Diana A., Choe, Alice, Kidd, Fergus

arXiv.org Artificial Intelligence

Despite extensive investment in artificial intelligence, 95% of enterprises report no measurable profit impact from AI deployments (MIT, 2025). In this theoretical paper, we argue that this gap reflects paradigmatic lock-in that channels AI into incremental optimization rather than structural transformation. Using a cross-case analysis, we propose a 2x2 framework that reconceptualizes AI strategy along two independent dimensions: the degree of transformation achieved (incremental to transformational) and the treatment of human contribution (reduced to amplified). The framework surfaces three patterns now dominant in practice (individual augmentation, process automation, workforce substitution) and a fourth, less-deployed frontier: collaborative intelligence. Evidence shows that the first three patterns reinforce legacy work models and yield localized gains without durable value capture. Realizing collaborative intelligence requires three mechanisms: complementarity (pairing distinct human and machine strengths), co-evolution (mutual adaptation through interaction), and boundary-setting (human determination of ethical and strategic parameters). Complementarity and boundary-setting are observable in regulated and high-stakes domains; co-evolution is largely absent, which helps explain limited system-level impact. Our case study analysis illustrates that advancing toward collaborative intelligence requires material restructuring of roles, governance, and data architecture rather than additional tools. The framework reframes AI transformation as an organizational design challenge: moving from optimizing the division of labor between humans and machines to architecting their convergence, with implications for operating models, workforce development, and the future of work.


Measuring Human Involvement in AI-Generated Text: A Case Study on Academic Writing

Guo, Yuchen, Dou, Zhicheng, Nguyen, Huy H., Chang, Ching-Chun, Sugawara, Saku, Echizen, Isao

arXiv.org Artificial Intelligence

Content creation has progressed dramatically with the rapid advancement of large language models like ChatGPT and Claude. While this progress has greatly enhanced various aspects of life and work, it has also negatively affected certain areas of society. A recent survey revealed that nearly 30% of college students use generative AI to help write academic papers and reports. Most countermeasures treat the detection of AI-generated text as a binary classification task and thus lack robustness. This approach overlooks human involvement in the generation of content, even though human-machine collaboration is becoming mainstream. Besides generating entire texts, people may use machines to complete or revise texts. Such human involvement varies case by case, which makes binary classification a less than satisfactory approach. We refer to this situation as participation detection obfuscation. To address this problem, we propose using BERTScore as a metric of human involvement in the generation process, together with a multi-task RoBERTa-based regressor trained on a token classification task. To evaluate the effectiveness of this approach, we simulated academic writing scenarios and created a continuous dataset reflecting various levels of human involvement. All of the existing detectors we examined failed to detect the level of human involvement on this dataset. Our method, however, succeeded (F1 score of 0.9423 and a regressor mean squared error of 0.004). Moreover, it demonstrated some generalizability across generative models. Our code is available at https://github.com/gyc-nii/CAS-CS-and-dual-head-detector
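The abstract's core move is to score similarity between a human draft and the final (possibly AI-revised) text rather than make a binary call. The paper uses BERTScore, which matches contextual embeddings; the sketch below substitutes a simple token-overlap F1 as a self-contained lexical stand-in, purely for illustration. The function name and stopword-free tokenization are assumptions, not the authors' implementation.

```python
from collections import Counter

def token_f1(human_text: str, final_text: str) -> float:
    """Token-overlap F1 between a human draft and the final text.

    A crude lexical proxy for BERTScore: 1.0 means the final text is
    (lexically) entirely the human's; values near 0 suggest heavy
    machine rewriting or generation.
    """
    h = Counter(human_text.lower().split())
    f = Counter(final_text.lower().split())
    overlap = sum((h & f).values())  # multiset intersection of tokens
    if overlap == 0:
        return 0.0
    precision = overlap / sum(f.values())
    recall = overlap / sum(h.values())
    return 2 * precision * recall / (precision + recall)
```

For example, an unchanged draft scores 1.0, while a partially revised sentence scores somewhere in between, giving a continuous signal of the kind the paper's regressor is trained to predict.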


Measuring Human Contribution in AI-Assisted Content Generation

Xie, Yueqi, Qi, Tao, Yi, Jingwei, Whalen, Ryan, Huang, Junming, Ding, Qian, Xie, Yu, Xie, Xing, Wu, Fangzhao

arXiv.org Artificial Intelligence

With the growing prevalence of generative artificial intelligence (AI), an increasing amount of content is no longer exclusively generated by humans but by generative AI models with human guidance. This shift presents notable challenges for the delineation of originality due to the varying degrees of human contribution in AI-assisted works. This study raises the research question of measuring human contribution in AI-assisted content generation and introduces a framework to address this question that is grounded in information theory. By calculating mutual information between human input and AI-assisted output relative to self-information of AI-assisted output, we quantify the proportional information contribution of humans in content generation. Our experimental results demonstrate that the proposed measure effectively discriminates between varying degrees of human contribution across multiple creative domains. We hope that this work lays a foundation for measuring human contributions in AI-assisted content generation in the era of generative AI.


Learning When to Ask for Help: Transferring Human Knowledge through Part-Time Demonstration

Igbinedion, Ifueko, Karaman, Sertac

arXiv.org Artificial Intelligence

Robots operating alongside humans often encounter unfamiliar environments that make autonomous task completion challenging. Though improving models and increasing dataset size can enhance a robot's performance in unseen environments, dataset generation and model refinement may be impractical in every unfamiliar environment. Approaches that utilize human demonstration through manual operation can aid in generalizing to these unfamiliar environments, but often require significant human effort and expertise to achieve satisfactory task performance. To address these challenges, we propose leveraging part-time human interaction for redirection of robots during failed task execution. We train a lightweight help policy that allows robots to learn when to proceed autonomously or request human assistance at times of uncertainty. By incorporating part-time human intervention, robots recover quickly from their mistakes. Our best-performing policy yields a 20 percent increase in path-length-weighted success with only a 21 percent human interaction ratio. This approach provides a practical means for robots to interact with and learn from humans in real-world settings, facilitating effective task completion without the need for significant human intervention.
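The decision the help policy makes at each step (proceed autonomously or request human assistance under uncertainty) can be sketched as a confidence threshold. The paper trains a learned policy rather than using a fixed threshold; the function names and the 0.4 cutoff below are illustrative assumptions only.

```python
def help_policy(confidence: float, threshold: float = 0.4) -> str:
    """Return 'proceed' when the robot is confident, 'ask_human' otherwise.

    Stand-in for the paper's learned lightweight help policy: here the
    decision is a fixed threshold on an estimated success probability.
    """
    return "proceed" if confidence >= threshold else "ask_human"

def interaction_ratio(confidences) -> float:
    """Fraction of steps on which the policy requests human assistance,
    i.e. the 'human interaction ratio' reported in the abstract."""
    asks = sum(1 for c in confidences if help_policy(c) == "ask_human")
    return asks / len(confidences)
```

Running the policy over an episode's per-step confidence estimates yields the kind of interaction ratio the abstract reports (21 percent for the best-performing learned policy).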


10 AI Solutions for Next-Level Customer Experience

#artificialintelligence

Gone are the days when customer service agents had to deal with tedious customer relationship tasks. With much of customer service delegated to machines, companies can deliver a next-level customer experience. Chatbots are automated AI messaging systems that enable conversations between customers and machines. Customers can direct their questions to AI-powered chatbots and have them attended to without human involvement. Chatbots, one of the most widely adopted AI solutions, are designed to attend to customer needs quickly and give customers a seamless experience in the process.


How Insurance Companies Can Use AI to Thrive

#artificialintelligence

The era of artificial intelligence (AI) is here and is transforming the insurance industry. Will insurers pivot or perish? The answer lies in how they approach this big disruption. Insurance industry executives need to understand that AI's strength is also its flaw. Not being constrained by rules allows for speedy learning, but it also means that AI is learning without the context that more specific programming or human intelligence and judgment would provide.


The Role of AI in Wisdom of the Crowds for the Social Construction of Knowledge on Sustainability

Maher, Mary Lou (University of Maryland)

AAAI Conferences

One of the original applications of crowdsourcing the construction of knowledge is Wikipedia, which relies entirely on people to contribute, extend, and modify the representation of knowledge. This paper presents a case for combining AI and wisdom of the crowds for the social construction of knowledge. Our social-computational approach to collective intelligence combines the strengths of human cognitive diversity in producing content and the capabilities of an AI, through methods such as topic modeling, to link and synthesize across these human contributions. In addition to drawing from established domains such as Wikipedia for inspiration and guidance, we present the design of a system that incorporates AI into wisdom of the crowds to develop a knowledge base on sustainability. In this setting the AI plays the role of scholar, as might many of the other participants, drawing connections and synthesizing across contributions. We close with a general discussion, speculating on educational implications and other roles that an AI can play within an otherwise collective human intelligence.
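The "AI as scholar" role described above centers on linking and synthesizing across human contributions, which the paper proposes to do with methods such as topic modeling. The sketch below stands in for that step with a crude keyword-overlap linker rather than a real topic model (e.g. LDA); the stopword list, threshold, and function names are illustrative assumptions.

```python
# Minimal stand-in for the AI "scholar": link crowd contributions that
# share vocabulary, approximating what a topic model would surface.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "also"}

def keywords(text: str) -> set:
    """Lowercased content words of a contribution (stopwords removed)."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def link_contributions(contributions, min_shared: int = 2):
    """Return (i, j, shared_terms) for contribution pairs that share at
    least min_shared keywords, i.e. candidate cross-links for synthesis."""
    links = []
    for i in range(len(contributions)):
        for j in range(i + 1, len(contributions)):
            shared = keywords(contributions[i]) & keywords(contributions[j])
            if len(shared) >= min_shared:
                links.append((i, j, sorted(shared)))
    return links
```

On a toy sustainability knowledge base, two contributions about energy and emissions would be linked while an unrelated one about recycling would not, mirroring (very roughly) how topic-level structure lets the AI draw connections across diverse human contributions.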