
Collaborating Authors

 Sienkiewicz, Julian


The Dark Patterns of Personalized Persuasion in Large Language Models: Exposing Persuasive Linguistic Features for Big Five Personality Traits in LLMs Responses

arXiv.org Artificial Intelligence

This study explores how Large Language Models (LLMs) adjust linguistic features to create personalized persuasive outputs. While prior research has shown that LLMs personalize outputs, a gap remains in understanding the linguistic features behind their persuasive capabilities. We identified 13 linguistic features crucial for influencing personalities across different levels of the Big Five model of personality. We analyzed how prompts containing personality trait information influenced the output of 19 LLMs across five model families. The findings show that models use more anxiety-related words for neuroticism, more achievement-related words for conscientiousness, and fewer cognitive-process words for openness to experience. Some model families excel at adapting language for openness to experience, others for conscientiousness, while only one model adapts language for neuroticism. Our findings show how LLMs tailor responses based on personality cues in prompts, indicating their potential to create persuasive content affecting the minds and well-being of recipients.
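The feature analysis described above can be approximated by counting category-specific words in model outputs. The sketch below is illustrative only: the two mini-lexicons (`anxiety`, `achievement`) are hypothetical stand-ins for the paper's actual 13 linguistic features and their word lists.

```python
from collections import Counter
import re

# Hypothetical mini-lexicons standing in for two of the 13 linguistic
# features mentioned in the abstract; the paper's real lexicons differ.
LEXICONS = {
    "anxiety": {"worried", "nervous", "afraid", "tense", "anxious"},
    "achievement": {"win", "success", "effort", "goal", "achieve"},
}

def feature_rates(text):
    """Return the per-1000-word rate of each lexical category in `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        name: 1000 * sum(counts[w] for w in words) / total
        for name, words in LEXICONS.items()
    }
```

Comparing such rates between outputs generated for, say, a high-neuroticism prompt and a neutral prompt gives a simple measure of trait-conditioned language adaptation.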


Big Tech influence over AI research revisited: memetic analysis of attribution of ideas to affiliation

arXiv.org Artificial Intelligence

There exists a growing discourse around the domination of Big Tech over the landscape of artificial intelligence (AI) research, yet our comprehension of this phenomenon remains cursory. This paper aims to broaden and deepen our understanding of Big Tech's reach and power within AI research. It highlights the dominance not merely in terms of sheer publication volume but rather in the propagation of new ideas, or memes. Current studies often oversimplify the concept of influence to the share of affiliations in academic papers, typically sourced from limited databases such as arXiv or specific academic conferences. The main goal of this paper is to unravel the specific nuances of such influence, determining which AI ideas are predominantly driven by Big Tech entities. By employing network and memetic analysis on AI-oriented paper abstracts and their citation network, we are able to gain a deeper insight into this phenomenon. By utilizing two databases, OpenAlex and S2ORC, we are able to perform such analysis on a much larger scale than previous attempts. Our findings suggest that while Big Tech-affiliated papers are disproportionately more cited in some areas, the most cited papers are those affiliated with both Big Tech and Academia. Focusing on the most contagious memes, their attribution to specific affiliation groups (Big Tech, Academia, mixed affiliation) appears to be equally distributed between those three groups. This suggests that the notion of Big Tech domination over AI research is oversimplified in the discourse. Ultimately, this more nuanced understanding of Big Tech's and Academia's influence could inform a more symbiotic alliance between these stakeholders, which would better serve the dual goals of societal welfare and the scientific integrity of AI research.
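One building block of the citation-network analysis described above is comparing citations received across affiliation groups. The stdlib sketch below assumes a simplified setup: a citation graph given as `(citing, cited)` edges and an affiliation label per paper (`"big_tech"`, `"academia"`, or `"mixed"`); the labels and function name are illustrative, not the paper's actual pipeline.

```python
from collections import defaultdict

def mean_citations_by_group(edges, affiliation):
    """Mean in-degree (citations received) per affiliation group.

    edges: iterable of (citing_id, cited_id) pairs
    affiliation: dict mapping paper id -> group label
    """
    indeg = defaultdict(int)
    for _, cited in edges:
        indeg[cited] += 1  # count how often each paper is cited
    totals, counts = defaultdict(int), defaultdict(int)
    for paper, group in affiliation.items():
        totals[group] += indeg[paper]
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in counts}
```

On real data, uncited papers must be included in the denominator (as here) or the group means will be inflated toward highly cited papers.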


HADES: Homologous Automated Document Exploration and Summarization

arXiv.org Artificial Intelligence

This paper introduces HADES, a novel tool for the automatic comparative analysis of documents with similar structures. HADES is designed to streamline the work of professionals dealing with large volumes of documents, such as policy documents, legal acts, and scientific papers. The tool employs a multi-step pipeline that begins with processing PDF documents using topic modeling, summarization, and analysis of the most important words for each topic. The process concludes with an interactive web app with visualizations that facilitate the comparison of the documents. HADES has the potential to significantly improve the productivity of professionals dealing with high volumes of documents, reducing the time and effort required to complete tasks related to comparative document analysis. Our package is publicly available on GitHub.
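The "most important words for each topic" step of the pipeline can be sketched with the standard library alone. This is a rough approximation under stated assumptions: topics are taken as pre-grouped documents (the real tool derives them via topic modeling on PDF-extracted text), and importance is scored as within-topic relative frequency minus corpus-wide relative frequency.

```python
from collections import Counter
import re

def top_words(topic_docs, k=3):
    """For each topic, return the k words most over-represented in it.

    topic_docs: dict mapping topic name -> list of document strings
    """
    tokenize = lambda text: re.findall(r"[a-z]+", text.lower())
    # Corpus-wide word frequencies, used as the baseline.
    corpus = Counter(w for docs in topic_docs.values()
                       for d in docs for w in tokenize(d))
    total = sum(corpus.values())
    result = {}
    for topic, docs in topic_docs.items():
        tc = Counter(w for d in docs for w in tokenize(d))
        tn = sum(tc.values())
        # Over-representation: topic frequency minus corpus frequency.
        score = {w: tc[w] / tn - corpus[w] / total for w in tc}
        result[topic] = [w for w, _ in
                         sorted(score.items(), key=lambda x: -x[1])[:k]]
    return result
```

Words that dominate a single topic but are rare elsewhere float to the top, which is the behavior a comparative-documents tool needs from this step.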


MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence

arXiv.org Artificial Intelligence

Artificial intelligence methods are playing an increasingly important role in the global economy. The growing importance and, at the same time, the risks associated with AI are driving a vibrant discussion about the responsible development of artificial intelligence. Examples of negative consequences resulting from black-box models show that interpretability, transparency, safety, and fairness are essential yet sometimes overlooked components of AI systems. Efforts to secure the responsible development of AI systems are ongoing at many levels and in many communities, among both policymakers and academics (Gill et al., 2020; Barredo Arrieta et al., 2020; Baniecki et al., 2020). Naturally, national strategies for the development of responsible AI, sector regulations related to the safe use of AI, and academic research into new methods that ensure the transparency and verifiability of models are all interrelated. Strategies are based on discussions in the scientific community and are often sources of inspiration for subsequent research work. The need for regulation stems from risks, often identified by the research community, but once regulations are created, they become a powerful tool for developing methods that meet expectations. Scientific work in AI is particularly closely connected to the economy, which means that a large part of it responds to the threats identified in regulations and strategies.


Emotional Analysis of Blogs and Forums Data

arXiv.org Artificial Intelligence

Recent years have resulted in several well-motivated and carefully described studies coping with the problem of opinion formation and its spreading [1]. This kind of research usually aimed at qualitative descriptions of some...

The Blogs dataset is a subset of the Blogs06 [16] collection of blog posts from 06/12/2005 to 21/02/2006. Only posts attracting more than 100 comments were extracted, as these apparently initialised non-trivial discussions. Both datasets have similar structures.