Looking for Fairness in Recommender Systems

Logé, Cécile

arXiv.org Artificial Intelligence

Recommender systems can be found everywhere today, shaping our everyday experience whenever we're consuming content, ordering food, buying groceries online, or even just reading the news. Let's imagine we're in the process of building a recommender system to make content suggestions to users on social media. When thinking about fairness, it becomes clear there are several perspectives to consider: the users asking for tailored suggestions, the content creators hoping for some limelight, and society at large, navigating the repercussions of algorithmic recommendations. A shared fairness concern across all three is the emergence of filter bubbles, a side-effect that takes place when recommender systems are almost "too good", making recommendations so tailored that users become inadvertently confined to a narrow set of opinions/themes and isolated from alternative ideas. From the user's perspective, this is akin to manipulation. From the small content creator's perspective, this is an obstacle preventing them access to a whole range of potential fans. From society's perspective, the potential consequences are far-reaching, influencing collective opinions, social behavior and political decisions. How can our recommender system be fine-tuned to avoid the creation of filter bubbles, and ensure a more inclusive and diverse content landscape? Approaching this problem involves defining one (or more) performance metric to represent diversity, and tweaking our recommender system's performance through the lens of fairness. By incorporating this metric into our evaluation framework, we aim to strike a balance between personalized recommendations and the broader societal goal of fostering rich and varied cultures and points of view.
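One concrete starting point is intra-list diversity: the average pairwise dissimilarity among the items in a single recommendation slate. The sketch below is a minimal illustration, assuming items are represented as embedding vectors; the example vectors and the interpretation thresholds are hypothetical, not drawn from any particular system.

```python
import numpy as np

def intra_list_diversity(item_embeddings: np.ndarray) -> float:
    """Average pairwise cosine distance across a recommendation slate.

    Higher values mean a more varied slate; a score near 0 means the
    items are nearly identical -- one symptom of a filter bubble.
    """
    # Normalize rows so dot products become cosine similarities.
    normed = item_embeddings / np.linalg.norm(item_embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(normed)
    # Average similarity over off-diagonal pairs; distance = 1 - similarity.
    off_diag = (sims.sum() - n) / (n * (n - 1))
    return 1.0 - off_diag

# A slate of near-duplicate items scores close to 0...
bubble = np.array([[1.0, 0.0], [0.99, 0.01], [0.98, 0.02]])
# ...while a varied slate scores noticeably higher.
varied = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(intra_list_diversity(bubble) < intra_list_diversity(varied))  # True
```

In an evaluation framework, a metric like this would be tracked alongside accuracy-style metrics, making the trade-off between personalization and diversity explicit rather than implicit.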


Seagate unveils massive 30 terabyte HAMR-powered hard drives

PCWorld

Human beings have a hard time dealing with numbers that get really big. The speed of light, the number of atoms in apparently small amounts of matter, the energy being burned every time you ask ChatGPT how many days there are in July. It doesn't really fit into our meat brains. Take, for example, Seagate's latest industrial hard drive, which holds 30 terabytes of data. The new Exos M and IronWolf Pro are the densest single drives I've ever seen in the standard form factor, narrowly beating out existing 28TB models by leveraging Seagate's innovative Heat-Assisted Magnetic Recording (HAMR) technology.


A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models

Xie, Zhouhang, Wu, Junda, Shen, Yiran, Xia, Yu, Li, Xintong, Chang, Aaron, Rossi, Ryan, Kumar, Sachin, Majumder, Bodhisattwa Prasad, Shang, Jingbo, Ammanabrolu, Prithviraj, McAuley, Julian

arXiv.org Artificial Intelligence

Personalized preference alignment for large language models (LLMs), the process of tailoring LLMs to individual users' preferences, is an emerging research direction spanning the areas of NLP and personalization. In this survey, we present an analysis of works on personalized alignment and modeling for LLMs. We introduce a taxonomy of preference alignment techniques, covering training-time, inference-time, and user-modeling-based methods. We provide analysis and discussion on the strengths and limitations of each group of techniques and then cover evaluation, benchmarks, and open problems in the field.


The Digital Insider

#artificialintelligence

GitHub has unveiled business usage terms for its GitHub Copilot AI-based coding assistant, making the service available to businesses for $19 per month per user. The company also vowed to keep users' own code safe from retention, storage, or sharing by GitHub. GitHub Copilot for Business gives organizations license management, organization-wide policy controls, and privacy features along with licenses for organizations, teams, and individual users. GitHub Copilot, introduced in 2021 as a Visual Studio Code editor extension, offers coding suggestions and functions directly from the user's programming editor or IDE. The AI model behind Copilot is trained on open source code in public repositories.


Extracting personal information from anonymous cell phone data using machine learning

#artificialintelligence

A research team at Illinois Institute of Technology has extracted personal information, specifically protected characteristics like age and gender, from anonymous cell phone data using machine learning and artificial intelligence algorithms, raising questions about data security. The research was conducted by an interdisciplinary team of three Illinois Tech faculty including Vijay K. Gurbani, research associate professor of computer science; Matthew Shapiro, professor of political science; and Yuri Mansury, associate professor of social sciences. They were joined by Illinois Tech alumni Lida Kuang (M.S. CS '19) and Samruda Pobbathi (M.S. CS '19) who worked with Gurbani to publish "Predicting Age and Gender from Network Telemetry: Implications for Privacy and Impact on Policy" in PLOS One. The researchers used data from a Latin American cell phone company to successfully estimate the gender and age of individual users through their private communications with relative ease. The team developed a neural network model to estimate gender with 67% accuracy, which outperforms modern techniques such as decision tree, random forest, and gradient boosting models by a significant margin.
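The paper's actual features and model architecture aren't detailed in this summary. As a rough illustration of the underlying idea — that coarse network telemetry carries demographic signal — the numpy-only sketch below fits a plain logistic model to synthetic data with hypothetical telemetry features; nothing here reproduces the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical telemetry features per subscriber (standardized):
# [daily call minutes, data volume, fraction of night-time activity]
n = 1000
X = rng.normal(size=(n, 3))
# Synthetic binary label loosely correlated with the features, mimicking
# the kind of leakage the Illinois Tech team exploited for gender/age.
y = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=1.0, size=n) > 0).astype(float)

# Plain logistic regression trained by batch gradient descent.
w = np.zeros(3)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / n)       # gradient step on weights
    b -= 0.5 * (p - y).mean()            # gradient step on bias

acc = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")   # well above the 0.5 chance level
```

The point of the exercise is the privacy implication: even a simple linear model recovers "anonymous" attributes well above chance once weak correlations exist in the telemetry.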


On-Device Model Fine-Tuning with Label Correction in Recommender Systems

Ding, Yucheng, Niu, Chaoyue, Wu, Fan, Tang, Shaojie, Lyu, Chengfei, Chen, Guihai

arXiv.org Artificial Intelligence

To meet the practical requirements of low latency, low cost, and good privacy in online intelligent services, more and more deep learning models are offloaded from the cloud to mobile devices. To further deal with cross-device data heterogeneity, the offloaded models normally need to be fine-tuned with each individual user's local samples before being put into real-time inference. In this work, we focus on the fundamental click-through rate (CTR) prediction task in recommender systems and study how to effectively and efficiently perform on-device fine-tuning. We first identify the bottleneck issue that each individual user's local CTR (i.e., the ratio of positive samples in the local dataset for fine-tuning) tends to deviate from the global CTR (i.e., the ratio of positive samples in all the users' mixed datasets on the cloud for training the initial model). We further demonstrate that such a CTR drift problem makes on-device fine-tuning even harmful to item ranking. We thus propose a novel label correction method, which requires each user only to change the labels of the local samples ahead of on-device fine-tuning and can well align the local prior CTR with the global CTR. The offline evaluation results over three datasets and five CTR prediction models as well as the online A/B testing results in Mobile Taobao demonstrate the necessity of label correction in on-device fine-tuning and also reveal the improvement over cloud-based learning without fine-tuning.
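The abstract's core observation can be made concrete with a toy correction step. The sketch below is not Ding et al.'s method — it simply relabels the minimum number of local samples so that the local positive ratio matches the global CTR, to show mechanically what "aligning the local prior CTR with the global CTR" means.

```python
import numpy as np

def correct_labels(labels: np.ndarray, global_ctr: float, rng) -> np.ndarray:
    """Relabel as few local samples as possible so the local positive
    ratio matches the global CTR.

    Illustrative stand-in only, not the correction rule from the paper.
    """
    labels = labels.copy()
    n = len(labels)
    target_pos = round(global_ctr * n)
    pos_idx = np.flatnonzero(labels == 1)
    neg_idx = np.flatnonzero(labels == 0)
    if len(pos_idx) > target_pos:      # local CTR too high: demote extras
        drop = rng.choice(pos_idx, size=len(pos_idx) - target_pos, replace=False)
        labels[drop] = 0
    elif len(pos_idx) < target_pos:    # local CTR too low: promote some
        add = rng.choice(neg_idx, size=target_pos - len(pos_idx), replace=False)
        labels[add] = 1
    return labels

rng = np.random.default_rng(42)
local = rng.binomial(1, 0.08, size=200)            # local CTR drifted to ~8%
corrected = correct_labels(local, global_ctr=0.03, rng=rng)
print(corrected.mean())                            # 0.03, matching the global CTR
```

Which samples to relabel (and how to do so without distorting ranking) is exactly where the paper's actual contribution lies; random selection here is only for illustration.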


How Hoxhunt successfully applied machine learning to security awareness, by Ira Winkler - Hoxhunt

#artificialintelligence

There are so many buzzwords and trends in the security awareness industry that it is hard to determine what is useful and what is a gimmick. Every vendor out there promises some special characteristic of their product that makes it a revolutionary improvement to your security awareness posture that no other product can accomplish. After reviewing the Hoxhunt solution, it is safe to say that they actually do provide something unique that can really move the needle with your organization's security awareness posture. Machine learning and artificial intelligence are typically buzzwords and technologies that vendors tout as making a product unique. The reality is that machine learning and AI can be useful; however, they are just underlying technologies.


Data Observability and Its Importance in Determining Intent - DataScienceCentral.com

#artificialintelligence

In my blog "The Importance of Determining Intent", I discussed the importance of determining user intent to create an "intelligent" user or stakeholder experience. Analytics-centric organizations specialize in determining and codifying a user's intent in order to provide a more engaging, relevant, hyper-personalized experience (Figure 1).

Figure 1: Using "Intent Determination" to Create an Intelligent Customer Experience

Creating an "intelligent" user experience requires leveraging AI/ML to analyze a deep history of the user's interactions to determine the user's intentions, and then coupling those intentions with current trends, patterns, and relationships to match those intentions with a deep understanding of the available content to recommend the most relevant action. We reviewed how digital marketing companies, such as those featured in Figure 1, determine user intent. These companies accumulate a deep history of each individual user's interactions including what sites or content they visited or viewed, how long they spent with each site or piece of content, what they clicked on, what they did not click on, and their contextual search requests. They analyze the user's interaction history, and match that with current trends and behaviors of similar cohorts, to determine and codify (think propensity scores) the user's intentions (areas of interest) that drive real-time recommendation decisions.
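As a rough sketch of what "codifying intentions as propensity scores" can look like, the snippet below aggregates a hypothetical interaction log into recency-weighted, normalized per-topic scores. The event schema, weights, and half-life are illustrative assumptions, not a description of any vendor's actual pipeline.

```python
from collections import defaultdict
import math

def propensity_scores(events, half_life_days=7.0):
    """Turn a user's interaction history into per-topic propensity scores.

    `events` is a list of (topic, days_ago, weight) tuples -- a hypothetical
    schema where weight might reflect dwell time or click depth. Recent
    interactions count more via exponential decay with the given half-life.
    """
    raw = defaultdict(float)
    for topic, days_ago, weight in events:
        raw[topic] += weight * math.exp(-math.log(2) * days_ago / half_life_days)
    total = sum(raw.values()) or 1.0
    # Normalize so the scores sum to 1 and are comparable across users.
    return {topic: score / total for topic, score in raw.items()}

history = [
    ("cooking", 1, 3.0),    # heavy engagement yesterday
    ("cooking", 10, 1.0),
    ("travel", 2, 1.0),
    ("finance", 30, 2.0),   # old interest, heavily decayed
]
scores = propensity_scores(history)
print(max(scores, key=scores.get))  # cooking
```

A real system would feed scores like these, blended with cohort trends, into the recommendation step described above.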


Aesthetic Preference Recognition as a Potential Authentication Factor

#artificialintelligence

A new paper from Israel has proposed an authentication scheme based on a user's aesthetic preferences, wherein the user calibrates the system one time by rating images, thereby generating a private 'domain' of that individual's visual and visual/conceptual predilections. Later, the user would be challenged at authentication time to match their recorded preferences against novel image sets. From the trials of a 'game-ized' AEbA implementation – left, the user rates the aesthetic quality of an image; right, a score is signaled at the end of a stage in the active application phase of the trials. The system is titled Aesthetic Evaluation-based Authentication (AEbA), and is a submission to the 2022 USENIX Annual Technical Conference in California in July. AEbA was trialed by the paper's researchers in the form of a game series, where participants were required to train the system and then rate new images that accorded with their registered tastes.
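The paper's actual protocol isn't reproduced here; the toy sketch below only captures the general shape of preference-based authentication — fit a linear "taste" model at enrollment, then accept a claimant if their ratings of novel images correlate with the model's predictions. The feature representation, noise levels, and acceptance threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def enroll(image_feats, ratings):
    # Least-squares fit of a linear taste model from enrollment ratings.
    taste, *_ = np.linalg.lstsq(image_feats, ratings, rcond=None)
    return taste

def authenticate(taste, novel_feats, claimed_ratings, threshold=0.5):
    # Accept if the claimant's ratings of NOVEL images correlate with
    # what the enrolled taste model predicts for those images.
    predicted = novel_feats @ taste
    r = np.corrcoef(predicted, claimed_ratings)[0, 1]
    return bool(r >= threshold)

d = 5                                     # hypothetical image-feature dimension
true_taste = rng.normal(size=d)
enroll_feats = rng.normal(size=(40, d))   # 40 calibration images
enroll_ratings = enroll_feats @ true_taste + rng.normal(scale=0.3, size=40)
taste = enroll(enroll_feats, enroll_ratings)

novel = rng.normal(size=(15, d))          # fresh challenge images
genuine = novel @ true_taste + rng.normal(scale=0.3, size=15)
impostor = rng.normal(size=15)            # unrelated taste, almost surely rejected
print(authenticate(taste, novel, genuine))  # True
print(authenticate(taste, novel, impostor))
```

Because the challenge uses novel images, a shoulder-surfer who saw one session learns little about the next — which is the property the scheme's design is after.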


Mavromoustakos Blom

AAAI Conferences

In this paper we propose an approach for personalising the space in which a game is played (i.e., levels) dependent on classifications of the user's facial expression -- to the end of tailoring the affective game experience to the individual user. Our approach is aimed at online game personalisation, i.e., the game experience is personalised during actual play of the game. A key insight of this paper is that game personalisation techniques can leverage novel computer vision-based techniques to unobtrusively infer player experiences automatically based on facial expression analysis. Specifically, to the end of tailoring the affective game experience to the individual user, in this paper we (1) leverage the proven InSight facial expression recognition SDK as a model of the user's affective state, and (2) employ this model for guiding the online game personalisation process. User studies that validate the game personalisation approach in the actual video game Infinite Mario Bros. reveal that it provides an effective basis for converging to an appropriate affective state for the individual human player.