Analyzing Political Bias in LLMs via Target-Oriented Sentiment Classification
Elbouanani, Akram, Dufraisse, Evan, Popescu, Adrian
Political biases encoded by LLMs might have detrimental effects on downstream applications. Existing bias analysis methods rely on small-scale intermediate tasks (questionnaire answering or political content generation) and on the LLMs themselves for analysis, thus propagating bias. We propose a new approach that leverages the observation that LLM sentiment predictions vary with the target entity in the same sentence. We define an entropy-based inconsistency metric to encode this prediction variability. We insert 1319 demographically and politically diverse politician names into 450 political sentences and predict target-oriented sentiment using seven models in six widely spoken languages. We observe inconsistencies in all tested combinations and aggregate them in a statistically robust analysis at different granularity levels. We observe positive and negative bias toward left and far-right politicians, respectively, and positive correlations between politicians with similar alignment. Bias intensity is higher for Western languages than for others. Larger models exhibit stronger and more consistent biases and reduce discrepancies between similar languages. We partially mitigate LLM unreliability in target-oriented sentiment classification (TSC) by replacing politician names with fictional but plausible counterparts.
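The entropy-based inconsistency metric described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation; the sentiment labels and the example predictions are hypothetical:

```python
import math
from collections import Counter

def inconsistency(predictions):
    """Shannon entropy (in bits) of the sentiment labels predicted for the
    same sentence when different target names are inserted.
    0.0 means fully consistent predictions; log2(#labels) is the maximum."""
    counts = Counter(predictions)
    total = len(predictions)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical example: sentiment predicted for one sentence template
# with four different politician names inserted.
preds = ["negative", "negative", "positive", "neutral"]
print(inconsistency(preds))  # → 1.5
```

A model whose prediction for a sentence never changed with the inserted name would score 0.0, so higher values flag target-dependent (potentially biased) behavior.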
PMFL: Partial Meta-Federated Learning for heterogeneous tasks and its applications on real-world medical records
Zhang, Tianyi, Zhang, Shirui, Chen, Ziwei, Liu, Dianbo
Federated machine learning is a versatile and flexible tool for utilizing distributed data from different sources, especially now that communication technology is developing rapidly and an unprecedented amount of data can be collected on mobile devices. Federated learning exploits not only the data but also the computational power of all devices in the network to achieve more efficient model training. Nevertheless, while most traditional federated learning methods work well for homogeneous data and tasks, adapting them to heterogeneous data and task distributions is challenging. This limitation has constrained the application of federated learning in real-world contexts, especially in healthcare settings. Inspired by the fundamental idea of meta-learning, in this study we propose a new algorithm that integrates federated learning and meta-learning to tackle this issue. In addition, owing to the advantages of transfer learning for model generalization, we further improve our algorithm by introducing partial parameter sharing. We name this method partial meta-federated learning (PMFL). Finally, we apply the algorithm to two medical datasets and show that it obtains the fastest training speed and achieves the best performance when dealing with heterogeneous medical datasets.
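The partial-parameter-sharing idea can be sketched with a toy federated-averaging round. Only the parameters listed as shared are averaged across clients; the rest stay local. The parameter names and the encoder/head split here are assumptions for illustration, not the paper's exact architecture:

```python
def federated_round(client_params, shared_keys):
    """One aggregation round with partial parameter sharing: average only
    the shared parameters across clients, keep the others client-local."""
    n = len(client_params)
    avg = {k: sum(p[k] for p in client_params) / n for k in shared_keys}
    # Each client keeps its local parameters and adopts the shared averages.
    return [{**p, **avg} for p in client_params]

# Two hypothetical clients: a shared "encoder" weight and a local "head" weight.
clients = [
    {"encoder_w": 1.0, "head_w": 5.0},
    {"encoder_w": 3.0, "head_w": -1.0},
]
updated = federated_round(clients, shared_keys=["encoder_w"])
print(updated[0])  # → {'encoder_w': 2.0, 'head_w': 5.0}
```

Keeping the task-specific head local is what lets each client adapt to its own (heterogeneous) task while still benefiting from the jointly trained shared layers.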
ReGrAt: Regularization in Graphs using Attention to handle class imbalance
Kirtane, Neeraja, Chelladurai, Jeshuren, Ravindran, Balaraman, Tendulkar, Ashish
Node classification is an important task in graph-based learning. Although much work has been done in this field, class imbalance is often neglected. Real-world data is rarely perfect and is most often imbalanced in its class representation. Since data can be represented as graphs in addition to text and images, addressing imbalance in graphs has become of paramount importance. In the context of node classification, imbalance means that one class has fewer examples than the others. Changing the data composition is a popular way to address imbalance in node classification: the data is resampled to balance the dataset. However, this can lead to loss of information or add noise to the dataset. Therefore, in this work, we address the problem implicitly by changing the model loss. Specifically, we study how attention networks can help tackle imbalance. Moreover, we observe that using a regularizer to assign larger weights to minority nodes helps mitigate it. We achieve state-of-the-art results compared to existing methods on several standard citation benchmark datasets.
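One common way to realize the idea of assigning larger loss weights to minority nodes is inverse-frequency class weighting inside the loss. The sketch below uses this scheme for illustration; the paper's actual regularizer may differ:

```python
import math
from collections import Counter

def inverse_frequency_weights(labels):
    """Larger loss weights for classes with fewer examples (an assumed,
    common weighting scheme, not necessarily the paper's exact one)."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

def weighted_nll(labels, probs, weights):
    """Class-weighted negative log-likelihood over node predictions."""
    return sum(-weights[y] * math.log(p) for y, p in zip(labels, probs)) / len(labels)

labels = [0, 0, 0, 1]  # class 1 is the minority
weights = inverse_frequency_weights(labels)
print(weights)  # class 1 (minority) gets the larger weight
```

Because misclassifying a minority node now costs more, gradient descent pushes the model away from the trivial majority-class solution without resampling the graph.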
Efficient TBox Reasoning with Value Restrictions using the $\mathcal{FL}_{o}$wer reasoner
Baader, Franz, Koopmann, Patrick, Michel, Friedrich, Turhan, Anni-Yasmin, Zarrieß, Benjamin
The inexpressive Description Logic (DL) $\mathcal{FL}_0$, which has conjunction and value restriction as its only concept constructors, had fallen into disrepute when it turned out that reasoning in $\mathcal{FL}_0$ w.r.t. general TBoxes is ExpTime-complete, i.e., as hard as in the considerably more expressive logic $\mathcal{ALC}$. In this paper, we rehabilitate $\mathcal{FL}_0$ by presenting a dedicated subsumption algorithm for $\mathcal{FL}_0$, which is much simpler than the tableau-based algorithms employed by highly optimized DL reasoners. Our experiments show that the performance of our novel algorithm, as prototypically implemented in our $\mathcal{FL}_o$wer reasoner, compares very well with that of the highly optimized reasoners. $\mathcal{FL}_o$wer can also deal with ontologies written in the extension $\mathcal{FL}_{\bot}$ of $\mathcal{FL}_0$ with the top and the bottom concept by employing a polynomial-time reduction, shown in this paper, which eliminates top and bottom. We also investigate the complexity of reasoning in DLs related to the Horn-fragments of $\mathcal{FL}_0$ and $\mathcal{FL}_{\bot}$.
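For illustration, an $\mathcal{FL}_0$ TBox axiom is built from concept names using only conjunction ($\sqcap$) and value restriction ($\forall r.C$), for example:

$$\mathsf{Parent} \sqsubseteq \mathsf{Person} \sqcap \forall \mathsf{hasChild}.\mathsf{Person}$$

This says that every parent is a person all of whose children are persons; note the absence of existential restrictions, disjunction, and negation, which are available in the richer logic $\mathcal{ALC}$.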
Ontology Evolution Under Semantic Constraints
Grau, Bernardo Cuenca (University of Oxford) | Jimenez-Ruiz, Ernesto (University of Oxford) | Kharlamov, Evgeny (Free University of Bozen-Bolzano) | Zheleznyakov, Dmitriy (Free University of Bozen-Bolzano)
The dynamic nature of ontology development has motivated the formal study of ontology evolution problems. This paper presents a logical framework that enables fine-grained investigation of evolution problems at a deductive level. In our framework, the optimal evolutions of an ontology O are those ontologies O′ that maximally preserve both the structure of O and its entailments in a given preservation language. We show that our framework is compatible with the postulates of Belief Revision, and we investigate the existence of optimal evolutions in various settings. In particular, we present first results on TBox-level revision and contraction in the EL and FL0 families of Description Logics.