Elastic Weight Consolidation for Knowledge Graph Continual Learning: An Empirical Evaluation
Knowledge graphs (KGs) require continual updates as new information emerges, but neural embedding models suffer from catastrophic forgetting when learning new tasks sequentially. We evaluate Elastic Weight Consolidation (EWC), a regularization-based continual learning method, on KG link prediction using TransE embeddings on FB15k-237. Across multiple experiments with five random seeds, we find that EWC reduces catastrophic forgetting from 12.62% to 6.85%, a 45.7% reduction compared to naive sequential training. We observe that the task partitioning strategy affects the magnitude of forgetting: relation-based partitioning (grouping triples by relation type) exhibits 9.8 percentage points higher forgetting than randomly partitioned tasks (12.62% vs 2.81%), suggesting that task construction influences evaluation outcomes. While focused on a single embedding model and dataset, our results demonstrate that EWC effectively mitigates catastrophic forgetting in KG continual learning and highlight the importance of evaluation protocol design.
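The core of EWC is a quadratic regularizer that anchors parameters important to previous tasks, weighting each parameter's drift by a (typically diagonal) Fisher information estimate. A minimal sketch of that penalty, assuming a flattened parameter vector and a precomputed diagonal Fisher; the function names are illustrative, not taken from the paper:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters while training the new task
    theta_star -- parameters after training the previous task
    fisher     -- diagonal Fisher information estimated on the previous task
    lam        -- regularization strength (the lambda hyperparameter)
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def ewc_grad(theta, theta_star, fisher, lam):
    """Gradient of the penalty, added to the new task's loss gradient."""
    return lam * fisher * (theta - theta_star)
```

In practice this penalty is summed with the new task's link-prediction loss at every step, so parameters with high Fisher values (important for old tasks) stay close to their consolidated values while low-Fisher parameters remain free to adapt.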
KDH-MLTC: Knowledge Distillation for Healthcare Multi-Label Text Classification
The increasing volume of healthcare textual data requires computationally efficient, yet highly accurate classification approaches able to handle the nuanced and complex nature of medical terminology. This research presents Knowledge Distillation for Healthcare Multi-Label Text Classification (KDH-MLTC), a framework leveraging model compression and Large Language Models (LLMs). The proposed approach addresses conventional healthcare Multi-Label Text Classification (MLTC) challenges by integrating knowledge distillation and sequential fine-tuning, subsequently optimized through Particle Swarm Optimization (PSO) for hyperparameter tuning. KDH-MLTC transfers knowledge from a more complex teacher LLM (i.e., BERT) to a lighter student LLM (i.e., DistilBERT) through sequential training adapted to MLTC that preserves the teacher's learned information while significantly reducing computational requirements. As a result, classification can be conducted locally, making the approach suitable for sensitive healthcare textual data and thereby supporting HIPAA compliance. Experiments conducted on three medical literature datasets of different sizes, sampled from the Hallmark of Cancer (HoC) dataset, demonstrate that KDH-MLTC achieves superior performance compared to existing approaches, particularly on the largest dataset, reaching an F1 score of 82.70% ± 0.89%. Additionally, statistical validation and an ablation study are carried out, demonstrating the robustness of KDH-MLTC. Furthermore, the PSO-based hyperparameter optimization process allowed the identification of optimal configurations. The proposed approach contributes to healthcare text classification research, balancing efficiency requirements in resource-constrained healthcare settings with satisfactory accuracy demands.
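The teacher-to-student transfer described above is typically trained with a loss that mixes agreement with the teacher's soft predictions and agreement with the true labels. A minimal sketch for the multi-label case, assuming per-label sigmoid outputs, a softening temperature T, and a mixing weight alpha; the exact loss, names, and defaults here are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def sigmoid(z):
    """Elementwise logistic function for per-label probabilities."""
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))

def distill_mltc_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Distillation loss for multi-label classification.

    Combines per-label binary cross-entropy against the teacher's
    temperature-softened probabilities (soft term, scaled by T^2 to keep
    gradient magnitudes comparable) with binary cross-entropy against the
    ground-truth labels (hard term). alpha balances the two terms.
    """
    q = sigmoid(np.asarray(teacher_logits) / T)   # teacher soft targets
    p = sigmoid(np.asarray(student_logits) / T)   # student soft predictions
    soft = -np.mean(q * np.log(p) + (1 - q) * np.log(1 - p))

    s = sigmoid(student_logits)                   # student at T = 1
    y = np.asarray(labels, dtype=float)
    hard = -np.mean(y * np.log(s) + (1 - y) * np.log(1 - s))

    return alpha * (T ** 2) * soft + (1 - alpha) * hard
```

Because every label has its own sigmoid, the soft term transfers the teacher's confidence on each label independently, which is what distinguishes multi-label distillation from the usual single-label softmax formulation.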