Tessellation Localized Transfer Learning for Nonparametric Regression
Halconruy, Hélène, Bobbia, Benjamin, Lejamtel, Paul
Transfer learning aims to improve performance on a target task by leveraging information from related source tasks. We propose a nonparametric regression transfer learning framework that explicitly models heterogeneity in the source-target relationship. Our approach relies on a local transfer assumption: the covariate space is partitioned into finitely many cells such that, within each cell, the target regression function can be expressed as a low-complexity transformation of the source regression function. This localized structure enables effective transfer where similarity is present while limiting negative transfer elsewhere. We introduce estimators that jointly learn the local transfer functions and the target regression, together with fully data-driven procedures that adapt to unknown partition structure and transfer strength. We establish sharp minimax rates for target regression estimation, showing that local transfer can mitigate the curse of dimensionality by exploiting reduced functional complexity. Our theoretical guarantees take the form of oracle inequalities that decompose excess risk into estimation and approximation terms, ensuring robustness to model misspecification. Numerical experiments illustrate the benefits of the proposed approach.
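The local transfer assumption can be illustrated with a minimal sketch. This is not the authors' estimator: the two-cell partition, the affine form of the local transfer functions, and the least-squares fitting step below are illustrative assumptions chosen for simplicity.

```python
# Hypothetical sketch of the localized-transfer idea: within each cell of a
# partition of the covariate space, the target regression is modeled as a
# simple (here: affine) transformation of a source regression estimate.
import numpy as np

rng = np.random.default_rng(0)

# Source regression function (assumed already well-estimated from source data).
def m_source(x):
    return np.sin(3 * x)

# Simulated target data: on [0, 0.5) the target equals 2*m_S + 1,
# on [0.5, 1] it equals -m_S (a different affine transfer in each cell).
x = rng.uniform(0, 1, 400)
cells = (x >= 0.5).astype(int)              # partition into two cells
y = np.where(cells == 0, 2 * m_source(x) + 1, -m_source(x))
y += 0.1 * rng.normal(size=x.size)          # observation noise

# Learn the local transfer: fit an affine map y ~ a*m_S(x) + b per cell by
# least squares, then predict the target via the transferred source function.
preds = np.empty_like(y)
for k in (0, 1):
    mask = cells == k
    A = np.column_stack([m_source(x[mask]), np.ones(mask.sum())])
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    preds[mask] = A @ coef

# Mean squared error; small here because the affine model is exact up to noise.
mse = np.mean((preds - y) ** 2)
```

Because each cell only requires estimating a low-complexity transformation (here, two scalars) rather than a full d-dimensional regression function, transfer of this kind can sidestep the usual nonparametric rates where the local assumption holds.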
Infinite folds
When Madonna Yoder '17 was eight years old, she learned how to fold a square piece of paper over and over and over again. After about 16 folds, she held a bird in her hands. That first piece was an origami classic, folded by kids at summer camp for generations and many people's first foray into the art form. The first time she pulled the tail of a flapping crane, she says, she realized: . But her passion is for paper--with no scissors. Today, she's a tessellation expert who teaches, invents new designs, and writes papers on the underlying math.

[Photo: Madonna Yoder '17 photographed in her studio. Ross Mantle]
Fold your own tessellation
Yoder recommends printing the pattern on paper that falls between normal printer paper and cardstock in weight: heavy enough to fold in straight lines (not too thick), light enough to fold back and forth easily on the same line (not too thin), and crisp enough to make a satisfying snapping noise when you shake it. Her favorite paper is Skytone, which is commonly used to print certificates and fancy envelopes.

Once you have your crease pattern on a sheet of paper, cut out the hexagon that contains the pattern. Yoder recommends using a straightedge and blade on a cutting mat instead of scissors, whether that means an X-Acto knife and a ruler on a sheet of cardboard or a quilting ruler and rotary cutter on a fabric cutting mat.

The next step is folding the background grid of black lines that the pattern uses as references. Assuming you've cut out your hexagon precisely, you can use the edge of the hexagon and the printed lines to make your creases. Alternatively, you can fold as if there were no lines printed: fold the hexagon in half (edge to opposite edge), then fold those edges in to the center to make quarter lines, first in one direction and then in the other two.
GCN-TULHOR: Trajectory-User Linking Leveraging GCNs and Higher-Order Spatial Representations
Tran, Khoa, Gupta, Pranav, Papagelis, Manos
Trajectory-user linking (TUL) aims to associate anonymized trajectories with the users who generated them, which is crucial for personalized recommendations, privacy-preserving analytics, and secure location-based services. Existing methods struggle with sparse data, incomplete routes, and limited modeling of complex spatial dependencies, often relying on low-level check-in data or ignoring spatial patterns. In this paper, we introduce GCN-TULHOR, a method that integrates Graph Convolutional Networks (GCNs) and transforms raw location data into higher-order mobility flow representations using hexagonal tessellation, reducing data sparsity and capturing richer spatial semantics. Our approach converts both sparse check-in and continuous GPS trajectory data into unified higher-order flow representations, mitigating sparsity while capturing deeper semantic information. The GCN layer explicitly models complex spatial relationships and non-local dependencies without requiring side information such as timestamps or points of interest. Experiments on six real-world datasets show consistent improvements over classical baselines, RNN- and Transformer-based models, and the TULHOR method in accuracy, precision, recall, and F1-score; GCN-TULHOR achieves 1-8% relative gains in accuracy and F1. Sensitivity analysis identifies an optimal setup with a single GCN layer and 512-dimensional embeddings. The integration of GCNs enhances spatial learning and improves generalizability across mobility data. This work highlights the value of combining graph-based spatial learning with sequential modeling, offering a robust and scalable solution for TUL with applications in recommendations, urban planning, and security.
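The GCN layer at the core of this kind of model can be sketched in a few lines. This is a generic single-layer propagation in the style of Kipf and Welling, not the paper's implementation; the 4-node adjacency matrix (standing in for hexagonal cells linked by mobility flows) and the embedding dimensions are illustrative assumptions.

```python
# Minimal sketch of one GCN propagation step over a small graph of
# hexagonal cells connected by observed mobility flows:
#   H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )
import numpy as np

rng = np.random.default_rng(42)

# Symmetric adjacency over 4 hypothetical hexagonal cells.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                    # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalization
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

H = rng.normal(size=(4, 8))              # initial cell embeddings (dim 8)
W = rng.normal(size=(8, 16))             # learnable layer weights (8 -> 16)

H_next = np.maximum(A_norm @ H @ W, 0.0) # aggregate neighbors, project, ReLU
print(H_next.shape)                      # (4, 16)
```

Each row of `H_next` is a cell embedding that mixes in its flow-graph neighbors, which is how spatial dependencies enter the representation before any sequential model sees the trajectory.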