Synthetic-to-Real Pose Estimation with Geometric Reconstruction
Qiuxia Lin, Kerui Gu, Linlin Yang, Angela Yao
The warping estimation module W is based on an hourglass with five conv3×3 - bn - relu - pool2×2 blocks in the encoder and five upsample2×2 - conv3×3 - bn - relu blocks in the decoder. For G, we use the Johnson architecture [3] with two down-sampling blocks, six residual blocks, and two up-sampling blocks; the design follows [7]. The inputs are the base image, the displacement field, and the inpainting map. The generator downsamples by a factor of 4 and then upsamples by a factor of 4 to produce the output, i.e., the reconstructed image.
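The block counts above translate directly into code. Below is a minimal PyTorch sketch of both modules, assuming illustrative channel widths and input layouts (3-channel base image, 2-channel displacement field, 1-channel inpainting map); skip connections and the task-specific output head of the hourglass are omitted, since the text specifies only block types and counts.

```python
# Minimal sketch of W (hourglass) and G (Johnson generator) described above.
# Channel widths and input channel counts are assumptions.
import torch
import torch.nn as nn

def enc_block(cin, cout):
    # conv3x3 - bn - relu - pool2x2
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.MaxPool2d(2))

def dec_block(cin, cout):
    # upsample2x2 - conv3x3 - bn - relu
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode='nearest'),
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class WarpHourglass(nn.Module):
    """Warping module W: five encoder and five decoder blocks.
    Skip connections and the output head are omitted in this sketch."""
    def __init__(self, cin=3, width=64):
        super().__init__()
        self.encoder = nn.Sequential(
            *[enc_block(cin if i == 0 else width, width) for i in range(5)])
        self.decoder = nn.Sequential(
            *[dec_block(width, width) for _ in range(5)])
    def forward(self, x):
        return self.decoder(self.encoder(x))

class ResBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c))
    def forward(self, x):
        return x + self.body(x)

class JohnsonGenerator(nn.Module):
    """G: two stride-2 down-sampling blocks (4x down), six residual blocks,
    two up-sampling blocks (4x up). Input channels cover the concatenated
    base image, displacement field, and inpainting map (assumed 3+2+1)."""
    def __init__(self, cin=6, width=64):
        super().__init__()
        layers = [nn.Conv2d(cin, width, 3, padding=1), nn.ReLU(inplace=True)]
        for i in range(2):  # downsample 4x in total
            layers += [nn.Conv2d(width * 2**i, width * 2**(i + 1), 3, stride=2, padding=1),
                       nn.BatchNorm2d(width * 2**(i + 1)), nn.ReLU(inplace=True)]
        layers += [ResBlock(width * 4) for _ in range(6)]
        for i in range(2, 0, -1):  # upsample 4x in total
            layers += [nn.ConvTranspose2d(width * 2**i, width * 2**(i - 1), 4, stride=2, padding=1),
                       nn.BatchNorm2d(width * 2**(i - 1)), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 3, 3, padding=1)]  # reconstructed image
        self.net = nn.Sequential(*layers)
    def forward(self, x):
        return self.net(x)
```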
ConE: Cone Embeddings for Multi-Hop Reasoning over Knowledge Graphs (Appendix)
Figure 1: Fourteen queries used in the experiments. They do not contain personally identifiable information or offensive content.

All the models are implemented in PyTorch [5] and based on the official implementation of BETAE [6] for a fair comparison. For all modules using a multi-layer perceptron (MLP), we use a three-layer MLP with 1600 hidden neurons and ReLU activation. We apply dropout to the min function in CardMin and search the dropout rate in {0.05, 0.10, 0.15, 0.20}.
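As a concrete illustration, the following sketch instantiates the MLP configuration above in PyTorch; the input and output dimensionalities are assumptions, and the CardMin shown is a hypothetical standalone stand-in that only demonstrates where the dropout is applied, not the full module from the paper.

```python
import torch.nn as nn

def make_mlp(dim_in, dim_out, hidden=1600):
    """Three-layer MLP with 1600 hidden neurons and ReLU, as specified above."""
    return nn.Sequential(
        nn.Linear(dim_in, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, dim_out))

class CardMinDropout(nn.Module):
    """Hypothetical stand-in for CardMin, showing only where dropout
    is applied to the element-wise min over the operand embeddings."""
    def __init__(self, p=0.10):  # p is searched in {0.05, 0.10, 0.15, 0.20}
        super().__init__()
        self.dropout = nn.Dropout(p)

    def forward(self, xs):
        # xs: (num_operands, batch, dim); element-wise min, then dropout
        return self.dropout(xs.min(dim=0).values)
```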
A.1 Datasets

Table S1 reports summary statistics of the datasets used in this paper. For SMNIST and ZINC, we use the same pre-processing steps and data splits as in [10]. Errica et al. [11] show that drawing conclusions based on some of these datasets can be problematic, as structure-agnostic baselines achieve higher performance than traditional GNNs. However, in their assessment, NCI1 is the only chemical dataset on which GNNs beat the baselines.

A.2 Models

We implement all models using the PyTorch Geometric library [12].
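For readers unfamiliar with the library, here is a minimal sketch of loading one of the chemical datasets above (NCI1) with PyTorch Geometric and running a simple two-layer GCN baseline; the hyperparameters and the model are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative PyTorch Geometric setup: TUDataset loading plus an assumed
# GCN baseline; not the models evaluated in the paper.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool

dataset = TUDataset(root='data', name='NCI1')
loader = DataLoader(dataset, batch_size=32, shuffle=True)

class GCN(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, dataset.num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.lin(global_mean_pool(x, batch))  # graph-level readout

model = GCN()
for data in loader:
    out = model(data.x, data.edge_index, data.batch)
    loss = F.cross_entropy(out, data.y)
    break  # one illustrative forward/loss step
```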
A Complement to Neural Networks for Anisotropic Inelasticity at Finite Strains
We propose a complement to constitutive modeling that augments neural networks with material principles to capture anisotropy and inelasticity at finite strains. The key element is a dual potential that governs dissipation, consistently incorporates anisotropy, and, unlike conventional convex formulations, satisfies the dissipation inequality without requiring convexity. Our neural network architecture employs invariant-based input representations in terms of mixed elastic, inelastic, and structural tensors. It adapts Input Convex Neural Networks and introduces Input Monotonic Neural Networks to broaden the admissible potential class. To bypass exponential-map time integration in the finite-strain regime and to stabilize the training of inelastic materials, we employ recurrent Liquid Neural Networks. The approach is evaluated at both the material-point and structural scales. We benchmark against recurrent models without physical constraints and validate predictions of deformation and reaction forces for unseen boundary value problems. In all cases, the method delivers accurate and stable performance beyond the training regime. The neural network and finite element implementations are publicly available at https://doi.org/10.5281/zenodo.17199965.
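To make the Input Convex Neural Network ingredient concrete, the following is a minimal PyTorch sketch of a standard ICNN (in the style of Amos et al.): the output is convex in the input because the weights on the hidden-state path are clamped non-negative and the activations are convex and non-decreasing. Layer sizes are illustrative assumptions, and the Input Monotonic Neural Networks introduced in the paper, which broaden this potential class, are not reproduced here.

```python
# Minimal ICNN sketch: phi(x) is convex in x because the z-path weights are
# non-negative and softplus is convex and non-decreasing. Sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, dim_in, hidden=64, depth=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim_in, hidden) for _ in range(depth)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(depth - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # clamping the z-weights non-negative preserves convexity in x
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0))

# Usage: phi = ICNN(dim_in=6); potential = phi(torch.randn(8, 6))
```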
A large-scale, unsupervised pipeline for automatic corpus annotation using LLMs: variation and change in the English consider construction
Cameron Morin, Matti Marttinen Larsson
As natural language corpora expand at an unprecedented rate, manual annotation remains a significant methodological bottleneck in corpus linguistic work. We address this challenge by presenting a scalable, unsupervised pipeline for automating grammatical annotation in voluminous corpora using large language models (LLMs). Unlike previous supervised and iterative approaches, our method employs a four-phase workflow: prompt engineering, pre-hoc evaluation, automated batch processing, and post-hoc validation. We demonstrate the pipeline's accessibility and effectiveness through a diachronic case study of variation in the English consider construction. Using GPT-5 through the OpenAI API, we annotate 143,933 sentences from the Corpus of Historical American English (COHA) in under 60 hours, achieving 98%+ accuracy on two sophisticated annotation procedures. Our results suggest that LLMs can perform a range of data preparation tasks at scale with minimal human intervention, opening new possibilities for corpus-based research, though implementation requires attention to costs, licensing, and other ethical considerations.
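As a rough illustration of the automated-processing phase, the sketch below sends a single sentence to GPT-5 through the OpenAI chat-completions endpoint; the prompt and label set are hypothetical, not the paper's own annotation scheme.

```python
# Hedged sketch of one annotation call; prompt and labels are invented
# for illustration and do not reproduce the paper's procedures.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Annotate the 'consider' construction in the sentence below. "
    "Reply with exactly one label: V-ing, to-inf, as-pred, bare-pred.\n\n"
    "Sentence: {s}"
)

def annotate(sentence: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": PROMPT.format(s=sentence)}],
    )
    return resp.choices[0].message.content.strip()

print(annotate("We consider him to be a reliable colleague."))
```

At the scale of 143,933 sentences, requests would more plausibly be submitted asynchronously, for example through OpenAI's Batch API, as the "automated batch processing" phase of the workflow suggests; the per-sentence call above is only the simplest illustration.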