Lu, Xinzheng
ChatHouseDiffusion: Prompt-Guided Generation and Editing of Floor Plans
Qin, Sizhong, He, Chengyu, Chen, Qiaoyun, Yang, Sen, Liao, Wenjie, Gu, Yi, Lu, Xinzheng
The generation and editing of floor plans are critical in architectural planning, requiring a high degree of flexibility and efficiency. Existing methods demand extensive input information and lack the capability for interactive adaptation to user modifications. This paper introduces ChatHouseDiffusion, which leverages large language models (LLMs) to interpret natural language input, employs Graphormer to encode topological relationships, and uses diffusion models to flexibly generate and edit floor plans. This approach allows iterative design adjustments based on user ideas, significantly enhancing design efficiency. Compared to existing models, ChatHouseDiffusion achieves higher Intersection over Union (IoU) scores and permits precise, localized adjustments without complete redesigns, thus offering greater practicality. Experiments demonstrate that our model not only strictly adheres to user specifications but also facilitates a more intuitive design process through its interactive capabilities.
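The topological relationships mentioned above are naturally represented as a room-adjacency graph, which is the kind of input a Graphormer-style encoder consumes. The sketch below builds such an adjacency matrix from a list of room connections; the room names and connections are hypothetical illustrations, not taken from the paper.

```python
# Illustrative sketch: encoding room topology as a symmetric adjacency
# matrix, the graph structure a Graphormer-style encoder would take as input.
# Room names and connections here are hypothetical examples.

def build_adjacency(rooms, connections):
    """Return a symmetric 0/1 adjacency matrix for the given room graph."""
    index = {room: i for i, room in enumerate(rooms)}
    n = len(rooms)
    adj = [[0] * n for _ in range(n)]
    for a, b in connections:
        i, j = index[a], index[b]
        adj[i][j] = adj[j][i] = 1
    return adj

rooms = ["living room", "kitchen", "bedroom", "bathroom"]
connections = [("living room", "kitchen"),
               ("living room", "bedroom"),
               ("bedroom", "bathroom")]
adjacency = build_adjacency(rooms, connections)
```

In a full pipeline, an LLM would extract the room list and connections from the user's prompt before this encoding step.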
Hysteretic Behavior Simulation Based on Pyramid Neural Network: Principle, Network Architecture, Case Study and Explanation
Xu, Yongjia, Lu, Xinzheng, Fei, Yifan, Huang, Yuli
An accurate and efficient simulation of the hysteretic behavior of materials and components is essential for structural analysis. Surrogate models based on neural networks show significant potential in balancing efficiency and accuracy. However, their serial information flow and prediction based on single-level features adversely affect network performance. Therefore, a weighted stacked pyramid neural network architecture is proposed herein. This network establishes a pyramid architecture by introducing multi-level shortcuts that integrate features directly in the output module. In addition, a weighted stacking strategy is proposed to enhance the conventional feature fusion method. Subsequently, the redesigned architectures are compared with other commonly used network architectures. Results show that the redesigned architectures outperform the alternatives in 87.5% of cases. Meanwhile, the long- and short-term memory abilities of different basic network architectures are analyzed through a specially designed experiment, which could provide valuable suggestions for network selection.
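The weighted stacking idea described above can be sketched independently of any specific framework: features from several network levels, carried by shortcut connections to the output module, are fused with normalized learnable weights. The sketch below is a minimal pure-Python illustration under that assumption; in the actual network the weights would be trained parameters, and the function names are hypothetical.

```python
import math

def weighted_stack_fusion(feature_levels, raw_weights):
    """Fuse multi-level feature vectors via softmax-normalized weights
    (a sketch of weighted stacking; in practice the weights are learned)."""
    exps = [math.exp(w) for w in raw_weights]
    total = sum(exps)
    norm = [e / total for e in exps]  # weights sum to 1
    fused = [0.0] * len(feature_levels[0])
    for w, level in zip(norm, feature_levels):
        for i, v in enumerate(level):
            fused[i] += w * v
    return fused

# Three feature levels (e.g., shallow, middle, deep) reaching the output
# module through multi-level shortcuts; equal raw weights yield an average.
levels = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
fused = weighted_stack_fusion(levels, [0.0, 0.0, 0.0])
```

With unequal weights, the fusion can emphasize whichever feature level carries the most predictive information, which is the advantage claimed over single-level prediction.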
Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset
Xu, Yongjia, Lu, Xinzheng, Fei, Yifan, Huang, Yuli
There are numerous advantages of deep neural network surrogate modeling for response time-history prediction. However, due to the high cost of refined numerical simulations and actual experiments, the lack of data has become an unavoidable bottleneck in practical applications. An iterative self-transfer learning method for training neural networks on small datasets is proposed in this study. A new mapping-based transfer learning network, named deep adaptation network with three branches for regression (DAN-TR), is proposed. A general iterative network training strategy is developed by coupling DAN-TR with the pseudo-label (PL) strategy, and the establishment of the corresponding datasets is also discussed. Finally, a complex component is selected as a case study. The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets without the need for external labeled samples, well-behaved pre-trained models, additional artificial labeling, or complex physical/mathematical analysis.
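The iterative pseudo-label loop described above can be sketched generically: train on the labeled set, label the unlabeled pool with the current model, then retrain on the enlarged set and repeat. The sketch below is a toy illustration of that loop only; the 1-nearest-neighbor "model" is a hypothetical stand-in for DAN-TR, and all function names and data are invented for illustration.

```python
def fit_nn_model(xs, ys):
    """Toy 1-nearest-neighbor 'model' (a stand-in for DAN-TR in this sketch)."""
    def predict(x):
        nearest = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
        return ys[nearest]
    return predict

def iterative_pseudo_label(labeled_x, labeled_y, unlabeled_x, rounds=3):
    """Sketch of the iterative pseudo-label strategy: train, assign
    pseudo-labels to the unlabeled pool, retrain on the enlarged set."""
    xs, ys = list(labeled_x), list(labeled_y)
    for _ in range(rounds):
        model = fit_nn_model(xs, ys)
        pseudo_y = [model(x) for x in unlabeled_x]
        xs = list(labeled_x) + list(unlabeled_x)
        ys = list(labeled_y) + pseudo_y
    return fit_nn_model(xs, ys)

# Two labeled samples plus an unlabeled pool; no external labels are needed.
model = iterative_pseudo_label([0.0, 1.0], [0.0, 10.0], [0.4, 0.6])
```

The key property mirrored here is that the enlarged training set is built entirely from the model's own predictions, so no external labeled samples or additional artificial labeling are required.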