Can Large Language Models Learn the Physics of Metamaterials? An Empirical Study with ChatGPT
Lu, Darui, Deng, Yang, Malof, Jordan M., Padilla, Willie J.
Large language models (LLMs) such as ChatGPT, Gemini, LLaMA, and Claude are trained on massive quantities of text parsed from the internet and have shown a remarkable ability to respond to complex prompts in a manner often indistinguishable from humans. We present an LLM fine-tuned on up to 40,000 data samples that can predict electromagnetic spectra over a range of frequencies given a text prompt that only specifies the metasurface geometry. Results are compared to conventional machine learning approaches including feed-forward neural networks, random forest, linear regression, and K-nearest neighbor (KNN). Remarkably, the fine-tuned LLM (FT-LLM) achieves a lower error across all dataset sizes explored than all of these machine learning approaches, including a deep neural network. We also demonstrate the LLM's ability to solve inverse problems by providing the geometry necessary to achieve a desired spectrum. LLMs possess some advantages over humans that may give them benefits for research, including the ability to process enormous amounts of data, find hidden patterns in data, and operate in higher-dimensional spaces. We propose that fine-tuning LLMs on large datasets specific to a field allows them to grasp the nuances of that domain, making them valuable tools for research and analysis.
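The baseline comparison described in this abstract can be sketched as a multi-output regression benchmark. The following is a minimal illustration, not the authors' actual setup: the metasurface dataset is not reproduced here, so the geometry parameters and "spectra" below are synthetic placeholders, and the models are the scikit-learn counterparts of the baselines named above.

```python
# Hypothetical benchmark of the conventional baselines named in the abstract
# on a synthetic geometry -> spectrum regression task.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 5))        # 5 placeholder geometry parameters
freqs = np.linspace(0.0, 1.0, 100)     # 100 spectral sampling points
# Smooth synthetic "spectra" that depend nonlinearly on the geometry
Y = 0.5 * np.sin(2 * np.pi * np.outer(X @ rng.uniform(size=5), freqs)) + 0.5

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

models = {
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "mlp": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
}
errors = {}
for name, model in models.items():
    model.fit(X_tr, Y_tr)
    errors[name] = mean_absolute_error(Y_te, model.predict(X_te))
    print(f"{name:>7s}: MAE = {errors[name]:.4f}")
```

All four estimators natively support multi-output targets, so each one maps the geometry vector to the full spectrum in a single fit; the FT-LLM of the paper would instead consume a text description of the geometry.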
Data Mining for Faster, Interpretable Solutions to Inverse Problems: A Case Study Using Additive Manufacturing
Kamath, Chandrika, Franzman, Juliette, Ponmalai, Ravi
Solving inverse problems, where we find the input values that result in desired values of outputs, can be challenging. The solution process is often computationally expensive and it can be difficult to interpret the solution in high-dimensional input spaces. In this paper, we use a problem from additive manufacturing to address these two issues with the intent of making it easier to solve inverse problems and exploit their results. First, focusing on Gaussian process surrogates that are used to solve inverse problems, we describe how a simple modification to the idea of tapering can substantially speed up the surrogate without losing accuracy in prediction. Second, we demonstrate that Kohonen self-organizing maps can be used to visualize and interpret the solution to the inverse problem in the high-dimensional input space. For our data set, as not all input dimensions are equally important, we show that using weighted distances results in a better organized map that makes the relationships among the inputs obvious.
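The weighted-distance idea in the last sentence can be sketched with a minimal self-organizing map. This is an illustrative toy, not the paper's implementation: the additive-manufacturing inputs and the per-dimension importance weights below are placeholders, and the SOM is a bare-bones NumPy version rather than a production library.

```python
# Minimal self-organizing map where best-matching-unit search uses a
# weighted Euclidean distance, so important input dimensions dominate
# the map organization. Data and weights are hypothetical.
import numpy as np

def train_som(data, dim_weights, grid=(6, 6), n_iter=2000,
              lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    codebook = rng.uniform(size=(n_nodes, data.shape[1]))
    # Grid coordinates of each node, used by the neighborhood function
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Weighted distance: heavier dimensions count more in matching
        d = np.sqrt((dim_weights * (codebook - x) ** 2).sum(axis=1))
        bmu = d.argmin()
        lr = lr0 * (1.0 - t / n_iter)
        sigma = sigma0 * (1.0 - t / n_iter) + 0.5
        # Gaussian neighborhood around the best-matching unit on the grid
        g = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                   / (2.0 * sigma ** 2))
        codebook += lr * g[:, None] * (x - codebook)
    return codebook

data = np.random.default_rng(1).uniform(size=(500, 4))
dim_weights = np.array([4.0, 2.0, 1.0, 0.1])  # hypothetical importances
codebook = train_som(data, dim_weights)
print(codebook.shape)  # one prototype vector per grid node
```

Up-weighting a dimension stretches the distance along it, so the map's prototypes differentiate primarily along the important inputs, which is what makes the resulting visualization easier to interpret.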