

Los Angeles average gas price leads the nation at a record-breaking $6.08

Los Angeles Times

On Wednesday the average cost of a gallon of regular gas in Los Angeles reached $6.08, leaping 2.3 cents overnight and breaking a record set earlier this year, according to the latest data from AAA. Los Angeles is not alone in its pain, as the cost of gas spikes across the nation. And according to analysts, the switch to a more expensive summer blend in other parts of the country promises that the hurt will not stop anytime soon. The average cost of regular gas is more than $4 in nearly every state. According to AAA, the national average is $4.56, but California leads the nation with an average of $6.05.

No gas rebates in sight as average prices in L.A. barrel toward $6 a gallon -- again

Los Angeles Times

Experts say a perfect storm of supply-and-demand issues is sending gas prices in Los Angeles soaring again, with the price per gallon increasing more than 14 cents in the last 16 days, according to the latest fuel prices tracked by AAA. L.A. fuel prices are again inching toward the $6-a-gallon record set in March. The average price of a gallon of regular gasoline in the Los Angeles area is currently $5.91, with plenty of stations charging well over that. A year ago the price was $4.16. Overnight, the price jumped 2.2 cents, the largest single-day increase since February.

Spectroscopy and Chemometrics/Machine-Learning News Weekly #17, 2022


LINK "Feasibility of Near-Infrared Spectroscopy for Rapid Detection of Available Nitrogen in Vermiculite Substrates in Desert Facility Agriculture"
LINK "Establishment of a Nondestructive Analysis Method for Lignan Content in Sesame using Near Infrared Reflectance Spectroscopy"
LINK "Near Infrared Spectroscopy: A useful technique for inline monitoring of the enzyme catalyzed biosynthesis of third-generation biodiesel from waste cooking oil"
LINK "A Study on Nitrogen Concentration Detection Model of Rubber Leaf Based on Spatial-Spectral Information with NIR Hyperspectral Data"
LINK "Design and Performance of a Near-Infrared Spectroscopy Measurement System for In-Field Alfalfa Moisture Measurement"
LINK "Estimating Forest Soil Properties for Humus Assessment--Is Vis-NIR the Way to Go?"
LINK "Association and solubility of chlorophenols in CCl4: MIR/NIR spectroscopic and DFT study"
LINK "Prediction of rhodinol content in Java citronella oil using NIR spectroscopy in the initial stage ...

New RL technique achieves superior performance in control tasks


This article is part of our coverage of the latest in AI research. Reinforcement learning is one of the most fascinating fields of computer science, and it has proven useful in solving some of the toughest challenges of artificial intelligence and robotics. Some scientists believe that reinforcement learning will play a key role in cracking the enigma of human-level artificial intelligence. But many hurdles stand between current reinforcement learning systems and a possible path toward more general and robust forms of AI. Many RL systems struggle with long-term planning, training-sample efficiency, transferring knowledge to new tasks, dealing with inconsistencies in input signals and rewards, and other challenges that arise in real-world applications.

Optimization of a Thermal Cracking Reactor Using Genetic Algorithm and Water Cycle Algorithm


With global production of 150 million tons in 2016, ethylene is one of the most significant building blocks in today's chemical industry. Most ethylene is now produced in cracking furnaces by thermal cracking of fossil feedstocks with steam. This process consumes around 8% of the primary energy used in the petrochemical industry, making it the single most energy-intensive process in the chemical industry. This paper studies a tubular thermal cracking reactor fed by propane and the molecular mechanism of the reaction within the reactor. After developing the reaction model, the governing equations for reaction kinetics, flow, momentum, and energy were solved, with heat applied to the outer tube wall.
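The paper's reactor model is not reproduced here, but the genetic-algorithm side of the optimization can be illustrated with a minimal sketch. Everything below is a hedged toy example: `toy_yield` is a hypothetical stand-in for a reactor-yield objective, and the population size, mutation rate, and bounds are illustrative assumptions, not the paper's settings.

```python
import random

def toy_yield(x):
    # Hypothetical stand-in objective: "yield" peaks at x = 7
    return -(x - 7.0) ** 2 + 50.0

def genetic_algorithm(fitness, bounds=(0.0, 14.0), pop_size=30,
                      generations=60, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random parents
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) > fitness(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            child = 0.5 * (p1 + p2)           # arithmetic crossover
            if rng.random() < mutation_rate:  # Gaussian mutation
                child += rng.gauss(0.0, 0.5)
            children.append(min(hi, max(lo, child)))
        pop = children
    return max(pop, key=fitness)

best = genetic_algorithm(toy_yield)
```

In a real cracking-reactor study, the fitness evaluation would call the reactor simulation (solving the kinetics, momentum, and energy equations) rather than a closed-form toy function, which is why such optimizations are computationally expensive.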

Maximizing information from chemical engineering data sets: Applications to machine learning Machine Learning

It is well-documented how artificial intelligence can have (and already is having) a big impact on chemical engineering. But classical machine learning approaches may be weak for many chemical engineering applications. This review discusses how challenging data characteristics arise in chemical engineering applications. We identify four characteristics of data arising in chemical engineering applications that make applying classical artificial intelligence approaches difficult: (1) high variance, low volume data, (2) low variance, high volume data, (3) noisy/corrupt/missing data, and (4) restricted data with physics-based limitations. For each of these four data characteristics, we discuss applications where these data characteristics arise and show how current chemical engineering research is extending the fields of data science and machine learning to incorporate these challenges. Finally, we identify several challenges for future research.

Learning-theoretic Perspectives on MPC via Competitive Control


Since the 1980s, Model Predictive Control (MPC) has been one of the most influential and popular process control methods in industry. The key idea of MPC is straightforward: using a finite look-ahead window into the future, MPC solves a finite-time optimal control problem at each time step, but implements only the first control action, then re-optimizes at the next time step, repeatedly. In fact, this receding-horizon scheme of "implement only the current action and re-optimize at each step" is one of the reasons MPC was not popular before the 1980s: iteratively solving complex optimal control problems at high frequency was prohibitively expensive before computational power took off. Here is a trajectory tracking problem to explain how MPC works (visualized in the figure below). Choosing a proper state-cost weighting matrix $Q$ is critical for the stability and performance of MPC.
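The receding-horizon loop described above can be sketched in a few lines. This is a toy illustration only: the scalar integrator dynamics, the discretized control set, and the $Q$/$R$ weights are all assumptions for demonstration, not a production MPC solver.

```python
import itertools

def mpc_step(x, ref, horizon=3, Q=1.0, R=0.1):
    """Solve a short-horizon optimal control problem by enumeration
    and return only the FIRST action of the best plan (receding horizon)."""
    candidates = [u / 4.0 for u in range(-8, 9)]  # controls in [-2, 2]
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(candidates, repeat=horizon):
        xt, cost = x, 0.0
        for u in seq:
            xt = xt + u                            # toy dynamics: scalar integrator
            cost += Q * (xt - ref) ** 2 + R * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Track a constant reference from x = 0: apply the first action, re-plan, repeat
x, ref = 0.0, 5.0
for _ in range(10):
    x += mpc_step(x, ref)
```

Real MPC replaces the brute-force enumeration with a structured solver (e.g. a QP for linear dynamics with quadratic cost), but the outer loop, optimize, execute one action, re-optimize, is exactly the receding-horizon idea described above.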

Inferential Theory for Granular Instrumental Variables in High Dimensions Machine Learning

The Granular Instrumental Variables (GIV) methodology exploits panels with factor error structures to construct instruments to estimate structural time series models with endogeneity even after controlling for latent factors. We extend the GIV methodology in several dimensions. First, we extend the identification procedure to a large $N$ and large $T$ framework, which depends on the asymptotic Herfindahl index of the size distribution of $N$ cross-sectional units. Second, we treat both the factors and loadings as unknown and show that the sampling error in the estimated instrument and factors is negligible when considering the limiting distribution of the structural parameters. Third, we show that the sampling error in the high-dimensional precision matrix is negligible in our estimation algorithm. Fourth, we overidentify the structural parameters with additional constructed instruments, which leads to efficiency gains. Monte Carlo evidence is presented to support our asymptotic theory, and an application to the global crude oil market yields new results.
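For context, the core GIV construction (in the spirit of Gabaix and Koijen's original formulation; the notation below is illustrative and may differ from the paper's) builds an instrument from the gap between size-weighted and equal-weighted idiosyncratic shocks:

```latex
% Panel with a factor error structure
y_{it} = \lambda_i' f_t + u_{it}

% GIV instrument: size-weighted minus equal-weighted idiosyncratic shocks,
% with S_i the share (size weight) of unit i
z_t = \sum_{i=1}^{N} S_i \, u_{it} \;-\; \frac{1}{N} \sum_{i=1}^{N} u_{it}

% Instrument relevance hinges on the Herfindahl index of the size
% distribution remaining non-negligible as N grows
h_N = \sum_{i=1}^{N} S_i^2
```

The intuition: if a few units are large ($h_N$ bounded away from the equal-weighted limit $1/N$), their idiosyncratic shocks move the aggregate but are uncorrelated with common factors, making $z_t$ a valid instrument.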

Spatiotemporal Costmap Inference for MPC via Deep Inverse Reinforcement Learning Artificial Intelligence

It is difficult to autonomously produce driving behavior that appears natural to other traffic participants. Through Inverse Reinforcement Learning (IRL), we can automate this process by learning the underlying reward function from human demonstrations. We propose a new IRL algorithm that learns a goal-conditioned spatiotemporal reward function. The resulting costmap is used by Model Predictive Controllers (MPCs) to perform a task without any hand-designing or hand-tuning of the cost function. We evaluate our proposed Goal-conditioned SpatioTemporal Zeroing Maximum Entropy Deep IRL (GSTZ)-MEDIRL framework together with MPC in the CARLA simulator for autonomous driving, lane keeping, and lane changing tasks in a challenging dense traffic highway scenario. Our proposed methods show higher success rates compared to other baseline methods including behavior cloning, state-of-the-art RL policies, and MPC with a learning-based behavior prediction model.
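The feature-matching idea underlying maximum-entropy IRL can be sketched minimally. This is emphatically not the paper's (GSTZ)-MEDIRL algorithm: here the reward is assumed linear in hand-picked features, and the learner's feature expectations are held fixed for brevity (a full implementation would re-solve the forward RL problem after each weight update).

```python
import numpy as np

def maxent_irl_step(w, expert_feats, policy_feats, lr=0.1):
    """One gradient step of linear maximum-entropy IRL: push reward
    weights toward matching the expert's feature expectations."""
    grad = expert_feats.mean(axis=0) - policy_feats.mean(axis=0)
    return w + lr * grad

# Toy feature expectations (rows = trajectories, cols = features);
# the expert visits feature 0 far more than the current policy does
expert = np.array([[1.0, 0.0], [0.9, 0.1]])
policy = np.array([[0.5, 0.5], [0.4, 0.6]])

w = np.zeros(2)
for _ in range(50):
    w = maxent_irl_step(w, expert, policy)
# The learned reward now favors the feature the expert visits more
```

Deep IRL variants such as the one above replace the linear reward with a neural network over spatiotemporal features, which is what lets the learned costmap be handed directly to an MPC.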

A novel method for error analysis in radiation thermometry with application to industrial furnaces Artificial Intelligence

Accurate temperature measurements are essential for the proper monitoring and control of industrial furnaces. However, measurement uncertainty is a risk for such a critical parameter. Certain instrumental and environmental errors must be considered when using spectral-band radiation thermometry techniques, such as the uncertainty in the emissivity of the target surface, reflected radiation from surrounding objects, or atmospheric absorption and emission, to name a few. Undesired contributions to measured radiation can be isolated using measurement models, also known as error-correction models. This paper presents a methodology for budgeting significant sources of error and uncertainty during temperature measurements in a petrochemical furnace scenario. A continuous monitoring system is also presented, aided by a deep-learning-based measurement correction model, to allow domain experts to analyze the furnace's operation in real-time. To validate the proposed system's functionality, a real-world application case in a petrochemical plant is presented. The proposed solution demonstrates the viability of precise industrial furnace monitoring, thereby increasing operational security and improving the efficiency of such energy-intensive systems.
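The textbook single-band measurement model behind such error corrections can be sketched as follows. This illustrates only the standard emissivity-and-reflection correction with Planck's law, not the paper's deep-learning correction model; the wavelength, emissivity, and temperatures are assumed example values.

```python
import math

C1 = 1.191042e-16  # W*m^2/sr, first radiation constant (2*h*c^2)
C2 = 1.438777e-2   # m*K, second radiation constant (h*c/k)

def planck_radiance(wavelength, T):
    """Spectral radiance of a blackbody at temperature T (kelvin)."""
    return C1 / (wavelength ** 5 * (math.exp(C2 / (wavelength * T)) - 1.0))

def corrected_temperature(L_meas, emissivity, L_reflected, wavelength):
    """Invert the measurement model
    L_meas = eps * L_bb(T) + (1 - eps) * L_reflected
    to recover the true surface temperature."""
    L_bb = (L_meas - (1.0 - emissivity) * L_reflected) / emissivity
    return C2 / (wavelength * math.log(C1 / (wavelength ** 5 * L_bb) + 1.0))

# Example: a 1000 K surface with emissivity 0.8 viewed at 3.9 um,
# contaminated by reflected radiation from 600 K surroundings
wl, T_true, eps = 3.9e-6, 1000.0, 0.8
L = eps * planck_radiance(wl, T_true) + (1 - eps) * planck_radiance(wl, 600.0)
T_est = corrected_temperature(L, eps, planck_radiance(wl, 600.0), wl)
```

In practice the emissivity and reflected radiance are themselves uncertain, which is exactly the error budget the paper analyzes, and why a learned correction model on top of this physical model can help.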