
Collaborating Authors: Johnson, Breanna


Tight Constraint Prediction of Six-Degree-of-Freedom Transformer-based Powered Descent Guidance

arXiv.org Artificial Intelligence

This work introduces Transformer-based Successive Convexification (T-SCvx), an extension of Transformer-based Powered Descent Guidance (T-PDG) that generalizes to efficient six-degree-of-freedom (DoF) fuel-optimal powered descent trajectory generation. Our approach significantly improves sample efficiency and solution quality for nonconvex powered descent guidance by applying a rotation-invariant transformation to the sampled dataset. T-PDG was previously applied to the 3-DoF minimum-fuel powered descent guidance problem, improving solution times by up to an order of magnitude compared to lossless convexification (LCvx). By learning to predict the set of tight, or active, constraints at the optimal control problem's solution, T-SCvx constructs a minimal reduced-size problem containing only the tight constraints, then uses the solution of this reduced problem to warm-start the direct optimization solver. 6-DoF powered descent guidance is known to be difficult to solve quickly and reliably: the problem is nonlinear and nonconvex, the discretization scheme heavily influences solution validity, and the reference trajectory initialization determines whether the algorithm converges or diverges. Our contributions address these challenges by extending T-PDG to learn the set of tight constraints for the successive convexification (SCvx) formulation of the 6-DoF powered descent guidance problem. In addition to reducing the problem size, feasible and locally optimal reference trajectories are also learned to facilitate convergence from the initial guess. T-SCvx enables onboard computation of real-time guidance trajectories, demonstrated on a 6-DoF Mars powered landing application problem.
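The core idea of the abstract — predict the tight (active) constraints, solve a reduced problem with only those constraints pinned, and fall back to the full solver if the result is infeasible — can be illustrated on a toy problem. This is a minimal sketch on a nonnegative least-squares instance, not the T-SCvx implementation; the function names (`tight_constraint_guidance`, `solve_with_tight_set`) are illustrative.

```python
import numpy as np

def solve_full(c):
    # Full solution of the toy problem: min ||x - c||^2  s.t.  x >= 0.
    # This stands in for the complete (expensive) trajectory optimization.
    return np.maximum(c, 0.0)

def solve_with_tight_set(c, tight):
    # Reduced problem: variables in the predicted tight (active) set are
    # pinned to their bound (0); the remaining variables are solved
    # unconstrained, which here is simply x_i = c_i.
    x = c.copy()
    x[tight] = 0.0
    return x

def tight_constraint_guidance(c, predicted_tight):
    # Solve the reduced problem built from the predicted tight set, then
    # run a feasibility check before returning (as the abstract describes);
    # fall back to the full solver if the prediction was wrong.
    x = solve_with_tight_set(c, predicted_tight)
    if np.all(x >= -1e-9):
        return x, True        # reduced problem sufficed
    return solve_full(c), False  # prediction infeasible: full solve
```

A correct tight-set prediction lets the reduced solve return immediately; an incorrect one is caught by the feasibility check rather than returned as an unsafe trajectory.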


Diffusion Policies for Generative Modeling of Spacecraft Trajectories

arXiv.org Artificial Intelligence

Despite its promise and the tremendous advances in nonlinear optimization solvers in recent years, trajectory optimization has primarily been constrained to offline usage due to the limited compute capabilities of radiation-hardened flight computers [3]. However, with a flurry of proposed mission concepts that call for increasingly greater on-board autonomy [4], bridging this gap in the state of practice is necessary to scale current trajectory design techniques to future missions. Recently, researchers have turned to machine learning and data-driven techniques as a promising means of reducing the runtimes needed to solve challenging constrained optimization problems [5, 6]. Such approaches learn what is known as the problem-to-solution mapping, from the problem parameters that vary between repeated instances of the trajectory optimization problem to the full optimization solution; these works typically model the mapping with a Deep Neural Network (DNN) [7-9]. Given the parameters of a new problem instance, the learned mapping can be used online to yield a candidate trajectory that warm-starts the nonlinear optimization solver, which can enable significant solution speed-ups. One shortcoming of these data-driven approaches is their limited scope: the learned problem-to-solution mapping applies only to one specific trajectory optimization formulation. A change to the mission design specifications that yields, e.g., a different optimization constraint requires learning a new problem-to-solution mapping offline, which in turn necessitates generating a new dataset of solved trajectory optimization problems.
To this end, our work explores compositional diffusion modeling to enable generalizable learning of the problem-to-solution mapping and to equip mission designers with the ability to interleave different learned models to satisfy a rich set of trajectory design specifications. Compositional diffusion modeling trains a model that can both be sampled from and used for planning.
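The compositional idea — interleaving separately learned models at sampling time — is commonly realized by summing the models' score functions and following the combined score. This is a hedged one-dimensional sketch with analytic Gaussian scores standing in for learned diffusion models; the names `gaussian_score` and `composed_sample` are illustrative, not from the paper.

```python
import numpy as np

def gaussian_score(x, mean, var):
    # Score (gradient of the log density) of a 1-D Gaussian N(mean, var),
    # standing in for one learned diffusion model's score network.
    return (mean - x) / var

def composed_sample(means, var=1.0, steps=500, lr=0.1, x0=0.0):
    # Compositional sampling: sum the scores of the individual models and
    # follow the combined score field (noise-free Langevin-style updates,
    # kept deterministic here for clarity). The sample settles where the
    # summed scores vanish, i.e. at a mode of the product distribution.
    x = x0
    for _ in range(steps):
        x += lr * sum(gaussian_score(x, m, var) for m in means)
    return x
```

For two equal-variance Gaussian "specifications" with means 2 and 6, the composed sampler converges to 4, the mode of their product, illustrating how constraints encoded by separate models are jointly satisfied.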


Improving Computational Efficiency for Powered Descent Guidance via Transformer-based Tight Constraint Prediction

arXiv.org Artificial Intelligence

In this work, we present Transformer-based Powered Descent Guidance (T-PDG), a scalable algorithm for reducing the computational complexity of the direct optimization formulation of the spacecraft powered descent guidance problem. T-PDG uses data from prior runs of trajectory optimization algorithms to train a transformer neural network that accurately predicts the relationship between problem parameters and the globally optimal solution of the powered descent guidance problem. The solution is encoded as the set of tight constraints corresponding to the constrained minimum-cost trajectory, together with the optimal final landing time. By leveraging the attention mechanism of transformer neural networks, long sequences of time-series data can be accurately predicted from only the spacecraft state and landing site parameters. When applied to the real problem of Mars powered descent guidance, T-PDG reduces the time to compute the 3-degree-of-freedom fuel-optimal trajectory from 1-8 seconds under lossless convexification to less than 500 milliseconds. A safe and optimal solution is guaranteed by a feasibility check in T-PDG before the final trajectory is returned.
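The runtime reduction described above comes from warm-starting an iterative solver near the optimum instead of starting cold. This minimal sketch, assuming a toy nonnegative quadratic program solved by projected gradient descent (not the paper's solver; `projected_gradient` is an illustrative name), shows the mechanism: a good initial guess cuts the iteration count.

```python
import numpy as np

def projected_gradient(Q, b, x0, tol=1e-8, max_iter=10000):
    # Solve  min 0.5 x^T Q x - b^T x  s.t.  x >= 0  by projected gradient
    # descent with step 1/L, returning the solution and iteration count.
    L = np.linalg.norm(Q, 2)  # spectral norm bounds the gradient Lipschitz constant
    x = x0.copy()
    for k in range(max_iter):
        x_new = np.maximum(x - (Q @ x - b) / L, 0.0)  # gradient step + projection
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

Starting the solver from a learned prediction close to the optimal trajectory (here, a point near the true minimizer) needs no more iterations than a cold start from zero, which is the effect T-PDG exploits at much larger scale.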