Drone racing prepares neural-network AI for space
Drones are being raced against the clock at Delft University of Technology's "Cyber Zoo" to test the performance of neural-network-based AI control systems planned for next-generation space missions.
The research, undertaken by ESA's Advanced Concepts Team (ACT) together with the Micro Air Vehicle Laboratory (MAVLab) of TU Delft, is detailed in the latest issue of Science Robotics.
"Through a long-term collaboration, we've been looking into the use of trainable neural networks for the autonomous oversight of all kinds of demanding spacecraft maneuvers, such as interplanetary transfers, surface landings and dockings," notes Dario Izzo, scientific coordinator of ESA's ACT.
"In space every onboard resource must be utilized as efficiently as possible—including propellant, available energy, computing resources, and often time. Such a neural network approach could enable optimal onboard operations, boosting mission autonomy and robustness. But we needed a way to test it in the real world, ahead of planning actual space missions.
"That's when we settled on drone racing as the ideal gym environment to test end-to-end neural architectures on real robotic platforms, to increase confidence in their future use in space."
Drones have been competing to achieve the best time through a set course within the Cyber Zoo at TU Delft, a 10 x 10 m test area maintained by the University's Faculty of Aerospace Engineering, ESA's partner in this research. Human-steered "Micro Air Vehicle" quadcopters were alternated with autonomous counterparts controlled by neural networks trained in various ways.
"The traditional way that spacecraft maneuvers work is that they are planned in detail on the ground then uploaded to the spacecraft to be carried out," explains ACT Young Graduate Trainee Sebastien Origer. "Essentially, when it comes to mission Guidance and Control, the Guidance part occurs on the ground, while the Control part is undertaken by the spacecraft."
The space environment is inherently unpredictable, however, with the potential for all kinds of unforeseen factors and noise, such as gravitational variations, atmospheric turbulence or planetary bodies that turn out to be shaped differently than ground-based models predicted.
Whenever the spacecraft deviates from its planned path for whatever reason, its control system works to return it to the set profile. The problem is that such an approach can be quite costly in resource terms, requiring a whole series of brute-force corrections.
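In rough terms, that classical split can be pictured as a controller that keeps pushing the vehicle back onto a fixed, pre-uploaded reference trajectory. The short Python sketch below is only illustrative; the dynamics, gains and variable names are assumptions for the sake of the example, not anything taken from mission software or the paper.

    import numpy as np

    # Illustrative sketch of the classical split: guidance is a reference
    # trajectory computed on the ground, while onboard control pushes the
    # vehicle back toward that fixed profile after every disturbance.
    # Gains and dynamics below are placeholders, not values from the study.

    def pd_tracking_control(state, ref_state, kp=2.0, kd=1.0):
        """Proportional-derivative correction toward the uploaded reference.

        state, ref_state: tuples of (position, velocity) arrays.
        Returns a corrective acceleration command.
        """
        pos_err = ref_state[0] - state[0]
        vel_err = ref_state[1] - state[1]
        return kp * pos_err + kd * vel_err

    # Toy closed loop: effort is spent forcing the vehicle back onto the
    # pre-planned path whenever unmodeled noise pushes it off course.
    dt = 0.1
    pos, vel = np.array([0.0]), np.array([0.0])
    reference = [(np.array([0.1 * t]), np.array([0.1])) for t in range(100)]
    for ref in reference:
        u = pd_tracking_control((pos, vel), ref)
        vel = vel + (u + np.random.normal(0.0, 0.01)) * dt  # unmodeled noise
        pos = pos + vel * dt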
Sebastien adds, "Our alternative approach, end-to-end Guidance & Control Networks (G&C Nets), involves all the work taking place on the spacecraft. Instead of sticking to a single set course, the spacecraft continuously replans its optimal trajectory starting from its current position, which proves to be much more efficient."
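Conceptually, a G&C Net collapses guidance and control into a single network that maps the craft's current state directly to actuator commands, so the "plan" is implicitly recomputed from wherever the vehicle happens to be. The Python/PyTorch sketch below illustrates that idea only; the layer sizes and the choice of state and command variables are assumptions, not the architecture reported in Science Robotics.

    import torch
    import torch.nn as nn

    # Minimal sketch of an end-to-end Guidance & Control Network: one
    # network maps the current state straight to commands, with no fixed
    # reference trajectory to track. Dimensions are illustrative.

    STATE_DIM = 13    # e.g. position, velocity, attitude quaternion, body rates
    COMMAND_DIM = 4   # e.g. collective thrust plus three body rates

    gcnet = nn.Sequential(
        nn.Linear(STATE_DIM, 128), nn.Tanh(),
        nn.Linear(128, 128), nn.Tanh(),
        nn.Linear(128, COMMAND_DIM),
    )

    def onboard_step(current_state: torch.Tensor) -> torch.Tensor:
        """Return commands for the current state; replanning is implicit."""
        with torch.no_grad():
            return gcnet(current_state)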
In computer simulations, neural nets composed of interlinked neurons—mimicking the setup of animal brains—performed well when trained using "behavioral cloning," based on prolonged exposure to expert examples. But then came the question of how to build trust in this approach in the real world. At this point, the researchers turned to drones.
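Behavioral cloning in this setting amounts to ordinary supervised learning: states sampled along expert-computed optimal trajectories are paired with the expert's commands, and the network is trained to reproduce them. The Python/PyTorch sketch below shows the idea with placeholder data; the dataset, network, loss and optimizer settings are assumptions rather than those used in the study.

    import torch
    import torch.nn as nn

    # Behavioral cloning sketch: regress the expert's commands from the
    # corresponding states. Data here is random placeholder content.

    # expert_states:   (N, state_dim) sampled along optimal trajectories
    # expert_commands: (N, command_dim) the expert's commands at those states
    expert_states = torch.randn(10_000, 13)
    expert_commands = torch.randn(10_000, 4)

    net = nn.Sequential(
        nn.Linear(13, 128), nn.Tanh(),
        nn.Linear(128, 128), nn.Tanh(),
        nn.Linear(128, 4),
    )
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(100):
        optimizer.zero_grad()
        predicted = net(expert_states)
        loss = loss_fn(predicted, expert_commands)  # imitate the expert
        loss.backward()
        optimizer.step()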
"There's quite a lot of synergies between drones and spacecraft, although the dynamics involved in flying drones are much faster and noisier," comments Dario.
More information: Dario Izzo et al, Optimality principles in spacecraft neural guidance and control, Science Robotics (2024). DOI: 10.1126/scirobotics.adi6421
Provided by European Space Agency