The Mars rovers, developed by NASA, are among the greatest scientific and space-exploration achievements of the last two decades. Using on-board processors less powerful than an iPhone 1, four generations of rovers have roamed the red planet, gathering scientific data, sending back evocative photos, and surviving very harsh conditions.
Perseverance, the most recent rover, was launched on July 30, 2020, and engineers are already planning the next generation of rovers. These missions, while significant, have just scratched the surface (literally and metaphorically) of the planet’s geology, topography, and atmosphere.
“The surface area of Mars is approximately the same as the total area of the land on Earth,” said Masahiro (Hiro) Ono, group lead of the Robotic Surface Mobility Group at the NASA Jet Propulsion Laboratory (JPL), which has led all the Mars rover missions, and one of the researchers who developed the software that allows the current rover to operate.
“Imagine, you’re an alien and you know almost nothing about Earth, and you land on seven or eight points on Earth and drive a few hundred kilometers. Does that alien species know enough about Earth?” Ono asked.
“No. If we want to represent the huge diversity of Mars we’ll need more measurements on the ground, and the key is substantially extended distance, hopefully covering thousands of miles.”
Traveling over Mars’ varied, dangerous terrain with limited computational power and a restricted energy diet that allows the rover to capture and convert only as much solar energy as it can in a single Martian day, or sol, is a significant challenge.
Sojourner, the first rover, covered 330 feet in 91 sols; Spirit, the second, reached 4.8 miles in roughly five years; Opportunity, 28 miles in 15 years; and Curiosity, more than 12 miles since it landed in 2012.
“Our team is working on Mars robot autonomy to make future rovers more intelligent, to enhance safety, to improve productivity, and in particular to drive faster and farther,” Ono said.
New Hardware, New Possibilities
The Perseverance rover, which launched this summer, runs on BAE Systems’ RAD750 radiation-hardened single-board processors.
Future missions, on the other hand, could make use of high-performance, multi-core, radiation-hardened processors developed under the High Performance Spaceflight Computing (HPSC) program. With the same amount of power, these devices will deliver nearly a hundred times the computing capacity of current flight computers.
“All of the autonomy that you see on our latest Mars rover is largely human-in-the-loop, meaning it requires human interaction to operate,” said Chris Mattmann, the deputy chief technology and innovation officer at JPL.
“Part of the reason for that is the limits of the processors that are running on them. One of the core missions for these new chips is to do deep learning and machine learning, like we do terrestrially, on board. What are the killer apps given that new computing environment?”
The MAARS (Machine Learning-based Analytics for Autonomous Rover Systems) program, which began three years ago and will end this year, covers a wide range of applications for artificial intelligence. In March 2020, the MAARS project team presented its findings at the IEEE Aerospace Conference. The project was a NASA Software Award finalist.
“Terrestrial high performance computing has enabled incredible breakthroughs in autonomous vehicle navigation, machine learning, and data analysis for Earth-based applications,” the team wrote in their IEEE paper. “The main roadblock to a Mars exploration rollout of such advances is that the best computers are on Earth, while the most valuable data is located on Mars.”
Ono, Mattmann, and their team have been developing two novel capabilities for future Mars rovers, which they call Drive-By Science and Energy-Optimal Autonomous Navigation, using the Maverick2 supercomputer at the Texas Advanced Computing Center (TACC), as well as Amazon Web Services and JPL clusters.
Energy-Optimal Autonomous Navigation
Ono was a member of the Perseverance pathfinding software development team. Perseverance’s software has some machine learning capabilities, but its pathfinding algorithm is still somewhat crude.
“We’d like future rovers to have a human-like ability to see and understand terrain,” Ono said. “For rovers, energy is very important. There’s no paved highway on Mars. The drivability varies substantially based on the terrain, for instance, beach versus bedrock. That is not currently considered. Coming up with a path with all of these constraints is complicated, but that’s the level of computation that we can handle with the HPSC or Snapdragon chips. But to do so, we’re going to need to change the paradigm a little bit.”
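To make the terrain-cost idea concrete, here is a minimal, illustrative sketch of energy-aware path planning: a Dijkstra search over a terrain grid in which each terrain type carries an assumed per-cell energy cost, so the cheapest route may detour around sand rather than drive straight across it. The terrain types, costs, and grid are invented for illustration; this is not mission data or JPL’s planner.

```python
import heapq

# Illustrative only: terrain types and per-cell energy costs are invented.
ENERGY_COST = {"bedrock": 1.0, "gravel": 2.5, "sand": 6.0}

def cheapest_path(grid, start, goal):
    """Dijkstra over a terrain grid, minimizing energy rather than distance."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start, [start])]  # (energy so far, cell, path)
    visited = set()
    while frontier:
        energy, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return energy, path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = ENERGY_COST[grid[nr][nc]]
                heapq.heappush(frontier, (energy + step, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

grid = [
    ["bedrock", "sand",    "bedrock"],
    ["bedrock", "sand",    "bedrock"],
    ["bedrock", "bedrock", "bedrock"],
]
print(cheapest_path(grid, (0, 0), (0, 2)))
```

In this toy example, driving straight across the sand costs 7 units while the longer bedrock detour costs 6, so the planner takes the detour, exactly the kind of terrain-dependent trade-off Ono describes.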
Ono describes that new paradigm as “commanding by policy,” a middle ground between the human-dictated “Go from A to B and do C” and the purely autonomous “Go do science.”
Pre-planning for a variety of scenarios is part of commanding by policy, which allows the rover to determine what conditions it is confronting and what it should do.
“We use a supercomputer on the ground, where we have infinite computational resources like those at TACC, to develop a plan where a policy is: if X, then do this; if Y, then do that,” Ono explained.
“We’ll basically make a huge to-do list and send gigabytes of data to the rover, compressing it in huge tables. Then we’ll use the increased power of the rover to de-compress the policy and execute it.”
Machine learning-derived optimizations are used to create the pre-planned list. The on-board chip can then use those plans to perform inference: collecting inputs from the environment and feeding them into the pre-trained model. Inference is significantly easier to compute and can run on a chip like those that may accompany future Mars rovers.
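As a toy sketch of how such a policy might be packaged and executed, assume the rover’s state is discretized into slope and battery level, and the ground system has precomputed an action for every combination. The state variables, thresholds, actions, and compression scheme below are invented for illustration, not JPL’s actual flight software.

```python
import gzip
import json

# Ground side: a supercomputer enumerates conditions and precomputes actions.
# The slopes, energy levels, and action names here are hypothetical.
def plan_policy():
    policy = {}
    for slope in range(0, 30, 5):           # terrain slope, degrees
        for energy in range(10, 101, 10):   # battery level, percent
            if slope > 20:
                action = "reroute"
            elif energy < 30:
                action = "stop_and_recharge"
            else:
                action = "drive"
            policy[f"slope={slope},energy={energy}"] = action
    return policy

# Uplink: compress the policy table before sending it to the rover.
blob = gzip.compress(json.dumps(plan_policy()).encode())

# Rover side: decompress once, then each decision is a cheap table lookup.
onboard_policy = json.loads(gzip.decompress(blob))

def decide(slope_deg, energy_pct):
    key = f"slope={slope_deg - slope_deg % 5},energy={energy_pct - energy_pct % 10}"
    return onboard_policy.get(key, "stop_and_wait_for_ground")

print(decide(17, 80))  # -> "drive"
```

The expensive enumeration happens on the ground; on the rover, each decision reduces to decompressing the table once and performing cheap lookups.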
“The rover has the flexibility of changing the plan on board instead of just sticking to a sequence of pre-planned options,” Ono said. “This is important in case something bad happens or it finds something interesting.”
Drive-By Science
According to Mattmann, current Mars missions often use tens of photos from the rover per sol to decide what to do the next day.
“But what if in the future we could use one million image captions instead? That’s the core tenet of Drive-By Science,” he said. “If the rover can return text labels and captions that were scientifically validated, our mission team would have a lot more to go on.”
Mattmann and his colleagues customized Google’s Show and Tell software, a neural image caption generator first released in 2014, for the rover missions, making it the first non-Google deployment of the technology.
The system takes in photographs and generates captions that are understandable to humans. These include basic but crucial information such as cardinality (how many rocks, and how far away?) and features such as vein structure in outcrops near bedrock.
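The team’s adapted Show and Tell model isn’t reproduced here, but the same encoder-decoder captioning pattern can be sketched with an off-the-shelf checkpoint. The Hugging Face model below (nlpconnect/vit-gpt2-image-captioning) and the image filename are stand-ins, not the mission system.

```python
from PIL import Image
from transformers import AutoTokenizer, ViTImageProcessor, VisionEncoderDecoderModel

# Load an off-the-shelf encoder-decoder captioner (a stand-in, not JPL's model).
ckpt = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(ckpt)
processor = ViTImageProcessor.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

# "navcam_frame.png" is a hypothetical rover image file.
image = Image.open("navcam_frame.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Beam search keeps the caption short and fluent.
output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```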
“These are the types of science knowledge that we currently use images for to decide what’s interesting,” Mattmann said.
Planetary geologists have spent the last few years labeling and curating Mars-specific image captions in order to train the model.
“We use the one million captions to find the 100 most important things,” Mattmann said. “Using search and information retrieval capabilities, we can prioritize targets. Humans are still in the loop, but they’re getting much more information and are able to search it a lot faster.”
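Here is a minimal sketch of that kind of caption search, using TF-IDF cosine similarity to rank captioned images against a science query. The captions and query are invented examples; the team’s actual retrieval stack is not specified here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example captions standing in for rover-generated ones.
captions = [
    "flat bedrock with thin white veins in the foreground",
    "sand dune field, no rocks within five meters",
    "layered outcrop with three large boulders to the left",
]
query = ["vein structure in bedrock outcrop"]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(captions)
query_vec = vectorizer.transform(query)

# Rank every captioned image by similarity to the science query.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {captions[idx]}")
```

With a million captions, the same ranking idea lets scientists surface the handful of images that matter most without scanning them all.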
The team’s findings will be published in the September 2020 issue of Planetary and Space Science.
The supercomputers at TACC assisted the JPL team in testing the system. On Maverick2, the team used 6,700 expert-created labels to train, evaluate, and improve their model.
Future Mars rovers will need to be capable of traveling much farther. The Sample Fetch Rover, for example, is being developed by the European Space Agency for launch in the late 2020s, with its primary mission being to gather samples dug up by the Mars 2020 rover, Perseverance.
“Those rovers in a period of years would have to drive 10 times further than previous rovers to collect all the samples and to get them to a rendezvous site,” Mattmann said. “We’ll need to be smarter about the way we drive and use energy.”
The new models and algorithms are evaluated on a dirt training field near JPL that serves as an Earth-based approximation of the Martian surface before being loaded onto a rover bound for Mars.
The researchers created a display that includes an overview map, streaming images gathered by the rover, and live output from the algorithms running on board, showing the rover performing terrain classification and captioning in real time. They had hoped to complete testing of the new system this spring, but COVID-19 forced the facility to close and testing to be postponed.
Meanwhile, Ono and his colleagues created AI4Mars, a citizen science program that allows anyone to interpret more than 20,000 photographs collected by the Curiosity rover. These will be used to improve machine learning algorithms’ ability to detect and avoid dangerous terrain. In less than three months, the public has generated 170,000 labels.
“People are excited. It’s an opportunity for people to help,” Ono said. “The labels that people create will help us make the rover safer.”
According to Ono, the efforts to establish a new AI-based paradigm for future autonomous missions can be applied to any autonomous space mission, including orbiters, fly-bys, and interstellar probes.
Thanks to a mix of more powerful on-board computing, pre-planned commands computed on high-performance computers like those at TACC, and novel algorithms, future rovers could travel considerably farther and undertake more science.