Smart as the Mars Curiosity mission has been about landing and finding its own way on a distant world, the rover is pretty brainless when it comes to doing the science that it was sent 567 million kilometers to carry out. That has to change if future rover missions are to make discoveries further out in the solar system, scientists say.

The change has now begun with the development of a new camera that can do more than just take pictures of alien rocks – it also thinks about what the pictures signify so the rover can decide on its own whether to keep exploring a particular site, or move on.

“We currently have a micromanaging approach to space exploration,” said senior researcher Kiri Wagstaff, a computer scientist and geologist at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. “While this suffices for our rovers on Mars, it works less and less well the further you get from the Earth. If you want to get ambitious and go to Europa and asteroids and comets, you need more and more autonomy to even make that feasible.”

To help future rover and space missions spend less time waiting for instructions from Earth, Wagstaff and her colleagues developed an advanced two-lens camera, called TextureCam. Although Curiosity and other rovers can already, on their own, distinguish rocks from other objects in photos they take, they must send images all the way to Earth for scientific analysis of a particular rock. This process costs time and limits the potential scientific scope of rovers’ missions. TextureCam can do the analysis by itself.

The work is detailed in Geophysical Research Letters, a publication of the American Geophysical Union.

Micromanaging on Mars

At the beginning of each Martian day, called a sol, scientists on Earth upload an agenda to a Mars rover. This scientific schedule details nearly all of the rover’s movements: roll forward so many meters, snap a photo, scoop a soil sample, run rudimentary tests on it and move on.

Even traveling at the speed of light, instructions from Earth take about 20 minutes to reach the surface of Mars. The resulting 40-minute roundtrip makes real-time control of the rover impossible. For Jupiter’s moon Europa, where astrobiologists suspect extraterrestrial life could exist, the roundtrip delay balloons to over 90 minutes.
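For a rough sense of where those numbers come from, the delay is simply distance divided by the speed of light. The sketch below uses illustrative separations (Earth–Mars and Earth–Jupiter distances vary widely with orbital position); they are assumptions for illustration, not mission figures.

```python
# Back-of-the-envelope one-way light-time estimates for command delays.
# The distances below are illustrative assumptions; actual separations vary
# widely as the planets move around the Sun.

SPEED_OF_LIGHT_KM_S = 299_792.458

distances_km = {
    "Mars (near maximum separation)": 3.6e8,    # assumed ~360 million km
    "Jupiter/Europa (typical)": 8.2e8,          # assumed ~820 million km
}

for body, d_km in distances_km.items():
    one_way_min = d_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{body}: one-way delay ~{one_way_min:.0f} min, "
          f"roundtrip ~{2 * one_way_min:.0f} min")
```

At the assumed Mars distance this gives roughly a 20-minute one-way delay and a 40-minute roundtrip; at the assumed Jupiter distance the roundtrip exceeds 90 minutes, consistent with the figures above.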

“Right now for the rovers, each day is planned out on Earth based on the images the rover took the previous day,” said Wagstaff. “This is a huge limitation and one of the main bottlenecks for exploration with these spacecraft.”

While researchers recently introduced autonomous navigation on the Curiosity rover, its scientific objectives are still set on Earth, based on the images it transmits home. Mars-to-Earth communication costs precious power and trickles along at roughly 0.012 megabits per second, about 250 times slower than a 3G cellphone network.
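As a quick sanity check on those figures, the arithmetic below compares the quoted rover rate to an assumed 3-megabit-per-second 3G connection and estimates how long a hypothetical 4-megabyte image would take to send at the rover's direct rate. The 3G rate and the image size are assumptions for illustration only.

```python
# Sanity check on the quoted data rates.
ROVER_RATE_MBPS = 0.012        # direct-to-Earth rate quoted above
ASSUMED_3G_RATE_MBPS = 3.0     # assumed typical 3G throughput (illustrative)

print(f"3G is roughly {ASSUMED_3G_RATE_MBPS / ROVER_RATE_MBPS:.0f}x faster")

# Time to send one hypothetical 4-megabyte image at the rover's direct rate.
image_megabits = 4 * 8
seconds = image_megabits / ROVER_RATE_MBPS
print(f"One 4 MB image: ~{seconds / 60:.0f} minutes at {ROVER_RATE_MBPS} Mbps")
```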

Mars orbiters can relay data at higher rates, but those satellites pass into the right alignment for only a few minutes each day. Curiosity’s constrained connection limits the number of Martian images it can send back to Earth.

“If the rover itself could prioritize what’s scientifically important, it would suddenly have the capability to take more images than it knows it can send back. That goes hand in hand with its ability to discover new things that weren’t anticipated,” said Wagstaff.
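As a rough illustration of what such onboard prioritization could look like, the sketch below greedily fills a daily downlink budget with the highest-scoring images. The image names, sizes, science scores, and budget are all hypothetical; TextureCam's actual scoring would come from its onboard image analysis.

```python
# Minimal sketch of onboard image prioritization under a downlink budget.
from typing import NamedTuple

class CandidateImage(NamedTuple):
    name: str
    size_megabits: float
    science_score: float  # e.g., fraction of pixels classified as layered rock

def prioritize(images: list[CandidateImage], budget_megabits: float) -> list[str]:
    """Greedily pick the highest-scoring images that fit in today's downlink."""
    chosen, used = [], 0.0
    for img in sorted(images, key=lambda i: i.science_score, reverse=True):
        if used + img.size_megabits <= budget_megabits:
            chosen.append(img.name)
            used += img.size_megabits
    return chosen

# Hypothetical candidates taken during one sol.
candidates = [
    CandidateImage("sol0812_a", 32.0, 0.91),
    CandidateImage("sol0812_b", 32.0, 0.12),
    CandidateImage("sol0812_c", 32.0, 0.67),
]
print(prioritize(candidates, budget_megabits=70.0))  # ['sol0812_a', 'sol0812_c']
```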

Recognizing rocks

When TextureCam’s stereo cameras snap 3D images, a special processor separate from the rover’s main computer analyzes the pictures. By recognizing textures in the photos, the processor distinguishes among sand, rocks and sky. It then uses the size of and distance to the rocks in the picture to determine whether any are scientifically important layered rocks.
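The paper describes a texture-based classifier trained on labeled images; the sketch below is a minimal stand-in for that idea, using crude patch statistics and an off-the-shelf random forest. The feature set, patch size, class labels, and synthetic training data are assumptions for illustration, not TextureCam's actual design.

```python
# Minimal per-pixel texture classification sketch (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

PATCH = 9  # assumed odd patch size, in pixels

def patch_features(image: np.ndarray, row: int, col: int) -> np.ndarray:
    """Crude texture descriptors for the patch centered on (row, col)."""
    half = PATCH // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1].astype(float)
    gy, gx = np.gradient(patch)
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

def sample_training_set(image, label_map, n_samples=500, seed=0):
    """Sample labeled pixels (0 = sand, 1 = rock, 2 = sky in this sketch)."""
    rng = np.random.default_rng(seed)
    half = PATCH // 2
    rows = rng.integers(half, image.shape[0] - half, n_samples)
    cols = rng.integers(half, image.shape[1] - half, n_samples)
    X = np.array([patch_features(image, r, c) for r, c in zip(rows, cols)])
    y = label_map[rows, cols]
    return X, y

# Synthetic stand-ins for a hand-labeled training image; real training would
# use Mars surface images with labeled sand/rock/sky regions.
rng = np.random.default_rng(1)
train_image = rng.integers(0, 256, size=(200, 200))
train_labels = rng.integers(0, 3, size=(200, 200))

X, y = sample_training_set(train_image, train_labels)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify one new patch: the predicted label says what kind of terrain it is.
print(clf.predict(patch_features(train_image, 100, 100).reshape(1, -1)))
```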

The system’s built-in processor avoids straining the rover’s busy main processor. When TextureCam spots an interesting rock, it can either send a high-resolution image back to Earth or tell the main processor to move toward the rock and take a sample.

“You do have to provide it with some initial training, just like you would with a human, where you give it example images of what to look for,” said Wagstaff. “But once it knows what to look for, it can make the same decisions we currently do on Earth.”

From deserts to planets

Early in its development, Wagstaff and her colleagues trained TextureCam on real Martian images taken by previous rover missions. The training worked much like the face-unlock feature on smartphones and computers: the more examples of interesting rocks the system was shown, the better it became at identifying the common features that make rocks scientifically important. TextureCam was recently put through its paces in the rocky landscape of the Mojave Desert in Southern California, a useful analog for the Martian surface.
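The "more examples, better recognition" behavior is the usual pattern for supervised classifiers. The toy experiment below shows it with synthetic data standing in for labeled rock images; the dataset, classifier, and sample sizes are arbitrary choices for illustration.

```python
# Toy demonstration: accuracy on held-out data improves with more training examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=500,
                                                  random_state=0)

for n in (20, 100, 500, 1500):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_pool[:n], y_pool[:n])
    print(f"{n:4d} training examples -> accuracy {clf.score(X_test, y_test):.2f}")
```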

Wagstaff predicts TextureCam could greatly benefit future Mars rovers, such as the Mars 2020 rover, as well as missions to other planets and moons.

Notes for Journalists: Journalists and public information officers (PIOs) of educational and scientific institutions who have registered with AGU can download a PDF copy of this early view article by clicking on this link: http://onlinelibrary.wiley.com/doi/10.1002/grl.50817/abstract

Or, you may order a copy of the final paper by emailing your request to Thomas Sumner at tsumner@agu.org. Please provide your name, the name of your publication, and your phone number.

Neither the paper nor this press release is under embargo.

Title: “Smart, texture-sensitive instrument classification for in situ rock and layer analysis”

Authors:

K. L. Wagstaff and D. R. Thompson, Machine Learning and Instrument Autonomy, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA;

W. Abbey and A. Allwood, Planetary Chemistry and Astrobiology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA;

D. L. Bekker, Instrument Flight Software and Ground Support Equipment, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA;

N. A. Cabrol, Space Science Division, NASA Ames Research Center/SETI Institute, Moffett Field, California, USA;

T. Fuchs, Mobility and Robotic Systems, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA;

K. Ortega, Distributed and Real-Time Systems, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, USA.

Contact information for the authors:

Kiri Wagstaff, Phone: +1 (626) 354-7131, Email: kiri.wagstaff@jpl.nasa.gov