As NASA sets its sights on the Moon and beyond to begin a new chapter in human space exploration, the agency is working to develop more autonomous spacecraft built around commercial off-the-shelf computer systems that can better withstand the detrimental effects of space radiation, which causes glitches in ordinary off-the-shelf processors.

While current computer systems aboard the International Space Station and the space shuttle use radiation-resistant computer chips, these so-called rad-hard components are expensive, slow and consume lots of energy as a trade-off for the extra radiation protection.

NASA would like to increase onboard computer performance and data analysis for future exploration and science missions around the solar system.

That is why NASA’s New Millennium Program is sponsoring the Environmentally Adaptive Fault-Tolerant Computing System (EAFTC) project. The goal is to cluster faster, less expensive and less power-hungry consumer processors into an onboard computing system that can operate effectively in the presence of space radiation.

Honeywell Aerospace of Clearwater, Fla., will be the technology developer on the computer payload, which NASA estimates will cost about $10 million.

“In a spacecraft, the most precious commodities are mass and power. We’re talking about having more capability using commercial processors for less power,” said Raphael Some, a microelectronics and avionics technologist for the New Millennium Program. “We cannot afford to build these future systems from rad-hard parts.”

More capability and autonomy

To increase the performance of space computers, technicians are looking to take off-the-shelf commercial processors and link them into a computer cluster aboard future NASA spacecraft. Each processor will have an attached Field Programmable Gate Array, a hardware component whose wiring technicians can reconfigure to run specific software algorithms more efficiently, Some said.

This ability leads to a 10- to 1,000-fold improvement in performance, Some said, allowing for more comprehensive data analysis that will increase spacecraft autonomy.

Current transmissions sent to Earth from distant spacecraft contain scientific data in raw form, which has to be assembled and analyzed by scientists on the ground. Because of limited data-link bandwidth, this can be a time-consuming process that leaves little opportunity to change mission parameters in a timely manner, Some said.

To compress this data and make the most of the limited communication bandwidth available to a spacecraft, Some said, an onboard computer cluster could process the raw measurements and send more comprehensible results to Earth for quicker analysis.

“We are essentially looking to take a computer cluster and move it from a terrestrial laboratory into the spacecraft,” Some said. “We want to transmit knowledge rather than data.”

Some used the example of an orbiting spacecraft studying the surface of Mars. If the spacecraft detected a melting ice field, it could be programmed to recognize what is scientifically important and perhaps alter its course to pass over that ice field in a shorter period of time than originally planned or focus more of its instruments on the field. “We need something on board that can recognize a scientifically interesting event and decide if it’s important and change mission parameters to take advantage of the opportunity to study,” he said.
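In rough terms, that kind of onboard triage amounts to scoring an observation for scientific interest and deciding whether to change the plan. The short Python sketch below is purely illustrative; the scoring function, the threshold and the returned actions are invented for this example and are not part of the EAFTC design.

```python
def assess_observation(observation, score_fn, threshold=0.8):
    """Toy onboard triage: score an observation for scientific interest
    and decide whether to change the mission plan.

    The score function, the 0.8 threshold and the action strings are
    hypothetical, standing in for real onboard analysis software.
    """
    score = score_fn(observation)        # e.g. "how much does this look like melting ice?"
    if score >= threshold:
        return "retask"                  # alter course or point more instruments at it
    return "continue_nominal_plan"       # nothing unusual: keep the original sequence

# Example with a trivial scoring function standing in for real image analysis.
print(assess_observation({"surface_melt_fraction": 0.9},
                         score_fn=lambda obs: obs["surface_melt_fraction"]))  # -> 'retask'
```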

Trade-offs

While off-the-shelf processors offer increased performance and efficiency at a lower cost than rad-hard components, radiation is a major concern for commercial technology that is not designed to withstand single-event upsets, which occur when protons or heavy ions from the solar wind or galactic cosmic rays slam into onboard spacecraft electronics, deposit a charge and cause data glitches.

“Over the last few generations, just serendipitously, things we’ve done to improve commercial parts have made them more resistant to radiation, except for these single-event upsets,” Some noted.

This is one area where having multiple commercial processors comes into play. There are several methods to increase fault-tolerance (error detection) using multiple processors, and the computer cluster could adapt, devoting more of its resources to fault-tolerance methods in high-radiation environments or allocating more capacity to data throughput in low-radiation settings. But there will always be a trade-off, Some said.

“This machine is adaptable. It can sense the environment or be told what the environment is and what the single-event upset rate is and then adapt…,” Some said. “But there’s always a trade-off in terms of efficiency, energy or time taken to do it.”

Fault-tolerance

One common fault-tolerance method is called Triple Modular Redundancy, in which three processors execute the same calculation and their values are compared at every step.

Using Triple Modular Redundancy, it is easy to spot an error in one processor when its output differs from the other two. “It is highly unlikely I would have the exact same error at the same time in two processors,” Some said. But while that kind of redundancy is effective, it still uses three times the energy and mass of a single processor, he said.
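The voting step behind Triple Modular Redundancy reduces to a simple majority check. The Python sketch below is only an illustration of that idea; the function name and sample values are hypothetical, and the real flight system would perform the comparison continuously in low-level software or hardware.

```python
from collections import Counter

def tmr_vote(results):
    """Majority-vote among three redundant results.

    Returns the value reported by at least two of the three processors;
    raises if all three disagree, the unlikely multiple-upset case that
    would force a retry.
    """
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("No majority: all three results differ")
    return value

# Example: processor B suffered a single-event upset and reported 17.
print(tmr_vote([42, 17, 42]))  # -> 42, the faulty value is outvoted
```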

Another option, called a checkpoint rollback scheme, uses two processors and frequently saves checkpoints of the calculation to memory. If there is a discrepancy between the two outputs, it will not be clear which one is right, but the computers can be reset to a previous checkpoint and rerun the calculation from there.

While it saves a third of the energy and mass required by Triple Modular Redundancy, the drawback of the checkpoint rollback scheme is that it consumes a lot of time, Some said.
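A rough Python sketch of the duplex checkpoint-rollback idea follows. The function, the checkpoint interval and the simple equality comparison are simplifications made up for illustration, not the actual EAFTC implementation.

```python
import copy

def run_with_rollback(step, initial_state, n_steps, checkpoint_every=10):
    """Duplex checkpoint-rollback sketch.

    Two copies of a calculation advance in lockstep and are compared at
    each checkpoint. A mismatch means one copy was upset, but not which
    one, so both roll back to the last agreed checkpoint and redo that
    segment, paying in time instead of a third processor.
    """
    saved_state, saved_i = copy.deepcopy(initial_state), 0
    a = copy.deepcopy(initial_state)
    b = copy.deepcopy(initial_state)
    i = 0
    while i < n_steps:
        a, b = step(a), step(b)                     # same step on both processors
        i += 1
        if i % checkpoint_every == 0 or i == n_steps:
            if a == b:                              # outputs agree: commit a checkpoint
                saved_state, saved_i = copy.deepcopy(a), i
            else:                                   # discrepancy: roll both back
                a = copy.deepcopy(saved_state)
                b = copy.deepcopy(saved_state)
                i = saved_i
    return a

# Example: no upsets occur, so the duplex run simply completes.
print(run_with_rollback(lambda x: x + 1, 0, n_steps=25))  # -> 25
```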

“There are many things you can do to detect and recover from errors but there’s always a trade-off,” Some said. “We’re trying to find low-cost ways to provide this capability.”
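Putting the pieces together, the adaptation Some describes amounts to choosing a protection scheme based on the sensed or commanded upset rate. The Python fragment below is only a sketch of that idea; the thresholds and mode names are invented here and are not drawn from the EAFTC design.

```python
def choose_fault_tolerance_mode(upset_rate_per_hour):
    """Pick a protection scheme from a sensed (or commanded)
    single-event-upset rate. Thresholds are invented for illustration;
    the point is that stronger protection buys less throughput.
    """
    if upset_rate_per_hour > 10.0:
        return "triple_modular_redundancy"   # harsh environment: spend extra resources on safety
    if upset_rate_per_hour > 1.0:
        return "checkpoint_rollback"         # moderate environment: duplex, pay in time instead
    return "single_string"                   # benign environment: all capacity goes to science

print(choose_fault_tolerance_mode(0.3))   # -> 'single_string'
print(choose_fault_tolerance_mode(40.0))  # -> 'triple_modular_redundancy'
```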

Testing

The EAFTC project is part of New Millennium’s Space Technology 8 (ST8) mission, which will test a suite of advanced technologies in space. The mission, which includes EAFTC and three other new-technology projects, is slated to launch in 2008.

While the EAFTC is not intended to operate in extremely high-radiation environments such as within the Van Allen radiation belt around the Earth, “the ST8 mission is set to go through several high-radiation areas so we can validate the fault-tolerance performance,” Some said.