Sir Arthur C. Clarke’s 1945 prediction of the geostationary satellite was dismissed by some Wireless World readers. Twenty years later, on April 6, 1965, the launch of Intelsat 1 proved the famed science-fiction novelist got it largely right. Credit: Wireless World via Lakdiva.org

This op-ed originally appeared in the June 25, 2018 issue of SpaceNews magazine.

In 1945, Arthur C. Clarke accurately predicted that a satellite in a 24-hour orbit some 35,000 kilometers above the ground would appear to sit still in the sky and could thus be used as a relay to transmit television signals to widespread areas. In the decades that followed, several geostationary voice-and-data services such as Inmarsat and Intelsat were deployed, and they still operate today with great success. Nonetheless, the telecommunications industry has continuously sought to further improve the performance of space-based networks. A key consideration is signal propagation latency, which in the case of GEO averages some 300 milliseconds; this is right at the limit for sustaining natural voice communications.

Globalstar, a constellation of 24 satellites in low Earth orbit (LEO) at 1,400 kilometers above the ground, provided latencies as low as 50 milliseconds. However, its bent-pipe architecture required each satellite to have continuous, direct reachability to a ground station gateway. Since no ground station could easily be placed at the poles or at sea, global coverage was not possible. Iridium overcame this limitation by using inter-satellite links to route data between neighboring satellites without the intervention of a ground station. Its satellites are not just signal repeaters, but actual routers in space.
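The difference between those latency figures is, to first order, simple geometry: the signal must travel up to the satellite and back down at the speed of light. A back-of-the-envelope sketch (the altitudes are nominal, and real slant ranges make the delays somewhat longer):

```python
# Rough one-way, ground-to-ground propagation delay through a single
# bent-pipe satellite hop: up to the satellite and back down.
# Altitudes are nominal; real paths are slant ranges, so actual
# delays (and the ~300 ms / ~50 ms figures quoted above) are larger.

C_KM_PER_S = 299_792.458  # speed of light in vacuum

def bent_pipe_delay_ms(altitude_km: float) -> float:
    """One-way delay in milliseconds for an up-and-down hop directly overhead."""
    return 2 * altitude_km / C_KM_PER_S * 1000

geo_ms = bent_pipe_delay_ms(35_786)  # geostationary altitude
leo_ms = bent_pipe_delay_ms(1_400)   # Globalstar-class LEO altitude

print(f"GEO hop: {geo_ms:.0f} ms, LEO hop: {leo_ms:.1f} ms")
# → GEO hop: 239 ms, LEO hop: 9.3 ms
```

Even in the best case, a GEO hop costs well over 200 milliseconds each way, while a LEO hop costs under 10; this is why low-latency constellations must fly low, and accept the coverage and handover complexity that comes with it.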

At the moment, the small-satellite sector is one of the fastest-growing industries in the world. The planned deployments of thousands of LEO satellites by OneWeb and SpaceX’s Starlink are proof that the race for high throughput, low latency and wide coverage has only started.

At the same time, here on the ground, the proliferation of connected everyday devices — the Internet of Things (IoT) — is gaining momentum. As general as it might sound, “things” here means not only smartphones or computers but also cars, light bulbs, clothes, coffee mugs, or whatever objects one can imagine, most of them with very limited energy and processing capabilities. Even in this everyday, short-range context, requirements for continuous coverage and low latency are difficult to meet. On the other hand, high throughput is not mandatory in these scenarios, given the limited (yet valuable) amount of data the objects produce.

It did not take long for IoT to reach the space domain. New companies such as Kepler, LacunaSpace, and Astrocast propose to provide IoT services from space using highly constrained spacecraft such as cubesats. Indeed, instead of fielding thousands of satellites, a few simple platforms can collect and carry small amounts of data until they find a suitable ground station through which to deliver it to the end user.

A closer look shows that observation missions in space have followed the same data-handling principle. Whenever an image, radar sweep, or measurement is captured in orbit, the data is kept in local memory until it can be downloaded to the ground. Some distributed missions, such as PRISMA and GOMX-4, have gone a little further, using inter-satellite links to aggregate data in one of several satellites and then conveniently download it from a single spot. Such practices can indeed be extended to larger constellations.

Whether for data acquisition or IoT relay in space, the requirements for continuous coverage and low latency can be relaxed to fit the possibilities of a system built on a different networking paradigm.

In general, one can say that low-latency space systems such as Iridium are an attempt to network the space-terrestrial frontier with the telephone-like synchronous communication paradigm typical of terrestrial networks. This is consistent with the product these networks provide: internet service to customers without access to ground infrastructure.

In the same sense, one can say that high-latency systems such as Kepler, LacunaSpace and some distributed observation missions are an attempt to network the space-terrestrial frontier with an asynchronous communication paradigm typically seen in deep-space missions. In particular, they must cope with high latency and sporadic data transmissions.

High delays (because of signal propagation) and disruptions (because of planetary rotation) are the rule and not the exception in deep-space missions. For example, a signal from Mars can be transmitted only when the spacecraft is on the visible side of the planet, and takes from three to 23 minutes to reach Earth. This leaves any kind of real-time voice or internet data exchange off the table. However, future astronauts and spacecraft will want to stay connected to Earth, and will need local communications as they begin exploration of Mars and the other planets.
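The three-to-23-minute figure follows directly from the Earth–Mars distance, which varies between roughly 55 million and 400 million kilometers over the synodic cycle; a quick check:

```python
# One-way light-travel time from Mars, at the extremes of the
# Earth-Mars distance. Distances are approximate round figures.

C_KM_PER_S = 299_792.458  # speed of light in vacuum

def light_minutes(distance_km: float) -> float:
    """One-way signal travel time in minutes over a given distance."""
    return distance_km / C_KM_PER_S / 60

closest = light_minutes(54.6e6)   # ~54.6 million km, theoretical closest approach
farthest = light_minutes(401e6)   # ~401 million km, near superior conjunction

print(f"{closest:.1f} to {farthest:.1f} minutes")
# → 3.0 to 22.3 minutes
```

And that is only the one-way delay: a question-and-answer exchange near conjunction takes the better part of an hour, which is why any protocol that waits for acknowledgments before proceeding is unusable on such links.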

High-latency systems such as those planned by LacunaSpace, Kepler and some distributed observation missions are an attempt to network the space-terrestrial frontier with an asynchronous communication paradigm typically seen in deep space missions. Credit: LacunaSpace

Since standard internet protocols such as TCP are based on continuous, near-instantaneous end-to-end data exchange (i.e., client/server chattiness), a different and more generic networking framework had to be created to operate in the interplanetary environment. It was termed Delay-Tolerant Networking (DTN). In contrast with the internet, in DTN there is no expectation of an instantaneous reply from the destination, and nodes can keep data in local storage until a suitable link over which to forward it becomes available. (Note that applications using this type of service have to be delay tolerant as well.)
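The store-and-forward behavior described above can be sketched in a few lines. This is an illustrative toy, not any real DTN implementation; the `Bundle` and `Node` names are hypothetical, loosely echoing the "bundles" that DTN protocols move between nodes:

```python
# Toy sketch of DTN-style store-and-forward: a node takes custody of
# data, holds it in local storage indefinitely, and forwards it only
# when a contact (link) to the next hop becomes available.

from collections import deque
from dataclasses import dataclass

@dataclass
class Bundle:
    destination: str
    payload: bytes

class Node:
    def __init__(self, name: str):
        self.name = name
        self.storage: deque = deque()  # local custody queue

    def receive(self, bundle: Bundle) -> None:
        # Unlike TCP, no end-to-end handshake is expected:
        # just take custody of the bundle and store it.
        self.storage.append(bundle)

    def contact(self, neighbor: "Node") -> None:
        # When a link becomes available, drain stored bundles to the neighbor.
        while self.storage:
            neighbor.receive(self.storage.popleft())

sat = Node("leo-sat")
ground = Node("ground-station")

sat.receive(Bundle("ground-station", b"sensor reading"))
# ...possibly hours later, when the satellite finally passes over the station:
sat.contact(ground)
print(len(ground.storage))  # → 1: the bundle arrives despite the long disruption
```

The key contrast with TCP is that no end-to-end path ever needs to exist at one moment in time; custody of the data simply hops forward whenever a link happens to be up.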

Interestingly, these principles also apply to challenged networks on Earth and to observation and IoT data-relay missions in near-Earth orbit. To date, in the absence of clear communications standards built on these principles, most of those asynchronous terrestrial applications have been based on proprietary and ad-hoc solutions; continuously “reinventing the wheel” has resulted in higher development costs, higher error rates, and lack of interoperability.

This may be changing. As discussed in our book “Delay-Tolerant Satellite Networks,” DTN protocols and algorithms are receiving significant attention from industry and academe and are currently on standardization tracks in the Internet Engineering Task Force and the Consultative Committee for Space Data Systems. Tested, non-proprietary solutions to the problems of networking in space, including near-Earth orbit, are at hand.

Does this mean that synchronous satellite constellations need to modify their protocols? Not at all. Existing synchronous LEO satellite systems will succeed brilliantly using internet protocols; internet service is the product they are providing. Does this mean the space-terrestrial frontier will be forever split into two domains? Again, not at all. As likewise discussed in our book, the synchronous networking paradigm can be viewed as a special case of asynchronous communications, where the delay and storage time in nodes are practically zero.

That is, these two models are wholly compatible and complementary. Internet applications need only the internet infrastructure. Delay-tolerant applications need DTN, which operates over highly challenging disrupted and/or delayed links where necessary, but which also operates over the internet (as, itself, an internet application) wherever internet infrastructure is available. Adapting delay-tolerant internet applications — e.g., social media applications — to interact with peer entities over challenged paths is straightforward.

In the long run, with inclusion of the DTN protocols in network routers and end devices, ground nodes, near-Earth constellations, and the deep-space interplanetary network will be able to seamlessly interoperate, finally unifying network communications at the space-terrestrial frontier.


Juan Fraire is an assistant researcher at the Argentinian Scientific and Technical Research Council, associate professor at the Astronomy, Mathematics and Physics Faculty, and consultant at the Argentinian Space Agency, CONAE. Jorge Finochietto is a professor in the School of Engineering at the Universidad Nacional de Cordoba. Scott Burleigh is a principal engineer at NASA Jet Propulsion Laboratory. Their book, “Delay-Tolerant Satellite Networks,” was published in January by Artech House.