There are many challenges to building human settlements on Mars. The most
efficient launch opportunity windows only arise every 2.2 years, when Mars is
closest to Earth. The best journey times are 3-6 months. The atmosphere is
primarily CO2, and it is very cold.
Once we have figured out how to get there and how to reliably support human life
(some are aiming for this decade),
questions of quality of life become relevant. This includes producing water and
food, staying fit and healthy, socialising and entertainment. Normal things
humans like to do.
One of the resources we have come to take for granted is access to the internet.
Whether to look up information, send email or watch a video, internet access is
now fundamental to modern life. However, all of these services are based on
Earth. The internet was designed based on a number of assumptions that will no
longer be true if we want to offer the same experience to citizens of Mars.
This article will examine those assumptions, discuss the challenges and consider
possible solutions to setting up the internet on Mars.
The internet is a large group of interconnected networks. Each network has one
or many devices connected to it, each with its own IP address. When you access a
service hosted on the internet, such as a website, your local computer uses
several protocols to figure out how to communicate with the destination, make a
request to it, and get the response back to you. If the destination is on the
same network as your device, this may involve a connection over a single
network, probably via a network switch or router. On the public internet, this
usually involves multiple switches, routers and networks owned by many different
organizations.
On Earth, accessing websites primarily involves Transmission Control Protocol
(TCP), Internet Protocol (IP) and Hypertext Transfer Protocol (HTTP). TCP/IP
deals with connecting your device and the transmission of data to/from the
destination. When you make a request to a website, TCP/IP deals with opening a
connection, routing the data, and ensuring the data is transmitted correctly.
Before any data is sent, the protocol must open a connection with the
destination. Known as the SYN, SYN-ACK, ACK
this establishes a connection between two devices. Once open, further data can
be transmitted. TCP provides guarantees that data will arrive in order and any
lost data will be re-transmitted. TCP assumes it will receive a rapid response,
and where there is data loss, congestion-control algorithms throttle the
transmission rate.
This is a simplified description, but is sufficient to understand the challenges
with establishing the internet on Mars.
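The request/response cycle described above can be seen in a few lines of socket code. This is a loopback illustration only: the connect() call is where the SYN, SYN-ACK, ACK handshake happens, before any application data is exchanged.

```python
import socket

# A minimal TCP client/server pair on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 = pick any free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))   # SYN, SYN-ACK, ACK happen here
conn, _ = server.accept()

# Only once the connection is open can the request and response flow.
client.sendall(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n")
request = conn.recv(1024)
conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nhello")
response = client.recv(1024)

conn.close(); client.close(); server.close()
```

On Earth the handshake round trips are milliseconds, so we never notice them; the next section shows why the same three packets become the problem between planets.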
The problem with connecting Mars & Earth
Whilst the average
distance between Earth and Mars
is 229 million km / 142 million miles,
it can range from 54.6
million km / 33.9 million miles to 401 million km / 249 million miles. Assuming
that a link between the planets can transmit data at the speed of light, direct
transmission of a single data packet could therefore take between 3-22 minutes
to reach its destination.
If a person on Mars tried to access a service located on Earth, not only would
they need to wait for the requested data to travel from Earth to Mars, but just
completing the 3-step TCP handshake to establish the connection would take
between 9-66 minutes.
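These delays are simple to reproduce. A quick sketch of the calculation, using the minimum and maximum Earth-Mars distances quoted above:

```python
C_KM_PER_S = 299_792.458     # speed of light in vacuum, km/s

MIN_DISTANCE_KM = 54.6e6     # closest approach
MAX_DISTANCE_KM = 401e6      # furthest separation

def one_way_delay_minutes(distance_km: float) -> float:
    """Time for a signal travelling at light speed to cover distance_km."""
    return distance_km / C_KM_PER_S / 60

min_delay = one_way_delay_minutes(MIN_DISTANCE_KM)   # ~3 minutes
max_delay = one_way_delay_minutes(MAX_DISTANCE_KM)   # ~22 minutes

# The TCP three-way handshake needs three one-way trips (SYN, SYN-ACK, ACK)
# before data can flow.
handshake_min = 3 * min_delay                        # ~9 minutes
handshake_max = 3 * max_delay                        # ~67 minutes

print(f"One-way delay: {min_delay:.1f} to {max_delay:.1f} minutes")
print(f"Handshake:     {handshake_min:.1f} to {handshake_max:.1f} minutes")
```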
Further, TCP’s built in congestion control and packet loss detection is not
designed for such long response times. Packets must be acknowledged,
by default in less than 1 second,
otherwise they will be re-transmitted. For Earth/Mars communication, this is
clearly unworkable.

[Figure: Earth — Mars TCP 3-way handshake takes 9-66 mins (3 × 3-22 mins)]
Separate Earth and Mars internet
The obvious solution to the problem of long transmission times from Mars to
Earth is to avoid such long-distance transmissions in the first place! This
would mean setting up “the internet” from scratch on Mars and either replicating
the same services or having Mars-specific services.
The initial Mars missions will be about exploration and setting up the basic
requirements for life, but over time it is reasonable to expect that people
living on Mars would want to access the same internet services in the same way
they are used to on Earth. Whether that is email, search, video streaming, or
gaming, we have an expectation of what we can access online.
Deploying these services on Mars could follow the same approach as on Earth:
installing servers in at least one data center and using the standard Earth
internet protocols to bootstrap the first network. The vast scale of Earth
networks run by the likes of Google and Amazon may come eventually, but the
demand on Mars internet services would grow gradually with the number of
inhabitants.
But what about all the content? It is easy to set up a few network switches and
servers, install Nginx and PostgreSQL then launch some websites, but this would
be like going back to the early 90s on Earth. Very few websites. You could only
email other people on Mars. And of course no videos on MarsTube.
Blockchain, crypto & eventual consistency
Users of relational databases like MySQL and PostgreSQL tend to make use of them
for their ACID properties. Atomicity and
Consistency are important for use cases like finance, but replicating
transactions over high-latency connectivity would be a challenge.
Dealing with transaction latency could be solved by blockchain currencies like
Bitcoin and Ethereum. The design of the blockchain ledger is already distributed
and it takes time for transactions to be confirmed by sufficient members of the
network. This is supposed to be every ~10 minutes but
recent confirmation times
averaged 100 minutes until mid 2021, when they started spiking, sometimes to
multiple hours and even half a day.
If real-time transactions are important,
the scalability of blockchain transaction rates
is a problem. However, the existing banking system is used to experiencing
several days of settlement delay with cheques, transfers and other types of
transaction. If settlement times of several hours are acceptable then deploying
cryptocurrency nodes in a distributed network that covers both Earth and Mars
could be how a Mars-based monetary system is established. The first steps have
already been taken with
Ethereum nodes being launched into space,
although so far just as a secure location for wallet storage.
These types of real-time transactions tend to be necessary only in a small number
of situations. Watching videos, posting comments, reading blogs, sending emails,
etc are all pretty delay-tolerant. Eventual consistency where the data syncs up
eventually may be sufficient.
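As a rough illustration of eventual consistency, here is a toy last-write-wins key-value replica. This is a deliberate simplification, not CouchDB's actual replication algorithm: each write carries a timestamp, and when two replicas finally sync, the newer write for each key wins.

```python
import time

class LWWReplica:
    """Toy last-write-wins replica: each key keeps (timestamp, value);
    merging two replicas keeps whichever write is newer."""
    def __init__(self):
        self.store = {}  # key -> (timestamp, value)

    def put(self, key, value, ts=None):
        self.store[key] = (ts if ts is not None else time.time(), value)

    def get(self, key):
        return self.store[key][1]

    def merge(self, other):
        for key, (ts, value) in other.store.items():
            if key not in self.store or ts > self.store[key][0]:
                self.store[key] = (ts, value)

earth, mars = LWWReplica(), LWWReplica()
earth.put("post:1", "hello from Earth", ts=100)
mars.put("post:2", "hello from Mars", ts=200)

# Hours later, a transmission window opens and the replicas sync both ways.
earth.merge(mars)
mars.merge(earth)
assert earth.get("post:2") == "hello from Mars"
assert mars.get("post:1") == "hello from Earth"
```

The point is that neither side ever waits on a round trip: writes complete locally, and convergence happens whenever connectivity allows.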
Databases such as CouchDB were designed with
specific offline use cases in mind. Distributed
file storage such as IPFS could also provide a solution
because it offers not just eventually consistent replication but also Peer to
Peer communication - once a single IPFS user on Mars has downloaded a copy of
the file, it can be served to other users locally. But how do the files get
there in the first place?
[Figure: NASA's space communication networks — Near Earth Network, Space
Network, Deep Space Network and other orbiting relays; LEO <2,000 km,
GEO ~36,000 km]
NASA already runs several communication networks in space:
Near Space Network: A network of
ground-based antennas for communication with satellites in orbit around the
Earth.
Space Network: A network of ground
stations that communicate with Earth orbiting satellites for tracking and
relay communications, such as for the International Space Station and the Hubble
Space Telescope.
Deep Space Network:
Consists of an array of Earth-based radio antennas (in California, Spain and
Australia) and several orbiting satellites to allow communication with distant
spacecraft.
Back in 2007, a new
Delay-Tolerant Networking Architecture
was proposed by the IETF specifically designed for the Interplanetary Internet.
It assumes such networks may be “occasionally connected” and have frequent
partitions, with deep-space the primary but not exclusive use case.
The proposed DTN architecture sits between the transport and application layers,
and introduces the concept of a “bundle layer”. This new layer is formed of a
number of relay nodes which have persistent storage, and are responsible for the
reliable communication of data between nodes. Unlike TCP/IP, which requires
successful acknowledgment to the source by the destination, acknowledgements in
the DTN are optional. The focus of DTN is the storage-and-forwarding of these
bundles, where the storage can happen for long periods of time, rather than the
routing of much smaller packets as is typical in IP networks.
DTN also differs from TCP/IP by increasing the length of messages to make
the most of connectivity when it becomes available, and by introducing source
based relative service classes so that more urgent bundles can be prioritized.
Different delivery options can be specified by the application using DTN
depending on whether the sender requires bundle delivery reports or more
reliable transfers. The idea is that a bundle is a more useful unit of data
which can be scheduled for transmission with a better understanding of the
application it is supporting.
The DTN RFC uses the postal service as an analogy, where the sender can choose
different delivery options depending on the priority. For example, the sender
might post two items into a post box on the street, where the items are held for
a period of time before being collected. Once in the system, their priority
determines how quickly they are routed to the next step, and ultimately the
destination. The DTN delivery option is similar to postal tracking options,
where there is flexibility to choose between no tracking all the way through to
high-reliability and acknowledgement of successful delivery.
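The store-and-forward model with priority classes might be sketched like this. It is a toy illustration, not a real bundle protocol implementation: a relay node persists bundles in storage and, when a contact window opens, forwards the most urgent ones first.

```python
import heapq

class RelayNode:
    """Toy DTN relay: bundles wait in storage until a contact window
    opens, then leave in priority order (lower number = more urgent)."""
    def __init__(self):
        self._stored = []   # min-heap of (priority, seq, bundle)
        self._seq = 0       # tie-breaker preserving arrival order

    def receive(self, bundle: bytes, priority: int):
        heapq.heappush(self._stored, (priority, self._seq, bundle))
        self._seq += 1

    def contact_window(self, capacity: int):
        """Forward up to `capacity` bundles while the link is available."""
        sent = []
        while self._stored and len(sent) < capacity:
            _, _, bundle = heapq.heappop(self._stored)
            sent.append(bundle)
        return sent

node = RelayNode()
node.receive(b"routine telemetry", priority=2)
node.receive(b"emergency message", priority=0)
node.receive(b"status report", priority=1)

# Only two bundles fit in this contact window; the most urgent go first.
assert node.contact_window(capacity=2) == [b"emergency message", b"status report"]
# The remainder is held in storage until the next window.
assert node.contact_window(capacity=2) == [b"routine telemetry"]
```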
Congestion control remains a challenge for these types of network protocols.
Data volumes are ever increasing, so very large persistent data storage will be
required to maintain capacity to store bundles, and flow control is necessary to
manage this properly.
As described in the RFC:
a DTN node receiving a bundle using TCP/IP might intentionally
slow down its receiving rate by performing read operations less frequently in
order to reduce its offered load. This is possible because TCP provides its own
flow control, so reducing the application data consumption rate could
effectively implement a form of hop-by-hop flow control. Unfortunately, it may
also lead to head-of-line blocking issues, depending on the nature of bundle
multiplexing within a TCP connection.
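The hop-by-hop flow control idea in that excerpt can be modelled abstractly. This is a toy model, not how TCP's window mechanism is actually implemented: each hop has bounded storage and only accepts bundles when it has room, which pushes back on the previous hop.

```python
from collections import deque

class Hop:
    """Toy hop-by-hop flow control: a node with bounded storage that
    only accepts a bundle when it has room."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = deque()

    def can_accept(self) -> bool:
        return len(self.buffer) < self.capacity

    def accept(self, bundle):
        assert self.can_accept()
        self.buffer.append(bundle)

    def forward(self, next_hop) -> int:
        """Drain to the next hop only as fast as it will accept."""
        moved = 0
        while self.buffer and next_hop.can_accept():
            next_hop.accept(self.buffer.popleft())
            moved += 1
        return moved

a, b = Hop(capacity=5), Hop(capacity=2)
for i in range(5):
    a.accept(f"bundle-{i}")

# b only has room for 2 bundles, so only 2 move on;
# the rest stay stored at a until b drains.
assert a.forward(b) == 2
assert len(a.buffer) == 3
```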
Careful consideration of network security will be important because the obvious
attack vector for DTN is flooding the system with very large bundles. Access
control will be needed, and perhaps agreement as to the type of applications
that may be permitted to use the network. For example, video streaming might not
be an appropriate use - a single user streaming at high-resolution could
saturate the capacity - but many users browsing mostly-text would be allowed.
For this reason, DTN features strong security as part of the protocol.
The latest version of the bundle protocol originally defined in
RFC 5050 -
Bundle Protocol 7 - is
in draft, and has reference implementations in various languages including
C and Go.
Very long round trip times
Applications will also need to be refactored so they become delay-tolerant, and
networks may need to become more application-aware so that routing can happen
based on the type of data being requested.
However, even with DTN storing and forwarding data perfectly, the application
will still experience very long round trip times. This could be due to the
simple physics of speed of light transmission over long distances, but it could
also be because the capacity at either end (or both) is fully utilized. Such
limits already exist with the Deep Space Network where resources are limited,
and scheduled far into the future.
This is where the
Licklider Transmission Protocol (LTP)
comes into play.
A reference implementation already
exists which shows how LTP operates over the data link layer (UDP during
development or IP in live environments). LTP has several features, such as no
handshake, options for reliable and unreliable transmission, and unidirectional
communication to avoid contention by waiting for a response.
The physical implementation of DTN and LTP is through the
Interplanetary Overlay Network.
This might be a series of relay nodes operating in the physical space between
the planets, with relay antennas on both planets. Just like how countries and
continents on Earth are connected via under-sea cables, the planets would be
connected through nodes positioned in space. The
Mars Telecommunications Orbiter
was one such node due to come online in 2010, but it was cancelled in 2005.
Node bundle storage
The sheer distance means that communication delays are unavoidable, so it will
be necessary to use these types of protocols which at least allow for
communication between planets over several hours. However, where time is not
especially important, local storage of data with periodic refreshes could be an
option.
Transporting large files to Mars
Rather than connecting the planets directly, an alternative approach would be to
periodically package up the content generated on Earth and physically ship it on
the regular missions planned by the likes of SpaceX. Data would be generated on
both planets, then asynchronously synced up later.
This might look something like the
AWS Snowmobile which allows transferring
up to 100PB of data inside a shipping container. The list pricing of $0.005 / GB
/ month means a 6 month journey would cost $3m USD just in storage fees. At
Elon Musk’s 2004 aspirational goal
of $500 per pound of payload delivered to orbit, the 68,000 pound Snowmobile
container would cost $34m to launch. However, this assumes that all rocket
stages are recoverable.
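The arithmetic behind those figures, using the list prices quoted above:

```python
# Back-of-the-envelope costs for shipping a loaded AWS Snowmobile to Mars.
CAPACITY_GB = 100e6            # 100 PB expressed in GB
STORAGE_PRICE = 0.005          # USD per GB per month (list price)
JOURNEY_MONTHS = 6

storage_fees = CAPACITY_GB * STORAGE_PRICE * JOURNEY_MONTHS
assert storage_fees == 3_000_000          # $3m just in storage fees

LAUNCH_PRICE_PER_LB = 500      # Musk's 2004 aspirational goal, USD
CONTAINER_LB = 68_000          # Snowmobile shipping container weight

launch_cost = LAUNCH_PRICE_PER_LB * CONTAINER_LB
assert launch_cost == 34_000_000          # $34m to launch
```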
Current specs for the
Falcon Heavy suggest a maximum
Mars launch capability of 16,800 kg / 37,040 lb with all stages expendable. This
might mean using AWS Snowball devices
instead, which in storage optimized mode can transport up to 80TB each. Such a
launch would cost more than the $90m list price where stages can be recovered.
No doubt AWS would also charge for the time taken to fill them on Earth, wait
for the launch window, unload the data on Mars and then ship them back
(presumably now filled with content generated on Mars).
Back in 2015,
YouTube was said to generate
around 35PB of data per year, and had at least 400PB of total storage
requirements. If that had just continued to grow at the same rate, the total
storage for YouTube would now have reached over 600PB.
At 22.5 kg each,
a single Falcon Heavy could carry 746 AWS Snowball devices, or about 60PB.
Transporting all of YouTube would therefore require at least 10 Falcon Heavy
flights, at a launch cost of almost $1bn. That doesn’t include the cost of the
Snowball devices themselves.
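A sketch of that payload arithmetic, using the figures above:

```python
import math

PAYLOAD_KG = 16_800          # Falcon Heavy to Mars, all stages expendable
SNOWBALL_KG = 22.5           # per-device weight quoted above
SNOWBALL_TB = 80             # storage-optimized capacity per device

devices_per_flight = int(PAYLOAD_KG / SNOWBALL_KG)
assert devices_per_flight == 746

pb_per_flight = devices_per_flight * SNOWBALL_TB / 1000   # ~60 PB
assert 59 < pb_per_flight < 60

YOUTUBE_PB = 600             # estimated total YouTube storage today
flights = math.ceil(YOUTUBE_PB / pb_per_flight)
assert flights >= 10         # at least 10 Falcon Heavy flights
```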
How will we access the internet on Mars itself? Will the Mars pioneers follow
the same approach of digging up the surface of the planet to bury cables? Maybe.
There are no oceans so we can more efficiently connect regions with redundant
connectivity, compared to
all those sub-sea connections
which regularly break
and are difficult to repair. Or perhaps they will string cables up on poles, as
has been common in Asian cities.
Mobile connections are much more common in regions on Earth where wires are too
expensive or difficult to deploy. Fixed-line networks have the benefit of
low-latency, but in rural areas, and regions with less developed infrastructure,
satellite connectivity could be preferred. Starlink
gives us a glimpse of an alternative approach: launch thousands of low-Mars
orbit satellites and give customers hardware receivers for high-speed
connectivity anywhere on the planet.
Indeed, whether it is just a hidden marketing Easter-egg or not, the current
Starlink Terms of Service
already cover this scenario:
For Services provided to, on, or in orbit around the planet Earth
or the Moon, these Terms and any disputes between us arising out of or related
to these Terms, including disputes regarding arbitrability (“Disputes”) will be
governed by and construed in accordance with the laws of the State of California
in the United States. For Services provided on Mars, or in transit to Mars via
Starship or other spacecraft, the parties recognize Mars as a free planet and
that no Earth-based government has authority or sovereignty over Martian
activities. Accordingly, Disputes will be settled through self-governing
principles, established in good faith, at the time of Martian settlement.
David Mytton is Co-founder & CEO of Console. In 2009, he founded and was CEO of Server Density, a SaaS cloud monitoring startup acquired in 2018 by edge compute and cyber security company, StackPath. He is also researching sustainable computing in the Department of Engineering Science at the University of Oxford, and has been a developer for 15+ years.