Sam Hatfield, PhD student at the University of Oxford, supervised by Tim Palmer at Oxford and Peter Düben at ECMWF.
Through my PhD studies in atmospheric physics at the University of Oxford I developed an interest in supercomputing. I would run experiments with the OpenIFS variant of ECMWF’s global weather forecasting model, IFS, on my department’s local computing cluster. It almost seemed like magic – type a few commands, wait an hour, and then look at the simulated world that you created in all its detail. But while this simulation system was very convenient, it also obscured the details of the underlying computing system. I wanted to know how supercomputers were made, and from that how I, as an Earth-System modeller, might get the most out of them. So I talked with fellow students Josh Dorrington and Milan Klöwer and we decided to build our own.
What does a supercomputer look like?
A supercomputer is really nothing more than many smaller computers, known as “nodes”, connected together with very fast networking cables, known collectively as the “interconnect”. The computers work together in parallel to solve a given problem, such as weather forecasting.
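To make the idea concrete, here is a minimal sketch of how such parallel programs are typically written, using Python and the mpi4py library (an assumption made purely for illustration; IFS itself is a Fortran code). Each process learns its own "rank", works on its own share of a toy problem, and the partial results are combined over the interconnect.

```python
# Minimal parallel program sketch, assuming Python and mpi4py are installed on every node.
# Launch across nodes with something like: mpirun -np 4 --hostfile hosts python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD            # all processes started by mpirun
rank = comm.Get_rank()           # this process's ID
size = comm.Get_size()           # total number of processes

# Each process sums its own slice of the numbers 0..999...
local_sum = sum(range(rank, 1000, size))

# ...and the partial sums are combined over the interconnect.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed a total of {total}")
```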
At ECMWF, there are two Cray XC-40 systems with around 7,000 nodes each. However, each node contains tens of computing cores, so there are really around 100,000 computing units to contend with. In the future, supercomputers are likely to become even more parallel, and this is something that Earth-System modellers must keep in mind when designing future systems.
How did we build a supercomputer?
We wanted to build a portable, self-contained and easy-to-understand system, both so that we could spend minimal effort constructing it and so that we could use it for outreach, to teach the general public about numerical weather prediction. For the computing nodes, we chose the Raspberry Pi, a credit-card-sized computer that costs around GBP 30 and runs a Linux operating system. We purchased four Raspberry Pis and the necessary networking equipment with financial support from the Royal Society, secured by my supervisor at ECMWF, Peter Düben. The entire system came to around GBP 350. We then connected all of the computers together, installed the necessary software, set up a networked file system, and it was ready to go!
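As a flavour of the kind of checks involved in such a setup (a sketch rather than our exact procedure; the directory path is hypothetical), a short script run through MPI can confirm that every node can start a process and can see the shared, NFS-mounted file system:

```python
# Sanity check that every node can run an MPI process and see the shared file system.
# Assumes mpi4py and an NFS-mounted directory; the path below is a hypothetical example.
import os
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

shared_dir = "/home/pi/shared"                        # hypothetical NFS mount point
marker = os.path.join(shared_dir, f"node_{rank}.txt")
with open(marker, "w") as f:
    f.write(socket.gethostname() + "\n")              # record which host this rank ran on

comm.Barrier()                                        # wait until every rank has written its file

if rank == 0:
    print("Files visible from rank 0:", sorted(os.listdir(shared_dir)))
```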
The Raspberry PiPS (Raspberry Pi Planet Simulator) system.
For the weather model, we chose OpenIFS as we knew that it had good user support through ECMWF and also efficient multi-node parallelisation. After brainstorming for some time, we eventually settled on a name for the system: Raspberry PiPS (Raspberry Pi Planet Simulator). There was some debate as to whether raspberries actually have pips, but the name sounded catchy so we stuck with it.
What can it do?
We were very impressed by the performance of the system. Our goal was to run simulations at T42 resolution (around 310 km grid spacing). In fact, with help from Glenn Carver at ECMWF, we were able to run at up to T95 resolution, or around 130 km. Furthermore, the system could simulate one hour of the global weather in around 3 seconds, which meant we could display the output from the model in real time.
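To put that "real time" figure in perspective, here is a quick back-of-the-envelope calculation based on the numbers above (not a formal benchmark):

```python
# Back-of-the-envelope speed calculation from the figures quoted above.
seconds_per_model_hour = 3                           # wall-clock time per simulated hour
print(f"{3600 / seconds_per_model_hour:.0f}x faster than real time")                   # 1200x

seconds_per_model_day = 24 * seconds_per_model_hour
print(f"One simulated day takes {seconds_per_model_day} s")                            # 72 s
print(f"A 10-day forecast takes about {10 * seconds_per_model_day / 60:.0f} minutes")  # 12 min
```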
How does it compare with historical systems at ECMWF?
Supercomputers have evolved considerably since ECMWF was founded, so it is difficult to compare Raspberry PiPS directly with previous systems. However, in terms of performance Raspberry PiPS is about on a par with the Cray Y-MP, installed in 1989. Both systems have a peak performance of a few billion FLOP/s (floating-point operations per second). However, whereas the Cray Y-MP consumed about 300,000 W of power, Raspberry PiPS consumes around 50 W, 6,000 times lower. For this remarkable development we can thank Moore's Law – the observation that computer hardware manufacturers are able to fit twice as many transistors on a chip every two years or so.
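The arithmetic behind that comparison is simple enough to check directly (taking the figures quoted above, and assuming a two-year doubling time over the roughly 30 years separating the two systems):

```python
# Rough check of the power comparison and of what Moore's Law would predict.
cray_power_w = 300_000                 # approximate power draw of the Cray Y-MP
pips_power_w = 50                      # approximate power draw of Raspberry PiPS
print(f"Power ratio: {cray_power_w / pips_power_w:.0f}x")                          # 6000x

years = 2019 - 1989                    # roughly the interval between the two systems
doublings = years / 2                  # one doubling every ~2 years
print(f"Transistor density increase over {years} years: ~{2 ** doublings:.0f}x")   # ~32768x
```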
What have we used it for?
We mounted Raspberry PiPS on a globe, built and painstakingly hand-painted by Milan. We decided to place the individual Raspberry Pi nodes around the globe to convey the notion of domain decomposition (that each node is responsible for computing the weather in a different part of the globe). We then took Raspberry PiPS to the ATOM Festival Science Market in Abingdon to show to the general public. We found the attendees to be very curious, and we were impressed by how quickly they were able to grasp the basic ideas behind numerical weather prediction.
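As an illustration of what domain decomposition means in practice, here is a minimal sketch in Python with mpi4py: the latitude bands of a global field are split across the processes, each of which updates only its own band, and the full field is gathered only when output is needed. OpenIFS's actual decomposition (and its spectral transforms) is considerably more involved; this is just the cartoon version we tried to convey with the globe.

```python
# Cartoon of domain decomposition over latitude bands, assuming mpi4py and NumPy.
# OpenIFS's real decomposition is more sophisticated; this only illustrates the idea.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nlat, nlon = 64, 128                                   # a coarse global grid
my_lats = np.array_split(np.arange(nlat), size)[rank]  # this node's latitude band

# Each node holds and updates only its own band of the global field.
local_field = np.full((len(my_lats), nlon), 288.0)         # a uniform 288 K "atmosphere"
local_field += 0.1 * np.random.randn(*local_field.shape)   # this node's share of the work

# The full field is gathered only when needed, e.g. to write output from rank 0.
pieces = comm.gather(local_field, root=0)
if rank == 0:
    global_field = np.vstack(pieces)
    print("Global-mean temperature:", round(global_field.mean(), 2), "K")
```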
Josh Dorrington (left) and Sam Hatfield (right) presenting Raspberry PiPS at the ATOM Festival Science Market in Abingdon, 23 March 2019.
It is a testament to the advances in computing that we were able to build a tabletop system, on a very small budget, that can compete with the world's fastest supercomputers of only 30 years ago. We found Raspberry Pis to be more than adequate for running a realistic simulation of the atmosphere, and we hope that our results will encourage others to consider them as an outreach tool for other aspects of Earth-System simulation in the future.