Feature: Preparing Norway for the next generation of supercomputers

You may not know what supercomputers are, but you use them every day.

Supercomputers. Photo: Henrique Kugler.

Whenever you check the weather forecast, for instance, you are indirectly using one. The cybersecurity measures that keep our bank accounts safe; the development of more reliable technologies for cancer diagnosis; and even the possibility of tweaking the design of an aircraft so it can cruise the skies faster while burning less fuel… These are a few examples of what scientists can achieve when aided by supercomputers. Many of today’s technological advancements would be impossible without the processing power of such impressive machines.

There is one problem, though. Even the fastest supercomputers available to date are still not powerful enough to tackle the increasing complexity of the scientific endeavours ahead of us. That’s why Norwegian researchers are excited to share the good news: at Simula Research Laboratory, in Oslo, a team of experts is building an experimental test bed that will enable them to understand and evaluate the most promising technologies that are likely to be used in the next generation of supercomputers. Nicknamed eX3, the Experimental Infrastructure for Exploration of Exascale Computing is a cutting-edge initiative that has been attracting attention in Europe and beyond.

“We believe the eX3 infrastructure promises truly groundbreaking research results,” says Kyrre Lekve, Simula’s Deputy Managing Director. “With this new infrastructure, we can experiment with hardware and software, and even the man-machine interaction, in a way that has not been done in Norway before, and rarely in the rest of the world,” he adds.

“It’s a very unusual set-up,” says Are Magnus Bruaset, Simula’s Research Director for Software Engineering and HPC. “We can define the eX3 as a collection of different technologies connected together in a way that contributes to the development and understanding of so-called exascale computing.” To put it informally, it’s a playground where researchers in the field of High-Performance Computing (HPC) can test the most promising technology trends that are likely to be dominant in coming years and decades.

“No one knows for sure what future supercomputers will look like; whatever comes next, we want to be ready for it,” comments Xing Cai, an expert in scientific computing who currently leads Simula’s HPC Department and has special responsibility for the eX3 research. “A recent user satisfaction survey demonstrates the need for an infrastructure like eX3 in Norway, and also shows that the users are very satisfied with what we offer,” Bruaset adds.

Professor Are Magnus Bruaset (left) and Research Professor Xing Cai (right). Photo: Henrique Kugler.

But what is exascale computing? The concept may sound a little intimidating. Traditionally, we measure the performance of computing systems in terms of FLOPS, which stands for “floating-point operations per second”. Don’t worry if you don’t know what that means. In short: the higher this number, the faster the computer. The most powerful supercomputers in existence nowadays operate at the petascale. In other words, they are capable of performing, over the course of a single second, more than 10¹⁵ calculations – that is, over a quadrillion FLOPS (for the sake of comparison, that’s more than a million times the computational power of an average laptop). The next step in this daunting scientific quest is to achieve exascale – that is, we aim for the development of machines that can perform no less than 10¹⁸ calculations per second, a thousand times as many. Just imagine the number 1, followed by 18 zeros: 1,000,000,000,000,000,000. Achieving such computing capacity is certainly not a trivial challenge.
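To get a feel for these magnitudes, here is a minimal back-of-the-envelope sketch in Python. It simply divides a fixed workload of 10¹⁸ floating-point operations by an assumed speed for each machine; the laptop figure of roughly 10¹¹ FLOPS is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope comparison: how long does it take to perform
# 10**18 floating-point operations (the work an exascale machine does
# in roughly one second) on different systems?
# The laptop figure (~1e11 FLOPS) is an assumed ballpark, not a benchmark.

WORKLOAD = 1e18  # floating-point operations

systems = {
    "average laptop (assumed ~1e11 FLOPS)": 1e11,
    "petascale supercomputer (1e15 FLOPS)": 1e15,
    "exascale supercomputer (1e18 FLOPS)": 1e18,
}

for name, flops in systems.items():
    seconds = WORKLOAD / flops
    days = seconds / 86_400
    print(f"{name}: {seconds:,.0f} s (about {days:,.1f} days)")
```

Under these assumptions, the laptop would grind away for nearly four months on a job the exascale machine finishes in a single second.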

Trending topics

There are at least three main trends on the road to the exascale paradigm. The first one is the idea of squeezing more processing power into the same central processing units (CPUs) that we have been using for decades – for instance, by packing in more and more processing cores that work in parallel. To illustrate this trend with a simple analogy: if supercomputers were human beings, it would be like enhancing the thinking capacity of our brains by growing a large number of additional brain cells.

The second trend involves reengineering hardware elements that were not originally constructed to be main processing units. Back to the same analogy, it would be like reprogramming another part of our body – say, the spinal cord – to carry out functions assigned to the brain. Computer designers have used, for example, a piece of hardware called a graphics processing unit (GPU). Also known as a graphics card, this component was not originally developed to do the ‘thinking’ behind the scenes, as a CPU normally does. But researchers have shown that, in fact, the core of a graphics card can quite efficiently act as a processing unit as well. Bringing GPUs to centre stage, turning them into protagonists in the universe of supercomputing, is something that has kept scientists busy in recent years.
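As a purely illustrative sketch of this idea – assuming a machine with a CUDA-capable graphics card and the optional CuPy library, neither of which is part of the eX3 story told here – the snippet below runs the same bulk arithmetic once on the CPU with NumPy and once on the GPU with CuPy:

```python
# Illustrative only: offloading a bulk arithmetic job from the CPU to a GPU.
# Assumes a CUDA-capable GPU and the optional CuPy package; timings are
# indicative only (the first GPU call includes one-off kernel compilation).
import time

import numpy as np
import cupy as cp  # GPU-backed counterpart to NumPy

N = 50_000_000

# CPU version: NumPy runs on the processor cores.
x_cpu = np.random.rand(N).astype(np.float32)
t0 = time.perf_counter()
result_cpu = np.sqrt(x_cpu).sum()
cpu_time = time.perf_counter() - t0

# GPU version: the same operations, executed on the graphics card.
x_gpu = cp.asarray(x_cpu)          # copy the data to GPU memory
t0 = time.perf_counter()
result_gpu = cp.sqrt(x_gpu).sum()
cp.cuda.Stream.null.synchronize()  # wait until the GPU has finished
gpu_time = time.perf_counter() - t0

print(f"CPU: {cpu_time:.3f} s, GPU: {gpu_time:.3f} s")
# Loose tolerance: float32 rounding differs slightly between devices.
print(f"Results agree: {np.isclose(result_cpu, float(result_gpu), rtol=1e-3)}")
```

The explicit copy into GPU memory and the final synchronisation hint at why programming such machines is a research topic in itself: the arithmetic is easy to move, the data less so.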

A third promising trend entails approaching the challenge from an entirely new perspective: designing new processing units from scratch, tailored for specific purposes. Using our analogy, it would be as if we could create an upgraded version of our brain, that is, a brand new organ that could handle certain types of information and assessments in a completely novel manner. Right now, there’s a whole new ecosystem of companies and startups invested in these new technologies, and the eX3 consortium is in close dialogue with several of these.

The question is: which of these trends will be the predominant force behind the supercomputers of tomorrow? “It’s hard to say,” answers Xing. “That’s why we should prepare for different scenarios; and that’s what the eX3 infrastructure allows us to do.” After all, concentrating research efforts on a single trend would be the equivalent of putting all our eggs in one basket. When the technology for exascale computing is finally out there, probably coming from the United States or China, the Norwegian scientific community won’t be caught by surprise.

The geopolitics of supercomputing

Europe lags behind. “Industry in the EU currently consumes over 33% of supercomputing resources worldwide, but supplies only 5% of them,” according to the European Commission. A 2018 report stated that “the world's best-performing supercomputers are not located in Europe, and those that are in Europe depend on non-European technology; the European HPC technology supply chain is still weak and the integration of European technologies into operational HPC machines remains insignificant.”

So far, the leaders in the realm of supercomputing are the United States and China, and, to some extent, Japan. “For the European community, this may be seen as a potential threat in terms of scientific and industrial development,” reasons Are Magnus.

Indeed, European nations are already trying to catch up. They officially launched the EuroHPC Joint Undertaking in 2018 – an initiative that involves 31 countries, including Norway, and is expected to make world-class supercomputing flourish in European territory. One of EuroHPC’s next steps is to install a new supercomputer in Finland by the end of 2020, a system that should rank among the top five in the world. In the meantime, the eX3 platform will be a key resource for ensuring that these efforts move in the right direction.

Greedy for power

Why do we need so much computing power anyway? Are Magnus gives us a didactic explanation. Any phenomenon governed by the laws of physics can be translated into mathematical equations: magic happens when these equations are further translated into simpler numerical relations that can be converted into computational processes. By doing so, scientists can create computer models and simulations capable of digitally replicating complex physical processes – opening up countless possibilities of speculation, analysis and even prediction.
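A minimal sketch of that pipeline, using the one-dimensional heat equation as a stand-in for a phenomenon governed by the laws of physics (a toy example for illustration, not a model actually run on eX3):

```python
# Minimal sketch: turning a physical law into arithmetic a computer can repeat.
# The 1D heat equation du/dt = alpha * d2u/dx2 becomes a simple update rule
# applied over and over to a grid of numbers.
import numpy as np

nx, nsteps = 100, 500        # grid points and time steps
alpha, dx = 1.0, 1.0 / nx    # diffusivity and grid spacing
dt = 0.4 * dx**2 / alpha     # time step chosen small enough to stay stable

u = np.zeros(nx)
u[nx // 2] = 100.0           # start with a single hot spot in the middle

for _ in range(nsteps):
    # Each point moves toward the average of its neighbours: that is diffusion.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(f"Peak temperature after {nsteps} steps: {u.max():.2f}")
```

Real simulations of the weather, the climate or the human heart follow the same recipe, only in three dimensions, with far more complicated equations and billions of grid points – which is exactly where the supercomputers come in.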

The weather forecast is the clearest example in our daily lives. Simulating atmospheric conditions requires large amounts of data as well as sheer computational power, and the simulations must give us results that are not only accurate, but also fast. Climate change is another crucial example. Nowadays, climatologists are heavily reliant on sophisticated computing technologies. “If we want more accuracy in our climate models, the computational power we have today is definitely not enough,” remarks Xing.

The same is true for research in the field of medicine, which aims for faster and better diagnostics as well as patient-specific treatments. At Simula, the eX3 infrastructure has allowed researchers to considerably speed up the automated analysis of medical imagery. As a result, they are now able to design technologies that will lead to faster and more accurate colonoscopy examinations – which may be key to preventing cases of colorectal cancer. The eX3 power has also facilitated the development of state-of-the-art simulations of the human heart’s electrical activity.

But it’s not only about scientific research. Think of mobile phones or other gadgets our digital economy is becoming increasingly dependent on. From financial transactions to geolocation services, there is a lot of information that needs to be processed all the time in virtually all industries and sectors. How much data are we actually sharing, consuming and generating every single second? Processing such vast quantities of information requires computing capabilities that are still beyond our reach.

In fact, Xing argues that even exascale supercomputers will still be insufficient. There’s no end in sight to this search for ever-increasing computing power. In the eX3 context, this is the rationale for a new proposal to the Research Council of Norway, due in November this year, which aims to give Norwegian HPC research access to the technology frontier for several years to come.

This feature article was commissioned by Simula and written by Henrique Kugler.