High-Performance Computing

The mission of the Department of High-Performance Computing (HPC) is to enable high-performance computation both in the traditional computational sciences and in the resurgent field of machine learning and artificial intelligence.

The HPC team helps researchers from different branches of science adopt the latest computing platforms to address their challenging scientific questions. The team develops user-friendly programming tools and finds ways to use modern hardware platforms as efficiently as possible.

Department head

Xing Cai
Professor, Chief Research Scientist, Head of department

Focus areas

We are privileged to be able to work as a bridge between computational scientists and intimidating, fast-changing supercomputers.

Prof. Xing Cai, head of the HPC department

Lower the user threshold

High-Performance Computing team members investigate the best ways of programming the latest computing hardware and then package these methods as user-friendly software tools. Some of these tools take the form of automated code generators – that is, programs that write software code themselves based on input from the users. This allows scientists without training in advanced programming to easily translate their computational tasks into software code that runs efficiently on the latest hardware.
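
As a rough illustration of the code-generation idea, the sketch below uses the legacy FEniCS (dolfin) Python API – FEniCS appears in the publication list further down – to solve a Poisson problem: the user states the variational form in near-mathematical notation, and the FEniCS form compiler generates the low-level assembly code behind the scenes. The mesh size, boundary condition and right-hand side are arbitrary choices made for this illustration, not taken from the department's own projects.

```python
from fenics import *  # legacy FEniCS/dolfin API; assumed to be installed

# The user only states the mathematics: -div(grad(u)) = f on the unit square,
# with u = 0 on the boundary. The FEniCS form compiler turns this high-level
# description into efficient low-level element-assembly code automatically.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

bc = DirichletBC(V, Constant(0.0), "on_boundary")

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

a = dot(grad(u), grad(v)) * dx   # bilinear form, written like the math
L = f * v * dx                   # linear form (right-hand side)

u_h = Function(V)
solve(a == L, u_h, bc)           # generated code assembles and solves the system
```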

Optimize computational performance

During the development of software tools, the team experiments with strategies to maximise the potential of modern computing platforms. This is achieved by devising hardware-adapted or even hardware-inspired numerical algorithms, creating performance-enhancing data structures, and investigating software, middleware and hardware optimisation techniques for the latest hardware platforms.
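
To make "performance-enhancing data structures" concrete, here is a minimal illustrative sketch (not the team's actual code) of the Compressed Sparse Row (CSR) layout, the kind of structure studied in the sparse-matrix publications listed below: storing each row's nonzeros contiguously lets a sparse matrix-vector product stream through memory instead of chasing pointers.

```python
import numpy as np

def csr_spmv(row_ptr, col_idx, values, x):
    """Compute y = A @ x for a matrix A stored in Compressed Sparse Row form.

    The three flat arrays keep each row's nonzeros contiguous in memory,
    which is what makes the layout friendly to caches and vector units.
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# A small 3x3 example matrix:
#     [2 0 1]
# A = [0 3 0]
#     [4 0 5]
row_ptr = np.array([0, 2, 3, 5])              # where each row starts in values/col_idx
col_idx = np.array([0, 2, 1, 0, 2])           # column index of each nonzero
values  = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
x = np.ones(3)

print(csr_spmv(row_ptr, col_idx, values, x))  # -> [3. 3. 9.]
```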

Efficiency in real-world applications

The team works side by side with domain scientists to ensure a smooth transition from small-scale academic tests to full-scale applications that will effectively use the hardware resources on modern supercomputers.

Exascale computing preparation

The Department of HPC operates a national e-infrastructure named eX3 (Experimental Infrastructure for Exploration of Exascale Computing), an extremely heterogeneous cluster of cutting-edge hardware components. It gives Norwegian researchers and their international collaborators a unique testbed for experimenting with various hardware technologies and the related software tools, with a view to adopting these in their own research fields. In this way, the Norwegian research community will be better prepared for the upcoming era of exascale computing.

Learn more about eX3 →

Feature: Preparing Norway for the next generation supercomputers

Published: 16.6.2020

Key partners

People in HPC

Hanna Borgli
Affiliated PhD student

Are Magnus Bruaset
Professor, Research Director

Luk Burchard
PhD student

Xing Cai
Professor, Chief Research Scientist, Head of department

Anne Fouilloux
Senior Research Engineer

Ernst Gunnar Gran
Adjunct Senior Research Scientist

Joachim Berdal Haga
Chief Research Engineer

Masoud Hemmatpour
Adjunct Research Scientist

Johannes Langguth
Senior Research Scientist

Tore Heide Larsen
Chief Research Engineer

Asep Maulana
Postdoctoral Fellow

Thomas Roehr
Senior Research Engineer

Tor Skeie
Professor, Adjunct Chief Research Scientist

Håkon Kvale Stensland
Senior Research Scientist

Erik Sæternes
PhD student

Andreas Thune
PhD student

James D Trotter
Postdoctoral Fellow

Publications

J. D. Trotter, J. Langguth and X. Cai

Targeting performance and user-friendliness: GPU-accelerated finite element computation with automated code generation in FEniCS

Parallel Computing

O. Mirmotahari, R. N. Islam, Y. Berg, C. Foss and H. K. Stensland

Spillifisering for økt engasjement, fleksibilitet og alternativt læringsløp (Gamification for increased engagement, flexibility and an alternative learning path)

Norwegian Conference for Education and Didactics in IT subjects (UDIT)

L. Burchard, K. G. Hustad, J. Langguth and X. Cai

Enabling unstructured-mesh computation on massively tiled AI processors: An example of accelerating in silico cardiac simulation

Frontiers in Physics

A. Thune, S. Reinemo, T. Skeie and X. Cai

Detailed Modeling of Heterogeneous and Contention-Constrained Point-to-Point MPI Communication

IEEE Transactions on Parallel and Distributed Systems

J. D. Trotter, S. Ekmekçibaşı, J. Langguth, T. Torun, E. Düzakın, A. Ilic and D. Unat

Bringing Order to Sparsity: A Sparse Matrix Reordering Study on Multicore CPUs

SC '23: International Conference for High Performance Computing, Networking, Storage and Analysis

J. Markussen

SmartIO: Device sharing and memory disaggregation in PCIe clusters using non-transparent bridging

The University of Oslo

J. D. Trotter, X. Cai and S. W. Funke

On memory traffic and optimisations for low-order finite element assembly algorithms on multi-core CPUs

ACM Transactions on Mathematical Software

J. Langguth and L. Burchard

ML Accelerator Hardware: A Model for Parallel Sparse Computations?

SIAM ACDA, Aussois, France

J. Langguth and L. Burchard

ML Accelerator Hardware: A Model for Parallel Sparse Computations?

University of Vienna, Austria

J. Moe, K. Pogorelov, D. T. Schroeder and J. Langguth

Implementing Spatio-Temporal Graph Convolutional Networks on Graphcore IPUs

2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)

H. Straume, F. Asche, A. Oglend, E. B. Abrahamsen, A. M. Birkenbach, J. Langguth, G. Lanquepin and K. H. Roll

Impacts of Covid-19 on Norwegian salmon exports: A firm-level analysis

Aquaculture

R. Kundel, L. Anderweit, J. Markussen, C. Griwodz, O. Abboud, B. Becker and T. Meuser

Host Bypassing: Let your GPU speak Ethernet

IEEE 8th International Conference on Network Softwarization (NetSoft)