High-Performance Computing
The field of high-performance computing uses massively parallel and distributed computing approaches in combination with accelerators to solve large-scale problems with the help of supercomputers. High-performance computing is used for running large weather or climate models, as well as for pre-processing datasets, training machine learning models, and enabling users to query large language models, a process called inference.
The mission of the Department of High-Performance Computing (HPC) is to enable high-performance computations in both the traditional computational sciences and the resurgent field of machine learning and artificial intelligence.
Our work requires detailed knowledge at all levels of operation: communication over the so-called interconnects, the specialised programming of accelerators, the efficient distribution of workloads for massively parallel execution, and effective monitoring to understand performance and uncover potential improvements in job execution.
The HPC team is an enabler for various types of research, and helps researchers across different branches of science adopt the latest computing platforms to address their challenging scientific questions in the best possible way. Our team aims to provide user-friendly access to large computing resources, and finds ways of using modern hardware platforms as efficiently as possible.

Focus areas
"Dealing with supercomputers can be extremely challenging when trying to maximise efficiency. We thus have to make it as easy as possible for computational scientists and other users of supercomputers to use the available resources in the best possible way. That can mean introducing seemingly small improvements that have a huge impact."
Dr.-Ing. Thomas Roehr (Head of Department)
Lower the user threshold
High-Performance Computing team members investigate the best ways of using and programming the latest available computing hardware, and cast this knowledge either into best practices or, better still, into user-friendly software tools. Some of these tools take the form of automated code generators, that is, computer programs that themselves write software code based on inputs from the users.
Ideally, a scientist operating a high-performance computing system does not need to delve into the details and specifics of individual hardware devices. Instead, the scientist can rely on an intermediate layer that translates requirements into specialised code or configuration, so that experiments run efficiently on the latest hardware.
For that purpose, the HPC department is concerned with learning about and from users' needs, and with casting that knowledge into practical improvements for more effective use of our computing systems.
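
To make the idea of such a code generator concrete, below is a minimal, hypothetical sketch in Python. It is not a tool of the department; the function name, its parameters, and the generated AXPY-style kernel are illustrative assumptions. It shows only how a high-level user input (precision and an unroll factor) can be turned into specialised low-level code.

# Minimal, hypothetical sketch of a code generator. The user states only a
# precision and an unroll factor; the generator emits specialised C source
# for an AXPY-style loop (y[i] += a * x[i]). All names are illustrative.

def generate_axpy_kernel(dtype: str, unroll: int = 4) -> str:
    """Return C source for y[i] += a * x[i], unrolled `unroll` times."""
    body = "\n".join(
        f"        y[i + {k}] += a * x[i + {k}];" for k in range(unroll)
    )
    return f"""\
void axpy({dtype} a, const {dtype} *x, {dtype} *y, long n) {{
    long i;
    for (i = 0; i + {unroll} <= n; i += {unroll}) {{
{body}
    }}
    for (; i < n; i++)  /* remainder elements */
        y[i] += a * x[i];
}}
"""

if __name__ == "__main__":
    # The user only chooses precision and unroll factor; the specialised
    # loop structure is produced automatically.
    print(generate_axpy_kernel("double", unroll=4))

In a real tool, the same principle extends much further: the generator can select data layouts, vector widths, or accelerator back-ends on the user's behalf.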
Optimize computational performance
Overall optimal performance is the result of optimization at multiple levels.
Hence, we develop software tools and algorithms to maximise the potential of modern computing platforms. We achieve this by devising hardware-adapted or even hardware-inspired numerical algorithms, creating performance-enhancing data structures, and investigating software, middleware and hardware optimization techniques for the latest hardware platforms.
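
As one small illustration of what a performance-enhancing data structure can look like, the following sketch (assuming Python with NumPy; timings vary by machine, and the example is ours, not a specific tool of the department) contrasts an array-of-structures layout with a structure-of-arrays layout. The latter keeps each field contiguous in memory, which suits vector units on CPUs and coalesced memory access on accelerators.

import time
import numpy as np

n = 10_000_000

# Array-of-structures (AoS): the fields of each record are interleaved,
# so reading one field means striding through memory.
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8")])

# Structure-of-arrays (SoA): each field is contiguous in memory.
x = np.zeros(n); y = np.zeros(n); z = np.zeros(n)

t0 = time.perf_counter()
norm_aos = np.sqrt(aos["x"]**2 + aos["y"]**2 + aos["z"]**2)
t1 = time.perf_counter()
norm_soa = np.sqrt(x**2 + y**2 + z**2)
t2 = time.perf_counter()

print(f"AoS: {t1 - t0:.3f} s   SoA: {t2 - t1:.3f} s")

On most systems the contiguous layout is noticeably faster for this kind of field-wise computation, precisely because it matches how the memory hierarchy is built.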
Efficiency in real-world applications
Achieving optimal results may require different approaches, depending on the application domain. While we can offer general optimization techniques, our team also works side by side with domain scientists to ensure a smooth transition from small-scale academic tests to full-scale applications that effectively use the hardware resources of modern supercomputers.
We thus enable users to understand their own workload requirements better, so that we can provide the best-suited computing resources to help solve their problems. A simple starting point for such an assessment is sketched below.
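
Before requesting resources, a user can measure the runtime and memory footprint of one representative iteration of their application. The following is a minimal sketch, assuming Python with NumPy; the workload function is a placeholder for the user's own kernel, and a full assessment would of course also consider I/O, communication, and scaling behaviour.

import time
import tracemalloc
import numpy as np

def representative_workload():
    # Placeholder for one iteration / time step of the real application.
    a = np.random.rand(1500, 1500)
    return np.linalg.eigvalsh(a @ a.T)

tracemalloc.start()  # recent NumPy versions report their allocations here too
t0 = time.perf_counter()
representative_workload()
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"one iteration: {elapsed:.2f} s, "
      f"peak traced memory: {peak / 2**20:.1f} MiB")
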
Exascale computing preparation
The Department of HPC operates a national e-infrastructure named eX3 (Experimental Infrastructure for Exploration of Exascale Computing), an extremely heterogeneous cluster of cutting-edge hardware components. Its purpose is to provide Norwegian researchers and their international collaborators with a unique testbed to experiment with various hardware technologies and the related software tools. This infrastructure is essential for understanding how to use these technologies effectively and for adopting them in the researchers' own fields.
In this way, the Norwegian research community will be better prepared for the upcoming era of exascale computing, and able to use the available resources effectively and efficiently.