Native MPI implementation for PCI Express

Message Passing Interface (MPI) is a standardized and portable message-passing system designed by researchers from academia and industry to function on a wide variety of parallel computers.

MPI is used in small and large clusters and supercomputers alike, and it supports both multiprocessor environments and heterogeneous architectures with GPUs and other accelerators. Several MPI implementations today offer support for transport plugins.

This master project is available for one or two students, and the tasks would be to:

  • Select one or two open-source MPI libraries.
  • Create basic transport using standard PIO and RDMA functionality offered by Dolphin's SISCI API.
  • Optimize collective operations by using PCI Express multicast functionality.
  • Integrate and test with CUDA (Nvidia GPUDirect) using PCI Express peer-to-peer functionality.
  • Collaborate with the MPI open-source development team to submit the results upstream.
  • Benchmark and analyze the results (see the sketch after this list).
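To make the benchmarking task concrete, a micro-benchmark along the lines of the sketch below could be used to compare collectives backed by PCI Express multicast against the library's default algorithms. This is only an illustration: the message size, iteration count, and output format are arbitrary choices, not part of the project description.

    /* Minimal MPI_Bcast latency micro-benchmark (sketch). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        const int iters = 1000;      /* arbitrary iteration count */
        const int size  = 4096;      /* arbitrary message size in bytes */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char *buf = malloc(size);

        /* Warm-up round so connection setup is not measured. */
        MPI_Bcast(buf, size, MPI_CHAR, 0, MPI_COMM_WORLD);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++)
            MPI_Bcast(buf, size, MPI_CHAR, 0, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg broadcast latency: %.2f us\n",
                   (t1 - t0) / iters * 1e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }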

Goal

Implement a PCI Express transport for a selected open source MPI library.
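As a rough illustration of what an eager send path over PCI Express programmed I/O (PIO) might look like, the sketch below copies a message directly into a remote ring buffer that has been mapped into the local address space. The names pio_ring_t, map_remote_segment(), and pio_eager_send() are placeholders invented for this sketch; a real transport would use the SISCI API for segment mapping, handle ring wrap-around and flow control, and insert the required memory barriers.

    /* Hypothetical sketch of a PIO-based eager send over a remote ring
     * buffer mapped across PCI Express. Not an actual SISCI or MPI API. */
    #include <stdint.h>
    #include <string.h>

    typedef struct {
        volatile uint64_t head;        /* written by the sender   */
        volatile uint64_t tail;        /* written by the receiver */
        volatile uint8_t  data[4096];  /* message payload area    */
    } pio_ring_t;

    /* Placeholder: returns a pointer to the peer's segment, assumed to be
     * already mapped over PCI Express (e.g., via the SISCI API). */
    extern pio_ring_t *map_remote_segment(int peer_rank);

    static int pio_eager_send(int peer, const void *buf, size_t len)
    {
        pio_ring_t *ring = map_remote_segment(peer);
        if (ring == NULL || len > sizeof(ring->data))
            return -1;  /* too large: fall back to an RDMA/rendezvous path */

        /* Copy the payload directly into remote memory (programmed I/O).
         * Wrap-around handling is omitted in this sketch. */
        memcpy((void *)ring->data, buf, len);

        /* A write barrier would normally be needed here so the payload is
         * globally visible before the head update (omitted in this sketch). */
        ring->head = ring->head + len;
        return 0;
    }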

Learning outcome

In-depth knowledge of how to distribute and optimize highly parallel workloads such as machine learning in modern heterogeneous computing systems.

Qualifications

Good understanding of C and/or C++ programming. The student should have completed INF3151 or an equivalent course.

Supervisors

  • Håkon Kvale Stensland
  • Pål Halvorsen, Simula Metropolitan
  • Jonas Markussen, Dolphin Interconnect Solutions
  • Hugo Kohmann, Dolphin Interconnect Solutions

Collaboration partners

Dolphin Interconnect Solutions

Contact person