How can we make the best use of partitioning software to distribute workloads among the compute nodes of a supercomputer?
Sequential partitioning software such as PaToH and Zoltan has been used for many years. These tools find good partitions by minimizing the size of the cut between parts while keeping the parts roughly equal in size, which translates directly into balanced workloads in large parallel systems such as supercomputers. For large workloads, however, obtaining a good partition can be very slow, since sequential codes can no longer keep up. Recent advances in parallel partitioners make it possible to obtain solutions much faster, and to partition graphs with billions or even trillions of vertices in reasonable time.
The goal of this thesis is to benchmark modern parallel partitioners on large problems on the eX3 experimental hardware platform located at Simula, and to discover the limits of the new software.
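To make the objective concrete, here is a minimal sketch in Python of the two quantities a partitioner trades off: the edge cut (communication between compute nodes) and the load imbalance (uneven work per node). The graph, the function names, and the hand-picked partition are illustrative only; real tools such as PaToH and Zoltan use far more sophisticated multilevel heuristics.

```python
# Toy illustration of the graph-partitioning objective: split vertices into
# k parts, minimizing the number of edges cut while keeping parts balanced.
# Names and the example graph are illustrative, not taken from any real tool.

def edge_cut(edges, part):
    """Count edges whose endpoints land in different parts."""
    return sum(1 for u, v in edges if part[u] != part[v])

def imbalance(part, k):
    """Largest part size divided by the ideal (perfectly balanced) size."""
    sizes = [0] * k
    for p in part.values():
        sizes[p] += 1
    return max(sizes) / (len(part) / k)

# A 6-vertex graph: two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

# A good 2-way partition separates the triangles: only the bridge is cut,
# and both parts hold exactly half of the vertices.
good = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(edge_cut(edges, good))   # 1
print(imbalance(good, 2))      # 1.0
```

Finding the partition that minimizes the cut under a balance constraint is NP-hard in general, which is why practical partitioners rely on heuristics, and why parallelizing those heuristics well is a research problem in its own right.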
- Some C/C++ programming experience
- Experience with Bash or Python
- Understanding of combinatorial algorithms