|Authors|J. Langguth, H. Arevalo, K. G. Hustad and X. Cai|
|Title|Towards Detailed Real-Time Simulations of Cardiac Arrhythmia|
|Project(s)|Meeting Exascale Computing with Source-to-Source Compilers|
|Publication Type|Proceedings, refereed|
|Year of Publication|2019|
|Conference Name|Computing in Cardiology|
Recent advances in personalized arrhythmia risk prediction show that computational models can provide not only safer but also more accurate results than invasive procedures. However, biophysically accurate simulations require solving linear systems over fine meshes and time resolutions, which can take hours or even days. This limits the use of such simulations in the clinic, where diagnosis and treatment planning can be time sensitive, if only because of operating schedules. Furthermore, the non-interactive, non-intuitive way of accessing simulations and their results makes them hard to study collaboratively. Overcoming these limitations requires speeding up computations from hours to seconds, which in turn demands a massive increase in computational capability.
Fortunately, the cost of computing has fallen dramatically in the past decade. A prominent reason is the recent introduction of manycore processors such as GPUs, which now power the majority of the world’s leading supercomputers. These devices owe their success to being optimized for massively parallel workloads, such as applying similar ODE kernel computations to millions of mesh elements in scientific computing applications. Unlike CPUs, which are typically optimized for sequential performance, GPU architectures can dedicate more transistors to computation, thereby increasing parallel throughput and energy efficiency.