Particle and Flames in Radiative and Magnetic Flows

October 11-15, 2010
Lyon, France

Contribution of Glib Ivashkevych


Python library for simulations of ensembles of particles on GPU


"Stellar structure and evolution"


gpgpu, chaotic dynamics, escape, star cluster


Simulations of ensembles of particles with chaotic dynamics are an appropriate candidate for implementation on GPU. With or without interactions, such simulations scale well to the highly parallel graphics processor. In this report we present a library which implements the most common routines for simulations of particle ensembles.

Although the "chaotic" formulation of the escape problem does not contain interactions between particles, we also implement suitable algorithms for simulations of interacting particles. These routines are likewise focused on the study of escape, so escape rates and distributions of escaping particles are the main targets of the computation schemes. Simulation of interacting particles is implemented as a direct gravitational N-body scheme. The initial distribution of particles can follow one of the predefined ensembles (e.g., a uniform or "point" distribution), while the masses of the particles can be generated according to a user-provided mass distribution. Errors are controlled by comparing the total energies at the start and at the end of the simulation (for both interacting and non-interacting ensembles).

Such an implementation is not applicable to simulations of large systems, but it is suitable for systems with a suitably small number of particles, where the limit depends on the hardware: the amount of GPU memory, the number of GPUs, etc. With this, the proposed library could be suitable for astrophysical simulations, for example to simulate the decay of star clusters, in both the N-body and mean-field formulations, and for the study of chaos in a wide range of systems.

Some hints on combining GPU technologies with OpenMP and MPI are also presented: for example, a non-interacting ensemble can be divided more or less painlessly between different GPUs in the case of OpenMP, or between different hosts in the case of MPI. However, N-body simulations are more complex in this respect and are currently implemented only for a single GPU.
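The direct N-body scheme with start/end energy comparison described above can be sketched as follows. This is a minimal CPU illustration of the idea, not the library's actual API; all function names are illustrative, G is set to 1, and a small softening length is assumed to regularize close encounters.

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Pairwise softened gravitational accelerations (G = 1)."""
    d = pos[None, :, :] - pos[:, None, :]      # d[i, j] = pos[j] - pos[i]
    r2 = (d ** 2).sum(-1) + eps ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)              # no self-interaction
    return (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

def total_energy(pos, vel, mass, eps=1e-3):
    """Kinetic plus softened pairwise potential energy."""
    kin = 0.5 * (mass * (vel ** 2).sum(-1)).sum()
    d = pos[None, :, :] - pos[:, None, :]
    r = np.sqrt((d ** 2).sum(-1) + eps ** 2)
    off_diag = ~np.eye(len(mass), dtype=bool)
    pot = -0.5 * (mass[:, None] * mass[None, :] / r)[off_diag].sum()
    return kin + pot

def run(pos, vel, mass, dt=1e-3, steps=1000):
    """Integrate with kick-drift-kick leapfrog; return the relative
    difference between the initial and final total energy, which
    serves as the accuracy check."""
    e0 = total_energy(pos, vel, mass)
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    e1 = total_energy(pos, vel, mass)
    return abs((e1 - e0) / e0)
```

On the GPU the O(N^2) acceleration loop is the part that parallelizes naturally, one thread per particle, which is why the memory per device bounds the feasible N.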
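Generating particle masses from a user-provided distribution, as mentioned above, can be done by inverse-transform sampling; the sketch below is illustrative only (the function names and the Salpeter-like power-law example, including its mass bounds, are assumptions, not the library's interface).

```python
import random

def sample_masses(n, inv_cdf, seed=None):
    """Draw n masses from a user-provided distribution, given as the
    inverse CDF: a callable mapping u in [0, 1) to a mass."""
    rng = random.Random(seed)
    return [inv_cdf(rng.random()) for _ in range(n)]

def salpeter_inv_cdf(u, m_lo=0.1, m_hi=10.0, alpha=2.35):
    """Inverse CDF of a Salpeter-like power law dN/dm ~ m**-alpha,
    truncated to [m_lo, m_hi] (bounds here are illustrative)."""
    a = 1.0 - alpha
    return (m_lo ** a + u * (m_hi ** a - m_lo ** a)) ** (1.0 / a)
```

A predefined ensemble (uniform positions, say) would be generated the same way, component by component, before the state is uploaded to the GPU.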
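The "painless" division of a non-interacting ensemble between GPUs (OpenMP) or hosts (MPI) amounts to partitioning the particle indices, since no particle needs data from any other. A minimal sketch of such a partition (the helper name is hypothetical):

```python
def partition(n_particles, n_devices):
    """Split indices 0..n_particles-1 into near-equal contiguous
    chunks, one per GPU (OpenMP thread) or per host (MPI rank).
    The first (n_particles % n_devices) chunks get one extra particle."""
    base, extra = divmod(n_particles, n_devices)
    chunks, start = [], 0
    for d in range(n_devices):
        size = base + (1 if d < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks
```

Each device then integrates its chunk independently; only the final escape statistics need to be gathered. The interacting N-body case does not decompose this way, since every pair of particles couples, which is why it is currently limited to a single GPU.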