Hardware Acceleration

Hardware acceleration of high-performance computing (HPC) codes has become common practice in data centers and supercomputing facilities, where graphics processing units (GPUs) and reconfigurable hardware (FPGAs) can deliver very significant performance gains. However, their actual impact on performance greatly depends on each particular application and the opportunities it offers for parallelization on these devices.

The regular architectures of GPUs, and of FPGAs configured with many identical cores, adapt well to computations on structured data sets (i.e., multi-dimensional arrays) where access patterns are predictable, and the development of parallel software for these applications is well supported by languages such as CUDA (GPUs) and OpenCL (GPUs and FPGAs).

However, when unstructured data sets (i.e., graphs) are processed, the irregular nature of the computations and memory access patterns poses serious difficulties which, in some cases, are best addressed with hand-crafted custom FPGA designs. For these cases, the development of domain-specific design tools and languages is the subject of ongoing research. In particular, the LSI has long-standing experience in the development of FPGA design methods and tools for computational fluid dynamics (CFD) simulation codes on the unstructured meshes used by the aeronautics industry.

Currently, the LSI research on GPU- and FPGA-based acceleration is also exploring the possible benefits of executing scientific codes that process spectral images obtained from radio telescopes. This research is carried out within the AMIGA series of projects (AMIGA-5, AMIGA-6, AMIGA-7).