Optimizing sparse matrix kernels on coprocessors

Bibliographic Details
Main Author: Lim, Wee Siong
Other Authors: Turner, Stephen John
Format: Final Year Project
Language: English
Published: 2014
Subjects:
Online Access:http://hdl.handle.net/10356/59045
Institution: Nanyang Technological University
Description
Summary: Accelerators such as the Graphics Processing Unit (GPU) have increasingly been used in science and engineering to speed up applications and computations. However, GPUs require developers to learn new programming methodologies in order to accelerate such applications. To address this issue, Intel released the Xeon Phi coprocessor with the goal of easing such development. In some scientific and engineering applications, sparse matrix-vector multiplication (SpMV) kernels are the bottleneck and thus serve as the main target for acceleration. In this report, we examine the performance of SpMV kernels on the coprocessor together with various optimization methods, such as vectorization, prefetching, and auto-tuning, to achieve a higher rate of floating-point operations. Our work shows that SpMV kernels can attain much better performance than the basic implementation, especially with the use of vectorization.