Optimizing sparse matrix kernels on coprocessors
Accelerators such as the Graphics Processing Unit (GPU) are increasingly used in science and engineering to accelerate applications and computations. However, GPUs require developers to learn new programming methodologies in order to accelerate such applications. To address this, Intel...
Saved in:
Main Author: | Lim, Wee Siong |
---|---|
Other Authors: | Stephen John Turner |
Format: | Final Year Project |
Language: | English |
Published: | 2014 |
Subjects: | DRNTU::Engineering::Computer science and engineering::Mathematics of computing::Numerical analysis; DRNTU::Engineering::Computer science and engineering::Data::Data structures; DRNTU::Engineering::Mathematics and analysis |
Online Access: | http://hdl.handle.net/10356/59045 |
Institution: | Nanyang Technological University |
id |
sg-ntu-dr.10356-59045 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-590452023-03-03T20:35:14Z Optimizing sparse matrix kernels on coprocessors Lim, Wee Siong Stephen John Turner School of Computer Engineering Parallel and Distributed Computing Centre DRNTU::Engineering::Computer science and engineering::Mathematics of computing::Numerical analysis DRNTU::Engineering::Computer science and engineering::Data::Data structures DRNTU::Engineering::Mathematics and analysis Accelerators such as the Graphics Processing Unit (GPU) are increasingly used in science and engineering to accelerate applications and computations. However, GPUs require developers to learn new programming methodologies in order to accelerate such applications. To address this, Intel released the Xeon Phi coprocessor with the goal of easing such development. In some scientific and engineering applications, sparse matrix-vector multiplication (SpMV) kernels are the bottleneck and thus serve as the main focal point for acceleration. In this report, we examine the performance of SpMV kernels on the coprocessor as well as optimization methods such as vectorization, prefetching, and auto-tuning to achieve a higher rate of floating-point operations. We show that SpMV kernels can attain much better performance than the basic implementation, especially with vectorization. Bachelor of Engineering (Computer Science) 2014-04-22T01:27:40Z 2014-04-22T01:27:40Z 2014 2014 Final Year Project (FYP) http://hdl.handle.net/10356/59045 en Nanyang Technological University 71 p. application/pdf |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
DRNTU::Engineering::Computer science and engineering::Mathematics of computing::Numerical analysis
DRNTU::Engineering::Computer science and engineering::Data::Data structures
DRNTU::Engineering::Mathematics and analysis |
spellingShingle |
DRNTU::Engineering::Computer science and engineering::Mathematics of computing::Numerical analysis
DRNTU::Engineering::Computer science and engineering::Data::Data structures
DRNTU::Engineering::Mathematics and analysis
Lim, Wee Siong
Optimizing sparse matrix kernels on coprocessors |
description |
Accelerators such as the Graphics Processing Unit (GPU) are increasingly used in science and engineering to accelerate applications and computations. However, GPUs require developers to learn new programming methodologies in order to accelerate such applications. To address this, Intel released the Xeon Phi coprocessor with the goal of easing such development. In some scientific and engineering applications, sparse matrix-vector multiplication (SpMV) kernels are the bottleneck and thus serve as the main focal point for acceleration. In this report, we examine the performance of SpMV kernels on the coprocessor as well as optimization methods such as vectorization, prefetching, and auto-tuning to achieve a higher rate of floating-point operations. We show that SpMV kernels can attain much better performance than the basic implementation, especially with vectorization. |
author2 |
Stephen John Turner |
author_facet |
Stephen John Turner Lim, Wee Siong |
format |
Final Year Project |
author |
Lim, Wee Siong |
author_sort |
Lim, Wee Siong |
title |
Optimizing sparse matrix kernels on coprocessors |
title_short |
Optimizing sparse matrix kernels on coprocessors |
title_full |
Optimizing sparse matrix kernels on coprocessors |
title_fullStr |
Optimizing sparse matrix kernels on coprocessors |
title_full_unstemmed |
Optimizing sparse matrix kernels on coprocessors |
title_sort |
optimizing sparse matrix kernels on coprocessors |
publishDate |
2014 |
url |
http://hdl.handle.net/10356/59045 |
_version_ |
1759854622648827904 |