Loop unroll optimization for GPU implementation

Bibliographic Details
Main Author: Wu, Jianghua.
Other Authors: School of Computer Engineering
Format: Final Year Project
Language: English
Published: 2012
Subjects:
Online Access:http://hdl.handle.net/10356/48561
Institution: Nanyang Technological University
Description
Summary: This report presents the implementation and optimization of two image resize algorithms, namely Bilinear and Bicubic Interpolation. The optimization seeks to improve execution time and is primarily done with Nvidia’s Compute Unified Device Architecture (CUDA). Both algorithms are implemented in C++ before the CUDA code is added. The challenges in the project were picking up CUDA programming and understanding the mathematics involved before converting it into algorithms. Substituting the loops in the computations with threads running in parallel under CUDA demonstrated a significant speed-up in execution time. There is still room for code refactoring, a better CUDA implementation and the use of a more powerful Graphics Processing Unit (GPU), which would improve both the design and the optimization of the developed application. In conclusion, the project has shown that, under certain conditions, leveraging GPU parallelism through CUDA is a viable optimization tool for graphics processing algorithms such as the ones mentioned above.