Loop unroll optimization for GPU implementation

Bibliographic Details
Main Author: Wu, Jianghua.
Other Authors: School of Computer Engineering
Format: Final Year Project
Language: English
Published: 2012
Subjects:
Online Access: http://hdl.handle.net/10356/48561
Description
Summary: This report presents the implementation and optimization of two image resize algorithms, namely bilinear and bicubic interpolation. The optimization aims to improve execution time and is carried out primarily with Nvidia's Compute Unified Device Architecture (CUDA). Both algorithms are first implemented in C++ before CUDA code is added. The main challenges of the project were picking up CUDA programming and understanding the mathematics involved before translating it into algorithms. Integrating CUDA, by replacing the loops used in the computations with threads running in parallel, demonstrated a significant speed-up in execution time. There is still room for code refactoring, better CUDA implementation, and the use of a more powerful Graphics Processing Unit (GPU), which would improve the design and further optimize the developed application. In conclusion, the project has shown that, under certain conditions, leveraging CUDA is a viable optimization tool for graphics processing algorithms such as the ones mentioned above.
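
As an illustration of the loop-for-threads substitution described in the summary, the following is a minimal CUDA sketch, not taken from the report, of a bilinear resize kernel: each output pixel is computed by its own thread instead of inside nested row/column loops on the CPU. It assumes a single-channel 8-bit image already resident in device memory and output dimensions greater than one; the function and parameter names are hypothetical.

#include <cuda_runtime.h>

// Bilinear resize: one thread per output pixel, replacing the nested
// row/column loops of a CPU implementation.
__global__ void bilinearResize(const unsigned char* src, int srcW, int srcH,
                               unsigned char* dst, int dstW, int dstH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // output column
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // output row
    if (x >= dstW || y >= dstH) return;

    // Map the output pixel back into source coordinates
    // (assumes dstW > 1 and dstH > 1).
    float gx = x * (float)(srcW - 1) / (dstW - 1);
    float gy = y * (float)(srcH - 1) / (dstH - 1);
    int x0 = (int)gx, y0 = (int)gy;
    int x1 = min(x0 + 1, srcW - 1);
    int y1 = min(y0 + 1, srcH - 1);
    float fx = gx - x0, fy = gy - y0;

    // Weighted average of the four neighbouring source pixels.
    float p00 = src[y0 * srcW + x0];
    float p01 = src[y0 * srcW + x1];
    float p10 = src[y1 * srcW + x0];
    float p11 = src[y1 * srcW + x1];
    float val = (1.0f - fy) * ((1.0f - fx) * p00 + fx * p01)
              + fy          * ((1.0f - fx) * p10 + fx * p11);

    dst[y * dstW + x] = (unsigned char)(val + 0.5f);
}

// Host-side launch: a grid of 16x16 thread blocks covers the output image.
// Source and destination buffers are assumed to be device pointers.
void resizeOnGpu(const unsigned char* d_src, int srcW, int srcH,
                 unsigned char* d_dst, int dstW, int dstH)
{
    dim3 block(16, 16);
    dim3 grid((dstW + block.x - 1) / block.x,
              (dstH + block.y - 1) / block.y);
    bilinearResize<<<grid, block>>>(d_src, srcW, srcH, d_dst, dstW, dstH);
}

The bicubic variant follows the same pattern, with each thread sampling a 4x4 neighbourhood instead of 2x2; the thread-per-pixel mapping is what removes the serial loops and yields the speed-up discussed in the summary.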