Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters

The performance of MPI operations remains a critical issue for high-performance computing systems, particularly on more advanced processor technologies. Consequently, this study concentrates on benchmarking an MPI implementation on multi-core architecture by measuring the performance of Open MPI collective communication on Intel Xeon dual quad-core Gigabit Ethernet and InfiniBand clusters using SKaMPI. It focuses on well-known collective communication routines such as MPI_Bcast, MPI_Alltoall, MPI_Scatter and MPI_Gather. The results show that MPI collective communication on the InfiniBand cluster performed distinctly better in terms of both latency and throughput. The analysis indicates that the algorithms used for collective communication performed very well for all message sizes, except for the MPI_Bcast and MPI_Alltoall operations in inter-node communication. Nevertheless, InfiniBand provided the lowest latency for all operations, since it gives applications a direct, easy-to-use messaging service, whereas Gigabit Ethernet must still request access to the server's communication resources through the operating system, with the attendant complex interaction between application and network.
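For orientation, the sketch below illustrates the kind of measurement SKaMPI automates: timing a collective call (here MPI_Bcast) over repeated iterations on a communicator and reporting the slowest rank's average per-call latency. This is a simplified, hypothetical example rather than the SKaMPI benchmark or the authors' setup; the message size and repetition count are arbitrary illustrative choices.

/*
 * Minimal sketch (not SKaMPI itself): time MPI_Bcast for one message size
 * and report the slowest rank's average per-call latency.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int msg_bytes = 1 << 20;   /* 1 MiB message (illustrative)  */
    const int reps      = 100;       /* repetitions to smooth noise   */
    char *buf = malloc(msg_bytes);

    /* Warm-up call so connection setup is not counted in the timing. */
    MPI_Bcast(buf, msg_bytes, MPI_CHAR, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++)
        MPI_Bcast(buf, msg_bytes, MPI_CHAR, 0, MPI_COMM_WORLD);
    double local = (MPI_Wtime() - t0) / reps;

    /* The latency of the slowest rank is the figure usually reported. */
    double worst;
    MPI_Reduce(&local, &worst, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("MPI_Bcast, %d bytes: %.3f us per call\n", msg_bytes, worst * 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched over each interconnect in turn, the same binary yields directly comparable Gigabit Ethernet and InfiniBand numbers; substituting MPI_Alltoall, MPI_Scatter or MPI_Gather extends the comparison to the other routines studied.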


Bibliographic Details
Main Authors: Ismail, Roswan, Abdul Hamid, Nor Asilah Wati, Othman, Mohamed, Latip, Rohaya
Format: Article
Language: English
Published: Science Publications 2013
Online Access:http://psasir.upm.edu.my/id/eprint/30612/1/Performance%20analysis%20of%20Message%20Passing%20Interface%20collective%20communication%20on%20intel%20xeon%20quad.pdf
http://psasir.upm.edu.my/id/eprint/30612/
http://thescipub.com/abstract/10.3844/jcssp.2013.455.462
Institution: Universiti Putra Malaysia
Language: English
id my.upm.eprints.30612
record_format eprints
spelling my.upm.eprints.30612 2015-09-21T02:12:13Z http://psasir.upm.edu.my/id/eprint/30612/ Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters Ismail, Roswan Abdul Hamid, Nor Asilah Wati Othman, Mohamed Latip, Rohaya The performance of MPI operations remains a critical issue for high-performance computing systems, particularly on more advanced processor technologies. Consequently, this study concentrates on benchmarking an MPI implementation on multi-core architecture by measuring the performance of Open MPI collective communication on Intel Xeon dual quad-core Gigabit Ethernet and InfiniBand clusters using SKaMPI. It focuses on well-known collective communication routines such as MPI_Bcast, MPI_Alltoall, MPI_Scatter and MPI_Gather. The results show that MPI collective communication on the InfiniBand cluster performed distinctly better in terms of both latency and throughput. The analysis indicates that the algorithms used for collective communication performed very well for all message sizes, except for the MPI_Bcast and MPI_Alltoall operations in inter-node communication. Nevertheless, InfiniBand provided the lowest latency for all operations, since it gives applications a direct, easy-to-use messaging service, whereas Gigabit Ethernet must still request access to the server's communication resources through the operating system, with the attendant complex interaction between application and network. Science Publications 2013-04 Article PeerReviewed application/pdf en http://psasir.upm.edu.my/id/eprint/30612/1/Performance%20analysis%20of%20Message%20Passing%20Interface%20collective%20communication%20on%20intel%20xeon%20quad.pdf Ismail, Roswan and Abdul Hamid, Nor Asilah Wati and Othman, Mohamed and Latip, Rohaya (2013) Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters. Journal of Computer Science, 9 (4). 455-462. ISSN 1549-3636; ESSN: 1552-6607 http://thescipub.com/abstract/10.3844/jcssp.2013.455.462 10.3844/jcssp.2013.455.462
institution Universiti Putra Malaysia
building UPM Library
collection Institutional Repository
continent Asia
country Malaysia
content_provider Universiti Putra Malaysia
content_source UPM Institutional Repository
url_provider http://psasir.upm.edu.my/
language English
description The performance of MPI operations remains a critical issue for high-performance computing systems, particularly on more advanced processor technologies. Consequently, this study concentrates on benchmarking an MPI implementation on multi-core architecture by measuring the performance of Open MPI collective communication on Intel Xeon dual quad-core Gigabit Ethernet and InfiniBand clusters using SKaMPI. It focuses on well-known collective communication routines such as MPI_Bcast, MPI_Alltoall, MPI_Scatter and MPI_Gather. The results show that MPI collective communication on the InfiniBand cluster performed distinctly better in terms of both latency and throughput. The analysis indicates that the algorithms used for collective communication performed very well for all message sizes, except for the MPI_Bcast and MPI_Alltoall operations in inter-node communication. Nevertheless, InfiniBand provided the lowest latency for all operations, since it gives applications a direct, easy-to-use messaging service, whereas Gigabit Ethernet must still request access to the server's communication resources through the operating system, with the attendant complex interaction between application and network.
format Article
author Ismail, Roswan
Abdul Hamid, Nor Asilah Wati
Othman, Mohamed
Latip, Rohaya
spellingShingle Ismail, Roswan
Abdul Hamid, Nor Asilah Wati
Othman, Mohamed
Latip, Rohaya
Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters
author_facet Ismail, Roswan
Abdul Hamid, Nor Asilah Wati
Othman, Mohamed
Latip, Rohaya
author_sort Ismail, Roswan
title Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters
title_short Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters
title_full Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters
title_fullStr Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters
title_full_unstemmed Performance analysis of Message Passing Interface collective communication on Intel Xeon quad-core Gigabit Ethernet and InfiniBand clusters
title_sort performance analysis of message passing interface collective communication on intel xeon quad-core gigabit ethernet and infiniband clusters
publisher Science Publications
publishDate 2013
url http://psasir.upm.edu.my/id/eprint/30612/1/Performance%20analysis%20of%20Message%20Passing%20Interface%20collective%20communication%20on%20intel%20xeon%20quad.pdf
http://psasir.upm.edu.my/id/eprint/30612/
http://thescipub.com/abstract/10.3844/jcssp.2013.455.462
_version_ 1643830111522258944