ANALISIS KINERJA HIGH PERFORMANCE COMPUTING PADA KOMPUTASI PARALEL UNTUK MENYELESAIKAN INTEGRAL NUMERIS
(Performance Analysis of High Performance Computing in Parallel Computation for Solving Numerical Integrals)
Format: Theses and Dissertations, NonPeerReviewed
Published: [Yogyakarta] : Universitas Gadjah Mada, 2011
Online Access: https://repository.ugm.ac.id/91021/
http://etd.ugm.ac.id/index.php?mod=penelitian_detail&sub=PenelitianDetail&act=view&typ=html&buku_id=52860
Institution: Universitas Gadjah Mada
Summary: Along with the increasing need to solve various numerical computation problems effectively and efficiently, the need for computer systems with high computing capability has grown. Such systems offer the ability to integrate the resources of multiple computers to solve a numerical computing problem; a system of this kind is called a computer cluster. The cluster must be able to perform its computation using a parallel computing mechanism called message passing. In this study, the message passing mechanism is implemented on a computer cluster using the Open Message Passing Interface (OpenMPI) library.
This study aims to analyze the performance of a computer cluster that uses the MPI mechanism to handle parallel computing processes, measured by execution time, speedup, and efficiency. The parallel computations are executed with OpenMPI to solve numerical integration problems using the trapezoidal method. The research method is to implement OpenMPI on a Linux-based cluster system and then analyze the performance of the system on the parallel computation as the number of intervals used in the numerical integration problem is increased.
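The thesis itself contains the cluster implementation; as a rough illustration of the trapezoidal method the abstract refers to, here is a minimal sequential sketch in Python. The integrand, bounds, and the sequential simulation of the per-node split are hypothetical examples for illustration, not taken from the thesis.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule over [a, b] with n equal intervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

def trapezoid_split(f, a, b, n, p):
    """Simulate, sequentially, the p-way domain decomposition a cluster
    would use: each of p 'nodes' integrates an equal sub-range and the
    partial sums are combined (in MPI, e.g. via send/receive or reduce)."""
    width = (b - a) / p
    return sum(
        trapezoid(f, a + r * width, a + (r + 1) * width, n // p)
        for r in range(p)
    )
```

Because the p sub-ranges together cover the same grid points as the single sequential rule, the split result matches the sequential one up to rounding, which is what makes per-interval work easy to distribute across nodes.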
The results of this study show that with a small number of integration intervals, parallel execution is slower than sequential execution, even when the number of nodes in the parallel execution is increased. With a high number of integration intervals, parallel execution becomes faster as the number of nodes increases. As the number of intervals grows, the sequential execution time is initially shorter than the parallel execution time, but this advantage is eventually reduced significantly as the number of integration intervals keeps increasing. With a limited number of nodes, the use of the send and receive routines produces significantly faster execution times than the broadcast and reduce routines, which are known as the dynamic message passing communication model.
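Speedup and efficiency, the metrics the study reports, are conventionally defined relative to the sequential execution time; a standard formulation (assuming the thesis follows the usual definitions) is:

\[
S(p) = \frac{T_{\text{seq}}}{T_{\text{par}}(p)}, \qquad
E(p) = \frac{S(p)}{p},
\]

where \(T_{\text{par}}(p)\) is the parallel execution time on \(p\) nodes; ideal (linear) speedup corresponds to \(S(p) = p\), i.e. \(E(p) = 1\).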
Keywords: high performance computing, cluster, message passing, parallel computing, numerical integration, OpenMPI, execution time, speedup, efficiency