Collecting and analyzing I/O patterns for data intensive applications
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2012
Subjects:
Online Access: http://hdl.handle.net/10356/48601
Institution: Nanyang Technological University
Summary: As the reliance on computer systems increases, so do the complexity of those systems and the size of their data. To maintain the efficiency of systems and enhance their scalability, different optimization techniques can be employed. This project examines the locality of reference of applications, in the hope of optimizing performance by placing data in faster memory, such as caches.
This project uses the Linux blktrace and blkparse utilities, which capture block input/output traces from software applications. The analysis is performed on the Hadoop framework, which connects computer systems to execute tasks in parallel.
The preliminary stage of the analysis dealt with familiarization with the blktrace and blkparse utilities. Since blktrace captures every block input/output event that occurs in the system during a given period, it is essential to filter out only the traces relevant to the analysis. In the process of analyzing the data, several approaches were taken to retrieve and represent the results with increasing accuracy. Because the amount of block input/output read was inconsistent with the file size, different file systems were also analyzed to verify this observation.
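The filtering step described above can be sketched in a few lines. This is a minimal, hypothetical example (not the project's actual scripts) that assumes blkparse's default text output, in which the issuing process name appears in square brackets at the end of each event line:

```python
def filter_trace(lines, process_name):
    """Keep only blkparse event lines issued by a given process.

    Assumes blkparse's default text output, where the issuing
    process name appears in square brackets at the end of each
    event line.
    """
    tag = f"[{process_name}]"
    return [ln for ln in lines if ln.rstrip().endswith(tag)]


def total_bytes(lines, sector_size=512):
    """Sum the sizes of 'sector + count' events, in bytes."""
    total = 0
    for ln in lines:
        fields = ln.split()
        if "+" in fields:
            total += int(fields[fields.index("+") + 1]) * sector_size
    return total


# Hypothetical event lines in blkparse's default text layout:
# device cpu seq timestamp pid action rwbs sector + count [process]
trace = [
    "8,0 1 1 0.000000000 2000 Q R 4315432 + 8 [java]",
    "8,0 1 2 0.000100000 1500 Q W 9912416 + 16 [mysqld]",
    "8,0 1 3 0.000200000 2000 Q R 4315440 + 8 [java]",
]
java_events = filter_trace(trace, "java")
java_bytes = total_bytes(java_events)  # two 8-sector events
```

Comparing `java_bytes` against the size of the file the traced program actually read is one way to expose the overhead discussed below.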
The results show that the current method of filtering the block input/output traces of a specific program includes overhead that makes the trace larger than the original file size.
Analysis of the wordcount job in Hadoop shows that its file accesses exhibit spatial locality: most subsequent block accesses are relatively fast, in the range of 1–4 milliseconds. Analysis of the Database Test Suite 2 shows that MySQL exhibits random-access behavior in its block I/O.
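The timing and locality observations above can be reproduced from a parsed trace with a short sketch. The `events` list below is hypothetical, assuming (timestamp, start-sector) pairs have already been extracted from blkparse output:

```python
def access_gaps_ms(events):
    """Time gaps between consecutive block accesses, in milliseconds.

    events: list of (timestamp_seconds, start_sector) tuples,
    assumed to be sorted by timestamp.
    """
    return [(t2 - t1) * 1000.0
            for (t1, _), (t2, _) in zip(events, events[1:])]


def sequential_fraction(events, max_sector_jump=8):
    """Fraction of consecutive accesses that touch nearby sectors,
    a rough indicator of spatial locality."""
    if len(events) < 2:
        return 0.0
    near = sum(1 for (_, s1), (_, s2) in zip(events, events[1:])
               if abs(s2 - s1) <= max_sector_jump)
    return near / (len(events) - 1)


# Hypothetical (timestamp, sector) pairs extracted from a trace:
events = [(0.000, 100), (0.002, 108), (0.005, 116), (0.050, 9000)]
gaps = access_gaps_ms(events)           # roughly [2.0, 3.0, 45.0]
locality = sequential_fraction(events)  # 2 of 3 transitions are nearby
```

A workload with high `sequential_fraction` and small gaps, like the wordcount access pattern, benefits from caching and prefetching; a low fraction, as observed for MySQL, indicates random access.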