Rank-Order Weighting of Web Attributes for Website Evaluation

Bibliographic Details
Main Author: Saeid, Mehri
Format: Thesis
Language: English
Published: 2008
Online Access:http://psasir.upm.edu.my/id/eprint/7131/1/FSKTM_2008_21a.pdf
http://psasir.upm.edu.my/id/eprint/7131/
Institution: Universiti Putra Malaysia
Description
Summary: The rapid growth of web applications increases the need to evaluate them objectively. In recent years, some works such as WebQEM have evaluated web applications objectively. However, weighting web attributes, which is one step in evaluating web applications, remains completely subjective, depending mostly on experts' judgments. A two-step weighting approach is proposed to solve the attribute weighting problem when evaluating web applications in different domains. The approach divides the weighting step into two steps: ranking and then weighting. First, the web attributes are ranked according to the order of user expectations in each web domain; second, rank-order weighting methods (the Rank-Sum weighting method (RS), the Reciprocal of Ranks weighting method (RR), and the Rank-Order Centroid weighting method (ROC)) are used to elicit weights from the ranked attributes. A simulation is conducted to compare the rank-order weighting methods (RR, RS, and ROC) with simulated experts. The experts' judgments are simulated under the assumption that, for some particular web attributes, experts weight the attributes completely subjectively (randomly, without prior ranking). For the same attributes, the proposed two-step weighting approach is also applied. Two kinds of comparison are made: a comparison of weights and a comparison of quality scores. The simulation results are used to determine which method (RR, RS, or ROC) can serve as a surrogate for experts' judgments. The results show that the Rank-Sum weighting method (RS) completely complies with experts' judgments in terms of rank preservation 90% of the time, compared to RR and ROC, which indicates that RS is the best method. RS also has a very small ValueLoss compared to RR and ROC; that is, using RS weights gives a web application a quality score that does not differ much from the experts' judgments. Furthermore, RS is the best method (compared to RR and ROC) at conforming to the experts in choosing the best web application 100% of the time. Thus, RS is suggested as a good surrogate for experts' weights when evaluating web applications.
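For reference, the three rank-order weighting methods named in the summary follow standard formulas from the decision-analysis literature. The sketch below (Python, illustrative only, not taken from the thesis) computes the weight vector each method assigns to n attributes that have already been ranked from 1 (most important) to n:

```python
# Minimal sketch of the standard rank-order weighting formulas (RS, RR, ROC).
# Function names and the example below are illustrative, not the thesis's code.

def rank_sum_weights(n):
    """Rank-Sum (RS): w_r = 2(n + 1 - r) / (n(n + 1)) for rank r = 1..n."""
    return [2 * (n + 1 - r) / (n * (n + 1)) for r in range(1, n + 1)]

def reciprocal_rank_weights(n):
    """Reciprocal of Ranks (RR): w_r = (1/r) / sum_j (1/j)."""
    total = sum(1 / j for j in range(1, n + 1))
    return [(1 / r) / total for r in range(1, n + 1)]

def rank_order_centroid_weights(n):
    """Rank-Order Centroid (ROC): w_r = (1/n) * sum_{k=r}^{n} (1/k)."""
    return [sum(1 / k for k in range(r, n + 1)) / n for r in range(1, n + 1)]

if __name__ == "__main__":
    # Example: four web attributes ranked by user expectations, rank 1 first.
    n = 4
    print("RS :", rank_sum_weights(n))   # [0.4, 0.3, 0.2, 0.1]
    print("RR :", reciprocal_rank_weights(n))
    print("ROC:", rank_order_centroid_weights(n))
```

Each method turns the same ranking into a different weight profile (ROC is the steepest, RS the flattest), which is what the simulation compares against the randomly weighted "expert" judgments.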