Evaluating Defect Prediction using a Massive Set of Metrics
To evaluate the performance of a within-project defect prediction approach, researchers normally use precision, recall, and F-measure scores. However, the machine learning literature offers a large number of evaluation metrics for assessing the performance of an algorithm (e.g., Matthews Correlation Coefficient, ...)
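The abstract contrasts precision, recall, and F-measure with alternatives such as the Matthews Correlation Coefficient. As a rough sketch (the function name and the confusion-matrix counts are illustrative, not taken from the paper), these scores can be computed from a binary confusion matrix like so:

```python
import math

def binary_classification_metrics(tp, fp, tn, fn):
    """Compute common defect-prediction evaluation scores
    from confusion-matrix counts (true/false positives/negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    # Matthews Correlation Coefficient: stays informative even when
    # classes are skewed, as is typical in defect data
    # (few buggy modules, many clean ones).
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"precision": precision, "recall": recall,
            "f_measure": f_measure, "mcc": mcc}

# Hypothetical counts: 40 buggy modules caught, 10 false alarms,
# 30 clean modules correctly passed over, 20 buggy modules missed.
scores = binary_classification_metrics(tp=40, fp=10, tn=30, fn=20)
```

Precision and F-measure reward a low false-alarm rate, while MCC folds all four confusion-matrix cells into a single correlation-style score, which is one reason studies like this one compare predictors under many metrics rather than one.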
Main Authors: XUAN, Xiao; LO, David; XIA, Xin; TIAN, Yuan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2015
Online Access: https://ink.library.smu.edu.sg/sis_research/3081
https://ink.library.smu.edu.sg/context/sis_research/article/4081/viewcontent/Defect_prediction_metrics_xuan_2015_afv.pdf
Institution: Singapore Management University
Similar Items
- Revisiting supervised and unsupervised models for effort-aware just-in-time defect prediction
  by: HUANG, Qiao, et al.
  Published: (2018)
- HYDRA: Massively compositional model for cross-project defect prediction
  by: XIA, Xin, et al.
  Published: (2016)
- AutoSpearman: Automatically mitigating correlated software metrics for interpreting defect models
  by: JIARPAKDEE, Jirayus, et al.
  Published: (2018)
- An Empirical Study of Classifier Combination on Cross-Project Defect Prediction
  by: ZHANG, Yun, et al.
  Published: (2015)
- A comparison between software design and code metrics for the prediction of software fault content
  by: Zhao, M., et al.
  Published: (2014)