BIAS HANDLING IN-PROCESSING ALGORITHMS COMPARISON IN MACHINE LEARNING
Main Author: | Ellen |
---|---|
Format: | Final Project |
Language: | Indonesia |
Online Access: | https://digilib.itb.ac.id/gdl/view/49516 |
Institution: | Institut Teknologi Bandung |
id |
id-itb.:49516 |
---|---|
spelling |
id-itb.:49516 2020-09-16T21:37:29Z BIAS HANDLING IN-PROCESSING ALGORITHMS COMPARISON IN MACHINE LEARNING. Ellen. Indonesia. Final Project. Keywords: bias, fairness, adversarial debiasing, prejudice remover, additive counterfactually fair, decision boundary type measurements. INSTITUT TEKNOLOGI BANDUNG. https://digilib.itb.ac.id/gdl/view/49516 text |
institution |
Institut Teknologi Bandung |
building |
Institut Teknologi Bandung Library |
continent |
Asia |
country |
Indonesia |
content_provider |
Institut Teknologi Bandung |
collection |
Digital ITB |
language |
Indonesia |
description |
Algorithmic bias is a form of bias that occurs when mathematical rules favor one
set of attributes over others in relation to some target variable, such as “approving”
or “denying” a loan (Bantilan, 2018). Algorithmic bias surfaces when a trained
machine learning model produces systematic predictions that favor a group of
attributes with respect to the target variable.
In this work, we experimented with handling bias using in-processing algorithms:
adversarial debiasing, prejudice remover, additive counterfactually fair, and
decision boundary fairness measurements. We tested them on the COMPAS, Adult
Income, German Credit Risk, and Bank Marketing datasets, then analyzed and
compared the results of models with bias handling against models without it. We
also built a Python 3 library implementing the four algorithms above, for ease of
use when handling bias with in-processing algorithms.
From these experiments, we found that these algorithms can reduce bias when the
right hyperparameters are configured. However, none of the algorithms performed
uniformly across the different datasets. Of the algorithms tested, the decision
boundary type measurements algorithm produced the most significant changes in
accuracy, F1 score, and bias metrics, while the prejudice remover algorithm
produced the least significant changes in all three metrics.
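As a rough, self-contained illustration of the kind of bias metric such a comparison relies on (the function name and toy data below are illustrative, not taken from the project's library), statistical parity difference measures the gap in favorable-outcome rates between a privileged and an unprivileged group:

```python
def statistical_parity_difference(y_pred, protected):
    """P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged).

    y_pred:    0/1 model predictions (e.g. 1 = loan approved).
    protected: 0/1 group membership (1 = privileged group).
    A value of 0 means both groups receive the favorable
    prediction at the same rate; the farther from 0, the
    more the model's predictions favor one group.
    """
    priv = [p for p, g in zip(y_pred, protected) if g == 1]
    unpriv = [p for p, g in zip(y_pred, protected) if g == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Toy predictions for eight loan applicants.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
protected = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, protected))  # -0.5
```

A model with bias handling would be expected to move this value closer to 0 than its unmitigated counterpart, at some cost in accuracy or F1 score.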
|
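The in-processing idea itself can be sketched in a few lines: add a fairness penalty to an ordinary training loss so that the learned decision boundary decorrelates from the protected attribute. The sketch below (plain Python, deliberately simplified, and not the project's implementation) penalizes the squared covariance between the protected attribute and the signed distance to a logistic regression boundary, in the spirit of decision-boundary covariance constraints:

```python
import math

def train_fair_logreg(X, y, a, lam=1.0, lr=0.1, epochs=500):
    """Gradient descent on logistic loss + lam * cov(a, X @ w)^2.

    X: list of feature vectors, y: 0/1 labels,
    a: 0/1 protected attribute. lam = 0 recovers plain
    logistic regression; larger lam pushes the boundary
    to be uncorrelated with group membership.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    a_mean = sum(a) / n
    ac = [ai - a_mean for ai in a]  # centered protected attribute
    for _ in range(epochs):
        z = [sum(wj * xj for wj, xj in zip(w, xi)) for xi in X]
        p = [1.0 / (1.0 + math.exp(-zi)) for zi in z]
        cov = sum(ai * zi for ai, zi in zip(ac, z)) / n
        for j in range(d):
            g_loss = sum((p[i] - y[i]) * X[i][j] for i in range(n)) / n
            g_fair = 2.0 * cov * sum(ac[i] * X[i][j] for i in range(n)) / n
            w[j] -= lr * (g_loss + lam * g_fair)
    return w
```

Training the same data once with `lam=0` and once with a large `lam`, then comparing `|cov(a, X @ w)|`, shows the penalty shrinking the boundary's correlation with the protected attribute, typically at some cost in accuracy — the accuracy/fairness trade-off the abstract describes.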
format |
Final Project |
author |
Ellen |
title |
BIAS HANDLING IN-PROCESSING ALGORITHMS COMPARISON IN MACHINE LEARNING |
url |
https://digilib.itb.ac.id/gdl/view/49516 |