Evaluation of AI Bias and Fairness in Banking Sales Agent Acquisition (BRIlink Agents, Bank Rakyat Indonesia)

Authors

  • Berliana Shafa Wardani, Telkom University
  • Siti Sa'adah, Telkom University
  • Dade Nurjanah, Telkom University

Abstract

Agent acquisition is the process of building customer loyalty by recruiting customers to become cooperating partners (agents) of the bank. Important customer features are considered during the acquisition process. This study uses the BRIlink dataset, which records banking sales agent acquisition at Bank Rakyat Indonesia (BRI). The large volume of BRI customer data introduces diversity that can lead to uneven agent-acquisition outcomes, so bias detection and mitigation algorithms are needed to achieve fairness. AI Fairness 360 (AIF360) is a toolkit that provides bias detection and mitigation algorithms. The mitigation algorithms in AIF360 are divided into three stages: reweighing and learning fair representations in pre-processing, prejudice remover and adversarial debiasing in in-processing, and calibrated equalized odds and reject option classification in post-processing. The output of this study is a comparison of bias-detection scores, disparate impact (DI) and statistical parity difference (SPD), before and after mitigation. The reweighing algorithm yields an average DI of 0.8 and SPD of 0.102, indicating successful mitigation, but it reduces the AUC. In contrast, adversarial debiasing and reject option classification mitigate bias while preserving the AUC. This research can help make BRIlink agent acquisition fairer.
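The two detection metrics named in the abstract can be sketched in plain Python. The toy data below is hypothetical (not from the BRIlink dataset); AIF360's `BinaryLabelDatasetMetric` exposes the same quantities via its `disparate_impact()` and `statistical_parity_difference()` methods.

```python
# Hypothetical toy data: s is the protected attribute (1 = privileged
# group), y is the predicted label (1 = favorable outcome, e.g. the
# customer is selected as an agent).
s = [1, 1, 1, 1, 0, 0, 0, 0]
y = [1, 1, 1, 0, 1, 0, 0, 0]

def favorable_rate(s, y, group):
    """P(y = 1 | s = group): share of favorable outcomes in one group."""
    outcomes = [yi for si, yi in zip(s, y) if si == group]
    return sum(outcomes) / len(outcomes)

p_unpriv = favorable_rate(s, y, group=0)  # 0.25
p_priv = favorable_rate(s, y, group=1)    # 0.75

# Disparate impact: ratio of favorable rates. Fair when close to 1;
# the common "80% rule" flags DI below 0.8 as biased.
di = p_unpriv / p_priv

# Statistical parity difference: gap in favorable rates. Fair near 0.
spd = p_unpriv - p_priv
```

With this toy data, DI is about 0.33 and SPD is -0.5, so a detector would flag the outcome as biased against the unprivileged group.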

Keywords— agent acquisition, bias, fairness, mitigation, BRIlink
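Of the pre-processing mitigations mentioned above, reweighing is the simplest to illustrate. The sketch below implements the Kamiran and Calders weighting scheme that AIF360's `Reweighing` transformer applies: each (group, label) cell gets weight P(s)·P(y) / P(s, y), i.e. expected over observed frequency, so that the protected attribute becomes statistically independent of the label. The toy data is hypothetical.

```python
from collections import Counter

# Hypothetical toy data: s = protected attribute, y = label.
s = [1, 1, 1, 1, 0, 0, 0, 0]
y = [1, 1, 1, 0, 1, 0, 0, 0]
n = len(s)

count_s = Counter(s)
count_y = Counter(y)
count_sy = Counter(zip(s, y))

# Reweighing: w(s, y) = P(s) * P(y) / P(s, y), computed empirically.
weights = {
    (si, yi): (count_s[si] / n) * (count_y[yi] / n) / (count_sy[(si, yi)] / n)
    for (si, yi) in count_sy
}

def weighted_rate(group):
    """Weighted favorable-outcome rate of one group after reweighing."""
    num = sum(weights[(si, yi)] * yi for si, yi in zip(s, y) if si == group)
    den = sum(weights[(si, yi)] for si, yi in zip(s, y) if si == group)
    return num / den
```

After reweighing, both groups have the same weighted favorable rate, so a classifier trained with these sample weights sees DI of 1 and SPD of 0 on the training distribution.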


Published

2023-12-27

Section

Program Studi S1 Informatika