Performance Analysis of Class-Rebalancing Self-Training Framework for Imbalanced Semi-Supervised Learning

Authors

  • Alvaro Septra Dominggo Nauw, Telkom University
  • Suryo Adhi Wibowo, Telkom University
  • Casi Setianingsih, Telkom University

Abstract

This research analyzes the effectiveness of the Class-Rebalancing Self-Training (CReST) method for semi-supervised learning (SSL) on class-imbalanced data. The study uses the long-tailed CIFAR-10 dataset to evaluate the performance of SSL with CReST, implemented in the Python programming language on the Google Colab platform. The results show that CReST effectively reduces the number of pseudo-labels assigned to the majority classes and increases recall on the minority classes, with the best performance achieved at Generation 16. However, the average per-class recall decreased after Generation 16. The study suggests addressing the over-sampling issue and exploring the application of the CReST framework in other areas of machine learning and AI.
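
For context on the mechanism behind these results, the following is a minimal sketch of the class-rebalanced pseudo-label selection rule at the heart of CReST, as described by Wei et al. (2021): at each generation, pseudo-labels predicted for rarer classes are kept at a higher rate (the rarest class keeps everything), which counteracts the majority-class bias of standard self-training. The sketch assumes NumPy; the function name crest_select, the default alpha value, and the confidence-ranked selection details are illustrative assumptions, not the authors' implementation.

import numpy as np

def crest_select(pseudo_probs, class_counts, alpha=3.0):
    """Class-rebalanced pseudo-label selection (illustrative sketch).

    pseudo_probs : (N, C) softmax outputs of the current model on unlabeled data.
    class_counts : (C,) labeled-sample count per class.
    Keep rate for a class of frequency rank k (1 = most frequent) is
    mu_k = (N_{C+1-k} / N_1) ** alpha, so the rarest class keeps all of
    its pseudo-labels and the most frequent class keeps the fewest.
    """
    class_counts = np.asarray(class_counts)
    preds = pseudo_probs.argmax(axis=1)      # hard pseudo-labels
    conf = pseudo_probs.max(axis=1)          # prediction confidence
    order = np.argsort(-class_counts)        # class indices, most frequent first
    rank = np.empty_like(order)
    rank[order] = np.arange(len(order))      # rank[c] = 0 for the most frequent class
    n_sorted = np.sort(class_counts)[::-1]   # N_1 >= N_2 >= ... >= N_C
    C = len(class_counts)
    keep_idx, keep_lab = [], []
    for c in range(C):
        # keep rate: 1.0 for the rarest class, smallest for the most frequent
        mu = (n_sorted[C - 1 - rank[c]] / n_sorted[0]) ** alpha
        cand = np.where(preds == c)[0]                   # samples pseudo-labeled as c
        n_keep = int(np.ceil(mu * len(cand)))
        top = cand[np.argsort(-conf[cand])[:n_keep]]     # keep the most confident
        keep_idx.append(top)
        keep_lab.append(np.full(len(top), c))
    return np.concatenate(keep_idx), np.concatenate(keep_lab)

The selected samples are added to the labeled set and the model is retrained; one such cycle is a "generation" in the sense used above, and repeating it is what produces the Generation-16 behavior reported in the abstract.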


Keywords— CReST, Semi-Supervised Learning, imbalanced data, pseudo-label, Semi-Supervised Learning Generation



Published

2023-11-01

Section

Program Studi S1 Teknik Telekomunikasi