The performance of BERT as data representation of text clustering

Authors: Subakti, Alvin; Murfi, Hendri; Hariadi, Nora
Information
Journal: Journal of Big Data
Publisher: Springer Science and Business Media Deutschland GmbH; Springer International Publishing
Volume & Issue: Vol. 9, Issue 1
Pages: -
Publication Year: 2022
ISSN: 2196-1115
eISSN: 2196-1115
Source Type: Scopus
Citations
Scopus: 123
Google Scholar: 123
PubMed: 123
Abstract
Text clustering is the task of grouping a set of texts so that texts in the same group are more similar to one another than to texts in other groups. Grouping texts manually requires a significant amount of time and labor, so automation using machine learning is necessary. One of the most frequently used methods to represent textual data is Term Frequency-Inverse Document Frequency (TF-IDF). However, TF-IDF cannot capture the position and context of a word in a sentence. The Bidirectional Encoder Representations from Transformers (BERT) model can produce text representations that incorporate the position and context of a word in a sentence. This research analyzes the performance of the BERT model as a data representation for text clustering. Moreover, various feature extraction and normalization methods are also applied to the data representation of the BERT model. To examine the performance of BERT, we use four clustering algorithms, i.e., k-means clustering, eigenspace-based fuzzy c-means, deep embedded clustering, and improved deep embedded clustering. Our simulations show that BERT outperforms the TF-IDF method in 28 out of 36 metrics. Furthermore, different feature extraction and normalization methods produce varied performances, so their use must be adapted to the text clustering algorithm employed. © 2022, The Author(s).
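As a rough illustration of the pipeline the abstract describes, the sketch below clusters fixed-length text representations with k-means. In the paper's setting each vector would be a TF-IDF row or a pooled BERT sentence embedding; the 2-d toy vectors and the `kmeans` helper here are hypothetical stand-ins for illustration, not the authors' implementation.

```python
import math
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Toy k-means: vectors are tuples of floats (document representations)."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        # Assign each vector to its nearest centroid (Euclidean distance).
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k), key=lambda i: math.dist(v, centroids[i]))
            clusters[j].append(v)
        # Recompute each centroid as its cluster mean; keep the old one if empty.
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    labels = [min(range(k), key=lambda i: math.dist(v, centroids[i]))
              for v in vectors]
    return labels, centroids

# Two well-separated toy "document embeddings" (hypothetical data).
docs = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
labels, _ = kmeans(docs, k=2)
```

Swapping the toy vectors for TF-IDF or BERT representations leaves the clustering step unchanged, which is why the paper can compare the two representations under the same algorithms.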