Explainable clustering: Methods, challenges, and future opportunities

Authors: Ridhwan Dewoprabowo, Lim Yohanes Stefanus, Ari Saptawijaya
Information
Journal: Journal of Intelligent Systems
Publisher: De Gruyter (Walter de Gruyter GmbH)
Volume & Issue: Vol. 34, Issue 1
Pages: 20240477
Publication Year: 2025
ISSN: 2191-026X
Source Type: Google Scholar
Abstract
In recent years, artificial intelligence (AI) has increasingly relied on subsymbolic techniques such as machine learning (ML). Despite their widespread use, these techniques often lack transparency, leading to potential distrust. The field of eXplainable artificial intelligence (XAI) addresses this issue by making intelligent systems observable, explainable, and accountable. While much research has focused on explainability in supervised learning, there is a growing need to explore it in unsupervised settings, especially given the challenge of high volumes of unlabeled data. Clustering is an unsupervised ML strategy that groups data based on similarity. However, its reasoning often lacks transparency. This article reviews state-of-the-art explainable and/or interpretable clustering methods, categorizing them based on explanation generation techniques and highlighting the importance of making clustering results …
© 2025 Universitas Indonesia. All rights reserved.