CycleGAN for day-to-night image translation: a comparative study
Authors: Muhammad Feriansyah Raihan Taufiq, Laksmita Rahadianti
Information
Journal: IAES International Journal of Artificial Intelligence
Publisher: Institute of Advanced Engineering and Science
Volume & Issue: Vol. 14, Issue 3
Pages: 2347-2357
Publication Year: 2025
ISSN: 2089-4872
Source Type: Google Scholar
Abstract
Computer vision tasks often fail when applied to night images, because the models are usually trained on clear daytime images only. This creates the need to augment the training data with more nighttime images to increase robustness. In this study, we consider day-to-night image translation using both traditional image processing approaches and deep learning models. This study employs a hybrid framework of traditional image processing followed by a CycleGAN-based deep learning model for day-to-night image translation. We then conduct a comparative study on various generator architectures in our CycleGAN model. This research compares four different CycleGAN models, i.e., the original CycleGAN, a feature pyramid network (FPN) based CycleGAN, the original U-Net vision transformer based UVCGAN, and a modified UVCGAN with an additional edge loss. The experimental results show that the original UVCGAN obtains a Fréchet inception distance (FID) score of 16.68 and a structural similarity index measure (SSIM) of 0.42, leading in terms of FID. Meanwhile, FPN-CycleGAN obtains an FID score of 104.46 and an SSIM score of 0.44, leading in terms of SSIM. Considering FPN-CycleGAN's poor FID score and visual observation, we conclude that UVCGAN is more effective in generating synthetic nighttime images.
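The SSIM metric used above compares luminance, contrast, and structure statistics between two images. As a rough illustration of the underlying formula (not the paper's evaluation code, which is not given here), the sketch below computes a simplified single-window SSIM in NumPy; the function name `ssim_global` and the synthetic test images are assumptions for the example. The standard metric additionally applies a sliding Gaussian window, so a library implementation such as `skimage.metrics.structural_similarity` should be preferred in practice.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    # Simplified SSIM computed over the whole image at once
    # (no sliding window), following the standard formula:
    # SSIM = (2*mu_x*mu_y + C1)(2*cov_xy + C2)
    #        / ((mu_x^2 + mu_y^2 + C1)(var_x + var_y + C2))
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

rng = np.random.default_rng(0)
day = rng.random((64, 64))                       # stand-in "daytime" image
noisy = np.clip(day + 0.2 * rng.standard_normal((64, 64)), 0.0, 1.0)

score_same = ssim_global(day, day)    # identical images score 1.0
score_noisy = ssim_global(day, noisy) # degraded image scores lower
print(score_same, score_noisy)
```

An SSIM of 1.0 means the images are identical; scores such as the 0.42-0.44 reported in the abstract indicate only partial structural agreement between the translated night image and the reference, which is why the authors weigh FID and visual inspection alongside it.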
Documents & Links
