
Comparison of the VGG16 and ResNet34 Convolutional Neural Networks for a Bottle Waste Classification System

1Department of Computer Science, Universitas Brawijaya, Indonesia

2Faculty of Computer Science, Universitas Brawijaya, Jl. Veteran No. 8, Ketawanggede, Kec. Lowokwaru, Kota Malang 65145, Indonesia

Received: 7 Jan 2021; Published: 31 Jan 2022.
Open Access Copyright (c) 2021 Jurnal Teknologi dan Sistem Komputer under http://creativecommons.org/licenses/by-sa/4.0.

Abstract
Almost all packaged beverage bottles in circulation are made of plastic, because plastic is cheap and easy to mold. Plastic is a non-organic material that is difficult to decompose, so plastic bottles can cause environmental pollution. An effective solution is therefore needed to address the environmental damage caused by plastic bottle waste. One possible solution is to classify and recycle plastic bottle waste: plastic and non-plastic bottle waste is sorted into the required categories and then recycled so that it can be reprocessed without harming the environment. This article proposes deep-learning models based on the VGG16 and ResNet34 Convolutional Neural Network (CNN) architectures to identify and classify bottle waste. In the tests, the VGG16 architecture achieved an accuracy of 90% and ResNet34 an accuracy of 50% on the classification of plastic versus non-plastic bottles. Each architecture was trained for 10 epochs with a batch size of 32 on 1655 images.
Keywords: classification; CNN; VGG16; ResNet34
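
The article does not include code; as a rough illustration of the reported setup (VGG16 vs. ResNet34, two classes, 10 epochs, batch size 32), a minimal PyTorch/torchvision training sketch might look like the following. The dataset path, image transforms, optimizer, and learning rate are assumptions, not taken from the article.

```python
# Hypothetical sketch comparing VGG16 and ResNet34 on a two-class bottle
# dataset (plastic vs. non-plastic). Only the epoch count (10), batch size
# (32), and class count come from the abstract; everything else is assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed folder layout: bottle_dataset/train/<class_name>/<image>.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),   # input size expected by VGG16/ResNet34
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("bottle_dataset/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

def train(model: nn.Module, epochs: int = 10) -> nn.Module:
    """Train one architecture with the settings reported in the abstract."""
    model = model.to(DEVICE)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr
    for epoch in range(epochs):
        running_loss, correct, total = 0.0, 0, 0
        for images, labels in train_loader:
            images, labels = images.to(DEVICE), labels.to(DEVICE)
            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item() * images.size(0)
            correct += (outputs.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
        print(f"epoch {epoch + 1}: loss={running_loss / total:.4f} "
              f"acc={correct / total:.2%}")
    return model

# Two-class output heads, as in the plastic / non-plastic bottle task.
for name, net in [("VGG16", models.vgg16(num_classes=2)),
                  ("ResNet34", models.resnet34(num_classes=2))]:
    print(f"--- training {name} ---")
    train(net, epochs=10)
```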


