Tea clone classification using deep CNN with residual and dense connections

*Ade Ramdan  -  Lembaga Ilmu Pengetahuan Indonesia, Indonesia
Vicky Zilvan  -  Lembaga Ilmu Pengetahuan Indonesia, Indonesia
Endang Suryawati  -  Lembaga Ilmu Pengetahuan Indonesia, Indonesia
Hilman F. Pardede  -  Lembaga Ilmu Pengetahuan Indonesia, Indonesia
Vitria Puspitasari Rahadi  -  Pusat Penelitian Teh dan Kina, Indonesia
Received: 2 Jun 2020; Revised: 16 Sep 2020; Accepted: 13 Oct 2020; Published: 31 Oct 2020; Available online: 19 Oct 2020.
Open Access. Copyright (c) 2020 Jurnal Teknologi dan Sistem Komputer, licensed under CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0).

Article Info
Section: Original Research Articles
Language: ID
Abstract
Tea clones of the Gambung series are superior tea varieties with high productivity and quality. Smallholder farmers usually plant these clones together in the same area. However, because each clone differs in productivity and quality, the production quality of such a mixed area is difficult to predict. To make the clones in an area uniform, smallholder farmers still need experts to identify each plant, since the clones share very similar visual characteristics. To tackle this problem, we propose a tea clone identification system using a deep CNN with skip connections, i.e., residual connections and dense connections. Our study shows that the performance of the proposed method is affected by the hyperparameter settings and by the method used to combine feature maps. Regarding the combining method, concatenation in a densely connected network performs better than summation in a residually connected network.
Keywords: Gambung tea clone; deep CNN; skip connection; densely connected networks; residually connected networks
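The two combining methods compared in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `conv_block` is a hypothetical placeholder for a convolutional transform, and the channel axis is assumed to be axis 0.

```python
import numpy as np

def conv_block(x):
    # Stand-in for a convolutional transform F(x); here a simple
    # shape-preserving element-wise operation for illustration.
    return np.tanh(x)

def residual_combine(x):
    # Residual connection: feature maps are combined by SUMMATION,
    # so F(x) must have the same shape as x.
    return conv_block(x) + x

def dense_combine(x):
    # Dense connection: feature maps are combined by CONCATENATION
    # along the channel axis, so later layers receive the feature
    # maps of all preceding layers.
    return np.concatenate([x, conv_block(x)], axis=0)

x = np.ones((4, 8, 8))            # 4 channels, 8x8 feature map
print(residual_combine(x).shape)  # (4, 8, 8): channel count unchanged
print(dense_combine(x).shape)     # (8, 8, 8): channels grow per block
```

The shape difference is the practical distinction: summation keeps the channel count fixed, while concatenation grows it with every block, which is why densely connected networks reuse features across depth.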
