- M. S. Haq and K. Karyudi, “Upaya peningkatan produksi teh (Camellia sinensis (L.) O. Kuntze) melalui penerapan kultur teknis,” Warta PPTK, vol. 24, no. 1, pp. 71-84, 2013.
- M. S. Haq and A. I. Mastur, “The growth of seedlings generated from cleft grafting of several superior tea clones,” Journal of Industrial and Beverage Crops, vol. 5, no. 3, pp. 105-112, 2018. doi: 10.21082/jtidp.v5n3.2018.p105-112.
- B. Sriyadi, “Penilaian hubungan genetika klon teh berdasarkan komponen senyawa kimia utama dan potensi hasil,” Jurnal Penelitian Teh dan Kina, vol. 18, no. 1, pp. 1-10, 2015.
- H. Mawarti and R. Ratnawati, “Penghambatan peningkatan kadar kolesterol pada diet tinggi lemak oleh epigallocatechin gallate (EGCG) teh hijau klon Gmb4,” Prosiding Seminas Competitive Advantage, vol. 1, no. 2, pp. 1-5, 2012.
- Pusat Penelitian Teh dan Kina, “Klon GMB 1-11,” [Online]. Available: https://www.gamboeng.com/pages/detail/2015/59/146. [Accessed: Apr. 8, 2020].
- A. R. Pathak, M. Pandey, and S. Rautaray, “Application of deep learning for object detection,” Procedia Computer Science, vol. 132, pp. 1706-1717, 2018. doi: 10.1016/j.procs.2018.05.144.
- Y. Sun, Y. Liu, G. Wang, and H. Zhang, “Deep learning for plant identification in natural environment,” Computational Intelligence and Neuroscience, vol. 2017, 7361042, 2017. doi: 10.1155/2017/7361042.
- C. Szegedy et al., “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, Jun. 2015, pp. 1-9. doi: 10.1109/CVPR.2015.7298594.
- A. Ramdan et al., “Deep CNN detection for tea clone identification,” Jurnal Elektronika dan Telekomunikasi, vol. 19, no. 2, pp. 45-50, 2019. doi: 10.14203/jet.v19.45-50.
- N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 56, pp. 1929-1958, 2014.
- B. Wu, Z. Liu, Z. Yuan, G. Sun, and C. Wu, “Reducing overfitting in deep convolutional neural networks using redundancy regularizer,” in International Conference on Artificial Neural Networks, Alghero, Italy, Sept. 2017, pp. 49-55. doi: 10.1007/978-3-319-68612-7_6.
- W. Liu, Y. Zhang, X. Li, Z. Liu, B. Dai, T. Zhao, and L. Song, “Deep hyperspherical learning,” in 31st International Conference on Neural Information Processing Systems, California, USA, Dec. 2017, pp. 3953-3963.
- B. Barz and J. C. Denzler, “Deep learning on small datasets without pre-training using cosine loss,” in IEEE Winter Conference on Applications of Computer Vision, Snowmass Village, USA, Mar. 2020, pp. 1360-1369. doi: 10.1109/WACV45572.2020.9093286.
- X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, May 2010, pp. 249-256.
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, Jun. 2016, pp. 770-778. doi: 10.1109/CVPR.2016.90.
- A. E. Orhan and X. Pitkow, “Skip connections eliminate singularities,” in International Conference on Learning Representations, Vancouver, Canada, May 2018, pp. 1-22.
- H. Wu, J. Zhang, and C. Zong, “An empirical exploration of skip connections for sequential tagging,” 2016, arXiv:1610.03167.
- T. Raiko, H. Valpola, and Y. LeCun, “Deep learning made easier by linear transformations in perceptrons,” in Fifteenth International Conference on Artificial Intelligence and Statistics, La Palma, Spain, Apr. 2012, pp. 924-932.
- A. Graves, “Generating sequences with recurrent neural networks,” 2013, arXiv:1308.0850v5.
- G. Huang, Z. Liu, L. V. D. Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, Jul. 2017, pp. 2261-2269. doi: 10.1109/CVPR.2017.243.
- W. Ma, Q. Yang, Y. Wu, W. Zhao, and X. Zhang, “Double-branch multi-attention mechanism network for hyperspectral image classification,” Remote Sensing, vol. 11, no. 11, 1307, 2019. doi: 10.3390/rs11111307.
- K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European Conference on Computer Vision, Amsterdam, Netherlands, Oct. 2016, pp. 630-645.
- W. Rawat and Z. Wang, “Deep convolutional neural networks for image classification: a comprehensive review,” Neural Computation, vol. 29, no. 9, pp. 2352-2449, 2017. doi: 10.1162/neco_a_00990.
- W. Ng, B. Minasny, M. Montazerolghaem, J. Padarian, R. Ferguson, S. Bailey, and A. B. McBratney, “Convolutional neural network for simultaneous prediction of several soil properties using visible/near-infrared, mid-infrared, and their combined spectra,” Geoderma, vol. 352, pp. 251-267, 2019. doi: 10.1016/j.geoderma.2019.06.016.
- D. P. Kingma and J. L. Ba, “Adam: a method for stochastic optimization,” in International Conference on Learning Representations, San Diego, USA, May 2015, pp. 1-15.
- P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, Jul. 2017, pp. 5967-5976. doi: 10.1109/CVPR.2017.632.
- K. Xu et al., “Show, attend and tell: neural image caption generation with visual attention,” in 32nd International Conference on Machine Learning, Lille, France, Jul. 2015, pp. 2048-2057.
- N. S. Keskar and R. Socher, “Improving generalization performance by switching from Adam to SGD,” 2017, arXiv:1712.07628.
- S. Merity, N. S. Keskar, and R. Socher, “Regularizing and optimizing LSTM language models,” 2017, arXiv:1708.02182.
- A. C. Wilson, R. Roelofs, M. Stern, N. Srebro, and B. Recht, “The marginal value of adaptive gradient methods in machine learning,” 2017, arXiv:1705.08292.
- J. Lee, T. Won, T. K. Lee, H. Lee, G. Gu, and K. Hong, “Compounding the performance improvements of assembled techniques in a convolutional neural network,” 2020, arXiv:2001.06268.
- Y. Yamada, M. Iwamura, T. Akiba, and K. Kise, “ShakeDrop regularization for deep residual learning,” IEEE Access, vol. 7, pp. 186126-186136, 2019. doi: 10.1109/ACCESS.2019.2960566.
- J. Guo and S. Gould, “Depth dropout: efficient training of residual convolutional neural networks,” in International Conference on Digital Image Computing: Techniques and Applications, Gold Coast, Australia, Dec. 2016, pp. 1-7. doi: 10.1109/DICTA.2016.7797032.
- H. Wang, G. Wang, G. Li, and L. Lin, “CamDrop: a new explanation of dropout and a guided regularization method for deep neural networks,” in 28th ACM International Conference on Information and Knowledge Management, New York, USA, Nov. 2019, pp. 1141-1149. doi: 10.1145/3357384.3357999.
- G. Ghiasi, T. Lin, and Q. V. Le, “DropBlock: a regularization method for convolutional networks,” 2018, arXiv:1810.12890.
- M. Mezzini, “Empirical study on label smoothing in neural networks,” in International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, Prague, Czech Republic, Jun. 2018, pp. 200-205. doi: 10.24132/CSRN.2018.2802.25.
- M. Goibert and E. Dohmatob, “Adversarial robustness via label-smoothing,” 2019, arXiv:1906.11567.
- C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. B. Wojna, “Rethinking the inception architecture for computer vision,” in IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, Jun. 2016, pp. 2818-2826. doi: 10.1109/CVPR.2016.308.
- S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, “CutMix: regularization strategy to train strong classifiers with localizable features,” in IEEE/CVF International Conference on Computer Vision, Seoul, South Korea, Nov. 2019, pp. 6022-6031. doi: 10.1109/ICCV.2019.00612.
Last update: 2021-03-07 13:53:46
Copyright (c) 2020 Jurnal Teknologi dan Sistem Komputer
Starting in 2021, the author(s) whose article is published in JTSiskom retain the copyright to their article. By submitting the manuscript to JTSiskom, the author(s) agree to this policy. No separate approval document is required.
The author(s) guarantee that their article is original, written by the mentioned author(s), has never been published before, does not contain statements that violate the law, does not violate the rights of others, is subject to copyright that is held exclusively by the author(s), and is free from the rights of third parties, and that the necessary written permission to quote from other sources has been obtained by the author(s).
The author(s) retain all rights to the published work, such as (but not limited to) the following rights:
- Copyright and other proprietary rights related to the article, such as patent rights,
- The right to use the substance of the article in their own future works, including lectures and books,
- The right to reproduce the article for their own purposes,
- The right to self-archive the article (please read our deposit policy), and
- The right to enter into separate, additional contractual arrangements for the non-exclusive distribution of the published version of the article (for example, posting it to an institutional repository or publishing it in a book), with acknowledgment of its initial publication in this journal (Journal of Technology and Computer Systems).
If the article was prepared jointly by more than one author, each author submitting the manuscript warrants that all co-authors have given them permission to agree to the copyright and license notices (agreements) on their behalf, and agrees to inform the co-authors of the terms of this policy. JTSiskom will not be held responsible for anything arising from internal disputes among the authors. JTSiskom will communicate only with the corresponding author.
Authors should also understand that, once published, their articles (and any additional files, including data sets and analysis/computation data) will become publicly available. Published articles (and additional data) are governed by the Creative Commons Attribution-ShareAlike 4.0 International License. JTSiskom allows users to copy, distribute, display, and perform the work under this license. Users must attribute the author(s) and JTSiskom when distributing the work in journals and other publication media. Unless otherwise stated, the article becomes publicly available as soon as it is published.