
Real-time currency recognition on video using AKAZE algorithm

Department of Software Engineering, Faculty of Informatics, Institut Teknologi Telkom Purwokerto. Jl. D. I. Panjaitan No. 128, Purwokerto, Jawa Tengah 53147, Indonesia

Received: 3 Nov 2020; Revised: 7 Jul 2021; Accepted: 18 Jul 2021; Published: 31 Oct 2021.
Open Access. Copyright (c) 2021 The Authors. Published by the Department of Computer Engineering, Universitas Diponegoro.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Abstract
Currency recognition is an essential task, since people in every country must handle money, and computer vision systems have been developed to perform it automatically. One common approach uses the SIFT algorithm. Its recognition results are highly accurate, but its processing time is considerable, which makes it impractical for real-time data such as video. The AKAZE algorithm was developed for real-time processing because of its fast computation on video frames. This study proposes a faster real-time currency recognition system for video using the AKAZE algorithm and compares the SIFT and AKAZE algorithms on real-time video data in terms of F1 score and processing speed. In the experiments, the AKAZE algorithm achieves an F1 score of 0.97 and processes each video frame in 0.251 seconds, while at the same video resolution the SIFT algorithm achieves an F1 score of 0.65 and takes 0.305 seconds per frame. These results show that the AKAZE algorithm is both faster and more accurate for processing video data.
Keywords: currency recognition; SIFT algorithm; AKAZE algorithm; real-time video data
Funding: Institut Teknologi Telkom Purwokerto
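
As a rough illustration of the pipeline described in the abstract, the sketch below detects AKAZE and SIFT keypoints on each video frame with OpenCV, matches them to a reference banknote image using a brute-force matcher with Lowe's ratio test, and times the per-frame processing. The template and video file names, the ratio and match-count thresholds, and the matcher choice are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): comparing AKAZE and SIFT keypoint
# matching against a reference banknote image, one video frame at a time.
# File names, the ratio threshold, and the "recognized" rule are assumptions.
import time
import cv2 as cv

reference = cv.imread("banknote_template.jpg", cv.IMREAD_GRAYSCALE)  # assumed template

detectors = {
    "AKAZE": cv.AKAZE_create(),  # binary MLDB descriptors
    "SIFT": cv.SIFT_create(),    # float descriptors
}

def match_frame(name, detector, frame_gray, ref_desc, ratio=0.7):
    """Detect keypoints in one frame and count Lowe-ratio matches to the template."""
    kp, desc = detector.detectAndCompute(frame_gray, None)
    if desc is None or len(kp) < 2:
        return 0
    # Binary descriptors (AKAZE) use Hamming distance; float descriptors (SIFT) use L2.
    norm = cv.NORM_HAMMING if name == "AKAZE" else cv.NORM_L2
    matcher = cv.BFMatcher(norm)
    matches = matcher.knnMatch(ref_desc, desc, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)

cap = cv.VideoCapture("input_video.mp4")  # assumed video source
for name, det in detectors.items():
    _, ref_desc = det.detectAndCompute(reference, None)
    cap.set(cv.CAP_PROP_POS_FRAMES, 0)   # rewind so both detectors see the same frames
    times, positives = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
        t0 = time.perf_counter()
        good = match_frame(name, det, gray, ref_desc)
        times.append(time.perf_counter() - t0)
        if good >= 10:                    # illustrative threshold for "currency recognized"
            positives += 1
    avg = sum(times) / max(len(times), 1)
    print(f"{name}: {positives} positive frames, {avg:.3f} s/frame on average")
cap.release()
```

Averaging the per-frame times over the whole video yields a figure comparable to the per-frame speeds reported in the abstract (0.251 s for AKAZE, 0.305 s for SIFT), and the per-frame decisions can be scored against ground truth to compute precision, recall, and F1.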


