Perbandingan Kinerja Block Storage Ceph dan ZFS di Lingkungan Virtual

Performance Comparison of Ceph and ZFS Block Storage in Virtual Environment

*Faza Abdani Auni Robbi - Department of Computer Engineering, Universitas Diponegoro, Indonesia
Agung Budi Prasetijo - Department of Computer Engineering, Universitas Diponegoro, Indonesia
Eko Didik Widianto - Department of Computer Engineering, Universitas Diponegoro, Indonesia
Received: 29 Nov 2018; Revised: 10 Jan 2019; Accepted: 30 Jan 2019; Published: 31 Jan 2019; Available online: 31 Mar 2019.
Open Access Copyright (c) 2019 Jurnal Teknologi dan Sistem Komputer
Abstract
The growth of data demands better performance from storage systems. This study compares the block storage performance of Ceph and ZFS running in virtual environments. Tests measured IOPS, CPU usage, throughput, OLTP database performance, replication time, and data integrity. Testing used two server nodes, each with a default storage-system configuration and virtualized with Proxmox. ZFS achieved higher read and write performance than Ceph in IOPS, CPU usage, throughput, OLTP workload, and data-replication duration, except for CPU usage during write operations. The results are intended as a reference for selecting a storage system for data-center applications.
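As an illustration of the IOPS metric compared above, the sketch below times synchronous 4 KiB random writes and reports operations per second. This is a minimal Python probe, not the tooling used in the study; dedicated benchmarks such as fio additionally control caching, queue depth, and run time.

```python
import os
import random
import tempfile
import time

def measure_write_iops(path, block_size=4096, file_size=4 * 1024 * 1024, ops=100):
    """Rough synchronous 4 KiB random-write IOPS probe (illustrative only)."""
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.ftruncate(fd, file_size)  # preallocate the test file
        offsets = [random.randrange(0, file_size // block_size) * block_size
                   for _ in range(ops)]
        start = time.perf_counter()
        for off in offsets:
            os.pwrite(fd, buf, off)  # write one block at a random offset
            os.fsync(fd)             # force each write to stable storage
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return ops / elapsed

with tempfile.TemporaryDirectory() as d:
    iops = measure_write_iops(os.path.join(d, "probe.dat"))
    print(f"~{iops:.0f} synchronous 4 KiB write IOPS")
```

Pointing `path` at a file on a Ceph RBD or ZFS-backed volume inside a guest gives a rough feel for the synchronous write IOPS the paper compares.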
Keywords
block storage performance; Ceph; Proxmox virtual environment; ZFS

