TY - GEN
T1 - Benchmarking Distributed Transactional Database Systems
AU - He, Hailin
AU - Weng, Siyang
AU - Zeng, Lingyang
AU - Zhang, Huidong
AU - Zhang, Rong
AU - Cai, Peng
AU - Zhou, Xuan
AU - Xu, Quanqing
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - With their attractive characteristics of scalability, strong consistency, and high availability, distributed databases have attracted considerable attention. Moreover, application-oriented database development has driven the rapid evolution of diverse distributed databases. Consequently, there is a growing need for more precise and comprehensive evaluations to guide the selection and deployment of distributed databases. Although many benchmarks already exist, we observe that they often fall short in addressing the technical challenges posed by distributed systems, particularly in providing quantitative workload control for fair comparison. To address these gaps, we identify five critical evaluation scenarios, focusing on distributed transaction processing, dynamic data scheduling, distributed lock management, distributed clock management, and fault recovery. Based on this analysis, we design and implement Sherry, an evaluation benchmark tool specifically tailored to these scenarios. Through extensive experiments on OceanBase, we demonstrate Sherry’s effectiveness in assessing OceanBase’s key designs and optimizations, which target the inherent challenges of distributed transaction processing. Our findings validate Sherry’s feasibility as a robust benchmarking tool in this domain.
AB - With their attractive characteristics of scalability, strong consistency, and high availability, distributed databases have attracted considerable attention. Moreover, application-oriented database development has driven the rapid evolution of diverse distributed databases. Consequently, there is a growing need for more precise and comprehensive evaluations to guide the selection and deployment of distributed databases. Although many benchmarks already exist, we observe that they often fall short in addressing the technical challenges posed by distributed systems, particularly in providing quantitative workload control for fair comparison. To address these gaps, we identify five critical evaluation scenarios, focusing on distributed transaction processing, dynamic data scheduling, distributed lock management, distributed clock management, and fault recovery. Based on this analysis, we design and implement Sherry, an evaluation benchmark tool specifically tailored to these scenarios. Through extensive experiments on OceanBase, we demonstrate Sherry’s effectiveness in assessing OceanBase’s key designs and optimizations, which target the inherent challenges of distributed transaction processing. Our findings validate Sherry’s feasibility as a robust benchmarking tool in this domain.
KW - Benchmarking
KW - Concurrency Control
KW - Distributed Database Systems
UR - https://www.scopus.com/pages/publications/105004252583
U2 - 10.1007/978-981-96-5032-3_3
DO - 10.1007/978-981-96-5032-3_3
M3 - Conference contribution
AN - SCOPUS:105004252583
SN - 9789819650316
T3 - Lecture Notes in Computer Science
SP - 37
EP - 53
BT - Benchmarking, Measuring, and Optimizing - 16th BenchCouncil International Symposium, Bench 2024, Revised Selected Papers
A2 - Lin, Weiwei
A2 - Jia, Zhen
A2 - Hunold, Sascha
A2 - Kang, Guoxin
PB - Springer Science and Business Media Deutschland GmbH
T2 - 16th BenchCouncil International Symposium on Benchmarking, Measuring, and Optimizing, Bench 2024
Y2 - 4 December 2024 through 6 December 2024
ER -