TY - GEN
T1 - Constructing Benchmarks for Open Source Ecosystems
T2 - 4th BenchCouncil International Symposium on Intelligent Computers, Algorithms, and Applications, IC 2024
AU - Zhang, Zhen
AU - Wang, Wei
AU - You, Lan
AU - Han, Fanyu
AU - Cui, Haibo
AU - Jin, Hong
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - As open source technology increasingly influences the advancement of worldwide innovation and the digital marketplace, the heterogeneity and intricacy of its ecosystem have also escalated, posing new challenges for benchmark construction tailored to open source environments. Traditional benchmarking approaches are often confined to specific static environments, assessing the effectiveness of methodologies through measured metrics. However, in open source contexts characterized by high degrees of freedom, strong fluidity, and broad interaction, the community dynamically generates new real-world needs daily. Consequently, benchmarks developed in static environments fail to effectively measure the value of methodologies in open source scenarios. In response, this paper advocates a method of constructing open source benchmarks driven by the actual needs of stakeholders within the open source ecosystem, who frequently articulate their needs through the challenges they encounter. By identifying and understanding these needs, open source benchmarks can be designed to closely align with the actual demands of stakeholders, thereby enhancing their relevance and effectiveness. This approach not only facilitates the direct application of benchmark results in real-world scenarios for immediate feedback and verification, but also enhances the authenticity and credibility of the evaluation outcomes. We propose a process for constructing open source benchmarks: identifying the real needs of stakeholders within the open source ecosystem, translating these needs into specific tasks, creating benchmarks around these tasks, and applying the benchmark results to practical scenarios. This process allows open source benchmarks to continuously adjust and improve based on real feedback, supporting the sustainable development of the open source ecosystem. It thereby fosters a beneficial cycle of synergy between open source benchmarks and stakeholder needs, ensuring that benchmarks originate from, and are tailored to meet, real-world needs.
AB - As open source technology increasingly influences the advancement of worldwide innovation and the digital marketplace, the heterogeneity and intricacy of its ecosystem have also escalated, posing new challenges for benchmark construction tailored to open source environments. Traditional benchmarking approaches are often confined to specific static environments, assessing the effectiveness of methodologies through measured metrics. However, in open source contexts characterized by high degrees of freedom, strong fluidity, and broad interaction, the community dynamically generates new real-world needs daily. Consequently, benchmarks developed in static environments fail to effectively measure the value of methodologies in open source scenarios. In response, this paper advocates a method of constructing open source benchmarks driven by the actual needs of stakeholders within the open source ecosystem, who frequently articulate their needs through the challenges they encounter. By identifying and understanding these needs, open source benchmarks can be designed to closely align with the actual demands of stakeholders, thereby enhancing their relevance and effectiveness. This approach not only facilitates the direct application of benchmark results in real-world scenarios for immediate feedback and verification, but also enhances the authenticity and credibility of the evaluation outcomes. We propose a process for constructing open source benchmarks: identifying the real needs of stakeholders within the open source ecosystem, translating these needs into specific tasks, creating benchmarks around these tasks, and applying the benchmark results to practical scenarios. This process allows open source benchmarks to continuously adjust and improve based on real feedback, supporting the sustainable development of the open source ecosystem. It thereby fosters a beneficial cycle of synergy between open source benchmarks and stakeholder needs, ensuring that benchmarks originate from, and are tailored to meet, real-world needs.
KW - Benchmark
KW - Needs-Driven
KW - Open Source Ecosystem
KW - Sustainable Development
UR - https://www.scopus.com/pages/publications/105006896475
U2 - 10.1007/978-981-96-6310-1_13
DO - 10.1007/978-981-96-6310-1_13
M3 - Conference contribution
AN - SCOPUS:105006896475
SN - 9789819663095
T3 - Communications in Computer and Information Science
SP - 184
EP - 198
BT - Intelligent Computers, Algorithms, and Applications - 4th BenchCouncil International Symposium, IC 2024, Revised Selected Papers
A2 - Luo, Chunjie
A2 - Li, Weiping
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 4 December 2024 through 6 December 2024
ER -