TY - GEN
T1 - I/O scheduling with mapping cache awareness for flash based storage systems
AU - Ji, Cheng
AU - Wu, Chao
AU - Chang, Li Pin
AU - Shi, Liang
AU - Xue, Chun Jason
N1 - Publisher Copyright:
© 2016 ACM.
PY - 2016/10/1
Y1 - 2016/10/1
N2 - NAND flash memory has been the default storage component in mobile systems. One of the key technologies for flash management is the address mapping scheme between logical addresses and physical addresses, which deals with the inability of in-place updating in flash memory. Demand-based page-level mapping cache is often applied to match the cache size constraint and performance requirement of mobile storage systems. However, recent studies showed that the management overhead of mapping cache schemes is sensitive to the host I/O patterns, especially when the mapping cache is small. This paper presents a novel I/O scheduling scheme, called MAP, to alleviate this problem. The proposed scheduling approach reorders I/O requests for performance improvement from two angles: prioritizing the requests that will hit in the mapping cache, and grouping requests with related logical addresses into large batches. Experimental results show that MAP improved upon traditional I/O schedulers by 30% and 8% in terms of read and write latencies, respectively.
AB - NAND flash memory has been the default storage component in mobile systems. One of the key technologies for flash management is the address mapping scheme between logical addresses and physical addresses, which deals with the inability of in-place updating in flash memory. Demand-based page-level mapping cache is often applied to match the cache size constraint and performance requirement of mobile storage systems. However, recent studies showed that the management overhead of mapping cache schemes is sensitive to the host I/O patterns, especially when the mapping cache is small. This paper presents a novel I/O scheduling scheme, called MAP, to alleviate this problem. The proposed scheduling approach reorders I/O requests for performance improvement from two angles: prioritizing the requests that will hit in the mapping cache, and grouping requests with related logical addresses into large batches. Experimental results show that MAP improved upon traditional I/O schedulers by 30% and 8% in terms of read and write latencies, respectively.
UR - https://www.scopus.com/pages/publications/84995538915
U2 - 10.1145/2968478.2968503
DO - 10.1145/2968478.2968503
M3 - Conference contribution
AN - SCOPUS:84995538915
T3 - Proceedings of the 13th International Conference on Embedded Software, EMSOFT 2016
BT - Proceedings of the 13th International Conference on Embedded Software, EMSOFT 2016
PB - Association for Computing Machinery, Inc
T2 - 13th International Conference on Embedded Software, EMSOFT 2016
Y2 - 1 October 2016 through 7 October 2016
ER -