TY - GEN
T1 - ChameleonAPI
T2 - 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024
AU - Liu, Yuhan
AU - Wan, Chengcheng
AU - Du, Kuntai
AU - Hoffmann, Henry
AU - Jiang, Junchen
AU - Lu, Shan
AU - Maire, Michael
N1 - Publisher Copyright:
© OSDI 2024. All rights reserved.
PY - 2024
Y1 - 2024
N2 - ML APIs have greatly relieved application developers of the burden to design and train their own neural network models—classifying objects in an image can now be as simple as one line of Python code to call an API. However, these APIs offer the same pre-trained models regardless of how their output is used by different applications. This can be suboptimal, as not all ML inference errors cause application failures, and the distinction between inference errors that can or cannot cause failures varies greatly across applications. To tackle this problem, we first study 77 real-world applications, which collectively use six ML APIs from two providers, to reveal common patterns of how ML API output affects applications’ decision processes. Inspired by the findings, we propose ChameleonAPI, an optimization framework for ML APIs, which takes effect without changing the application source code. ChameleonAPI provides application developers with a parser that automatically analyzes the application to produce an abstract of its decision process, which is then used to devise an application-specific loss function that only penalizes API output errors critical to the application. ChameleonAPI uses the loss function to efficiently train a neural network model customized for each application and deploys it to serve API invocations from the respective application via the existing interface. Compared to a baseline that selects the best-of-all commercial ML API, we show that ChameleonAPI reduces incorrect application decisions by 43%.
UR - https://www.scopus.com/pages/publications/85201308308
M3 - Conference paper
AN - SCOPUS:85201308308
T3 - Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024
SP - 365
EP - 386
BT - Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2024
PB - USENIX Association
Y2 - 10 July 2024 through 12 July 2024
ER -