TY - GEN
T1 - Rapid Diffusion
T2 - 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
AU - Liu, Bingyan
AU - Lin, Weifeng
AU - Duan, Zhongjie
AU - Wang, Chengyu
AU - Wu, Ziheng
AU - Zhang, Zipeng
AU - Jia, Kui
AU - Jin, Lianwen
AU - Chen, Cen
AU - Huang, Jun
N1 - Publisher Copyright:
© ACL 2023. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Text-to-Image Synthesis (TIS) aims to generate images based on textual inputs. Recently, several large pre-trained diffusion models have been released to create high-quality images with pre-trained text encoders and diffusion-based image synthesizers. However, popular diffusion-based models from the open-source community cannot support industrial domain-specific applications due to the lack of entity knowledge and low inference speed. In this paper, we propose Rapid Diffusion, a novel framework for training and deploying super-resolution, text-to-image latent diffusion models with rich injected entity knowledge and optimized networks. Furthermore, we employ BladeDISC, an end-to-end Artificial Intelligence (AI) compiler, and FlashAttention techniques to optimize computational graphs of the generated models for online deployment. Experiments verify the effectiveness of our approach in terms of image quality and inference speed. In addition, we present industrial use cases and integrate Rapid Diffusion into an AI platform to show its practical value.
AB - Text-to-Image Synthesis (TIS) aims to generate images based on textual inputs. Recently, several large pre-trained diffusion models have been released to create high-quality images with pre-trained text encoders and diffusion-based image synthesizers. However, popular diffusion-based models from the open-source community cannot support industrial domain-specific applications due to the lack of entity knowledge and low inference speed. In this paper, we propose Rapid Diffusion, a novel framework for training and deploying super-resolution, text-to-image latent diffusion models with rich injected entity knowledge and optimized networks. Furthermore, we employ BladeDISC, an end-to-end Artificial Intelligence (AI) compiler, and FlashAttention techniques to optimize computational graphs of the generated models for online deployment. Experiments verify the effectiveness of our approach in terms of image quality and inference speed. In addition, we present industrial use cases and integrate Rapid Diffusion into an AI platform to show its practical value.
UR - https://www.scopus.com/pages/publications/85172170810
U2 - 10.18653/v1/2023.acl-industry.28
DO - 10.18653/v1/2023.acl-industry.28
M3 - Conference contribution
AN - SCOPUS:85172170810
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 295
EP - 304
BT - Industry Track
PB - Association for Computational Linguistics (ACL)
Y2 - 9 July 2023 through 14 July 2023
ER -