Bang Xiao 肖 棒

Hi 👋 I am Bang Xiao, an undergraduate student majoring in Computer Science at Shanghai Jiao Tong University, supervised by Cewu Lu and Yonglu Li. I am also a member of the Zhiyuan Honors Program.

Currently, I am working on Human-Object Interaction (HOI) and Human Motion Generation. I also have a strong interest in representation learning, self-supervised learning, and 4D.

My long-term goal is to build intelligent systems that can perceive, reason about, and interact with the outside world.


Education
  • Shanghai Jiao Tong University
    B.S. in Computer Science, Zhiyuan Honors Program
    Sep. 2023 - Present
  • No.1 Middle School Affiliated to Central China Normal University
    High School
    Sep. 2020 - Jun. 2023
Experience
  • SJTU MVIG Lab
    Supervisor: Cewu Lu and Yonglu Li
    Research Intern
    Dec. 2024 - Present
  • SJTU EPIC Lab
    Supervisor: Linfeng Zhang
    Research Intern
    Aug. 2024 - Dec. 2024
Publications
Token Pruning for Caching Better: 9$\times$ Acceleration on Stable Diffusion without Training

Evelyn Zhang*, Bang Xiao*, Fufu Yu, Jiayi Tang, Chang Zou, Ke Yan, Shouhong Ding, Qianli Ma, Fei Ren, Linfeng Zhang# (* equal contribution, # corresponding author)

Submitted to ICCV 2025

Building on token pruning and layer caching, we present a training-free Stable Diffusion acceleration method named dynamics-aware token pruning (DaTo). On COCO-30k, we observed a 7$\times$ acceleration coupled with a notable FID reduction of 2.17.


All publications