Invited Talks

    • 09 / 2024: Practical Backdoor Attacks in Computer Vision, Bosch AI Center
    • 09 / 2024: Responsible Text-to-Image Generation, Karlsruhe Institute of Technology - KIT
    • 09 / 2024: Responsible Generative AI: Ensuring Safety in Text-to-Image and Image-to-Text Generation, Munich Center for Machine Learning - MCML
    • 09 / 2024: Toward the Risks Brought by Visual Input into Multimodal LLMs, MaiNLP at LMU Munich [Link]
    • 09 / 2024: Responsible Generative AI, Inria - DFKI European Summer School [Link]
    • 04 / 2024: Responsible Generative AI, China Society of Image and Graphics (CSIG)
    • 12 / 2023: Reliable Visual Perception, Tsinghua University
    • 11 / 2023: SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness, University of California, Santa Barbara
    • 10 / 2023: Reliable Visual Perception, University of Toronto
    • 07 / 2023: Reliable Visual Perception, CFAR Rising Star Lecture Series [Link]
    • 05 / 2023: The Safety of Image Semantic Segmentation Systems, Turing AI Fellowship Event
    • 03 / 2023: On the Safety of Semantic Segmentation, Five AI
    • 02 / 2023: Robustness of Vision Systems, Sheffield Lab, Noah's Ark Lab
    • 12 / 2022: Robustness of Sparse and Dense Predictions of DNNs, University of Surrey
    • 06 / 2022: Explainability of Visual Recognition Models, DataFun
    • 02 / 2022: Explainability and Robustness of Deep Visual Classification Models, University of Tübingen
    • 11 / 2021: On the Robustness of Vision Transformer to Patch Perturbation, Google Brain
    • 09 / 2021: Adversarial Training on Semantic Segmentation, University of Oxford
    • 01 / 2021: Knowledge Distillation meets NAS and Self-supervised Learning, SmartThing [Link]
    • 08 / 2020: Explaining Individual CNN-based Image Classifications, Chinese University of Hong Kong