Abstract
Webex link: https://asmeet.webex.com/asmeet/j.php?MTID=md8a3a5c6b99080a4deafa6c4fbcc9295
Time: Thursday, December 14, 2023, 10:30 AM | 2 hours | (UTC+08:00) Taipei
Meeting number: 2518 299 6126
Password: JYiCkmMW642
Deep Neural Networks (DNNs) are vulnerable to adversarial examples: carefully crafted perturbations can easily fool DNNs into making wrong predictions. DNNs also generalize poorly under domain shifts, suffering performance degradation when they encounter data from new visual distributions. We view both issues through the lens of robustness, whose absence limits the deployment of DNNs in broader real-world applications. This talk discusses the robustness of DNN-based computer vision approaches. The first part focuses on robustifying DNNs against adversarial examples, examining adversarial robustness from four aspects: novel attacks, empirical defenses, generalizable defenses, and defenses designed specifically for less-explored tasks. The second part focuses on improving robustness against domain shifts via domain adaptation, covering two important settings: unsupervised domain adaptation and source-free domain adaptation. We also explore the intersection of adversarial robustness and domain adaptation. Our research aims at more robust, reliable, and trustworthy computer vision.
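For readers new to adversarial examples, the sketch below shows the Fast Gradient Sign Method (FGSM), one canonical attack of the kind the talk surveys. It is a minimal PyTorch illustration under toy assumptions: the linear model, epsilon value, and random data are placeholders, not the speaker's actual setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    # Craft x_adv = x + epsilon * sign(grad_x loss): a tiny step that maximally
    # increases the classification loss, often enough to flip the prediction.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid image range

# Toy usage: a placeholder classifier and a random "image" batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(1, 3, 32, 32), torch.tensor([3])
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude is bounded by epsilon

On the domain adaptation side, a common source-free baseline adapts a pre-trained model to unlabeled target data by minimizing prediction entropy at test time (in the spirit of TENT-style methods). Again, the optimizer, learning rate, and batch below are illustrative assumptions rather than the speaker's specific approach.

def entropy_minimization_step(model: nn.Module, x_target: torch.Tensor,
                              optimizer: torch.optim.Optimizer) -> float:
    # One adaptation step on an unlabeled target batch: push the model toward
    # confident (low-entropy) predictions using no source data and no labels.
    probs = model(x_target).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

# Reuse the placeholder model above as a stand-in for a source-pretrained network.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x_target = torch.rand(8, 3, 32, 32)  # unlabeled target-domain batch
print(entropy_minimization_step(model, x_target, optimizer))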
Bio
Shao-Yuan Lo is a Research Scientist at Honda Research Institute USA. He received his Ph.D. in Electrical and Computer Engineering from Johns Hopkins University in 2023 and was a Research Intern at Amazon during the summers of 2021 and 2022. Before that, he received his M.S. and B.S. degrees from National Chiao Tung University in 2019 and 2017, respectively. His recent research focuses on LLM-based visual understanding, model adaptability, and adversarial robustness. He has first-authored more than ten publications in refereed conferences and journals, including IEEE/CVF CVPR, IEEE T-PAMI, IEEE T-IP, and IEEE/RSJ IROS. He won the Best Paper Award at ACM Multimedia Asia 2019 and the 2019 IPPR Best Master Thesis Award.