Zongbo Han

Assistant Professor, Beijing University of Posts and Telecommunications

Zongbo Han is an Assistant Professor at Beijing University of Posts and Telecommunications. He has published dozens of papers at top international conferences and in leading journals, including ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, and the IEEE Transactions series, and has served as a reviewer for leading venues such as IEEE TPAMI and NeurIPS. His papers have been cited more than 2,000 times on Google Scholar and have been positively recognized by internationally renowned scholars, including members of the U.S. National Academies and the European Academy of Sciences. His honors include the Tencent Rhino-Bird Elite Researcher Award, a Best Youth Paper nomination at the World Artificial Intelligence Conference (WAIC), and the First Prize of the Tianjin Natural Science Award. He has also led a fundamental research project for doctoral students funded by the National Natural Science Foundation of China.

Topic

Uncertainty Modeling: Towards Reliable Artificial Intelligence

Reliability has become a core bottleneck limiting the large-scale deployment of artificial intelligence. Given its profound impact on public interests and national security, improving AI reliability is not only a key frontier research direction but has also been elevated to a strategic priority in the national development plans of multiple countries. Among the many factors affecting reliability, the ability to handle uncertainty is critical; authorities including U.S. National Academy members and Nobel laureates in Economics have identified it as a central challenge for AI development. Recent studies show that existing AI models fall significantly short of accurately representing real-world uncertainty, which poses serious challenges to their safe and effective use in complex real-world scenarios. This talk presents methods that enhance AI reliability by quantifying and reducing uncertainty.

Outline

Uncertainty is a key factor limiting the reliability of deep learning models. Accurately quantifying uncertainty allows a model to assess the confidence of its own predictions, a self-awareness capability of "knowing what it knows and recognizing what it does not know." This capability is crucial for downstream decision-making systems, as it helps anticipate and detect potential risk scenarios, thereby improving overall performance and safety. This talk focuses on uncertainty quantification and reduction in deep learning. First, it explores calibration methods for predictive uncertainty, aiming to align confidence estimates more closely with true probabilities. Next, it systematically decomposes uncertainty into three core sources: aleatoric uncertainty arising from inherent data noise, distributional uncertainty caused by out-of-distribution samples, and epistemic uncertainty stemming from model limitations. Finally, the talk presents strategies to mitigate each of these three types of uncertainty, with the goal of significantly enhancing the reliability of AI models.
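As a minimal illustration of the kind of decomposition the outline describes (this is a standard entropy-based sketch using a deep ensemble, an assumption for illustration; the speaker's actual methods may differ), total predictive uncertainty can be split into an aleatoric part (expected per-member entropy) and an epistemic part (disagreement between members, i.e. the mutual information):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(member_probs):
    """Split predictive uncertainty from an ensemble.

    member_probs: array of shape (n_members, n_samples, n_classes),
        the softmax outputs of each ensemble member.
    Returns (total, aleatoric, epistemic), each of shape (n_samples,):
        total     = H[mean over members of p]
        aleatoric = mean over members of H[p]
        epistemic = total - aleatoric  (mutual information, >= 0)
    """
    mean_p = member_probs.mean(axis=0)               # (n_samples, n_classes)
    total = entropy(mean_p)                          # entropy of the averaged prediction
    aleatoric = entropy(member_probs).mean(axis=0)   # expected per-member entropy
    epistemic = total - aleatoric                    # disagreement between members
    return total, aleatoric, epistemic

# Toy example: three hypothetical ensemble members, two samples, three classes.
probs = np.array([
    [[0.80, 0.10, 0.10], [0.40, 0.30, 0.30]],
    [[0.70, 0.20, 0.10], [0.10, 0.60, 0.30]],
    [[0.90, 0.05, 0.05], [0.30, 0.20, 0.50]],
])
total, alea, epi = decompose_uncertainty(probs)
```

In the toy data, the members agree on the first sample (low epistemic uncertainty) but peak on different classes for the second (high epistemic uncertainty), matching the intuition that model disagreement signals "not knowing."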

© boolan.com (Boolan). All rights reserved.
