
Yan Pang, Ph.D., is an Associate Professor at Guangzhou University. He earned his doctoral degree from the University of Colorado, USA, and, prior to his present position, taught at the Metropolitan State University of Denver and the University of Colorado Denver. He also gained industry experience as a Senior Machine Learning Engineer at Moffett AI, a well-known Silicon Valley company. His research centers on computer vision, where he conducts systematic theoretical research and practical applications, particularly in image segmentation, human pose estimation, behavior recognition and analysis, graph neural networks, and model compression. Dr. Pang's contributions have been applied in diverse sectors such as medicine, agriculture, and security, making a substantial impact on their intelligent transformation.
Embodied intelligence is reshaping surgical robotics by tightly coupling perception, decision-making, and dexterous action within the physical constraints of the operating room. This talk surveys cutting-edge advances that move beyond preprogrammed automation toward robots that can interpret multimodal intraoperative signals, reason under uncertainty, and adapt their motions in real time. We highlight progress in foundation-model–driven scene understanding, vision–language–action policies for tool and tissue interaction, and learning-based control that integrates tactile/force sensing with kinematic and visual feedback. Special attention is given to safety- and reliability-critical methods, including constraint-aware planning, uncertainty estimation, and verification-oriented training that reduce unexpected behaviors during delicate maneuvers. We also discuss system-level innovations enabling low-latency closed-loop operation, such as edge acceleration, modular policy architectures, and simulation-to-real transfer supported by physics-informed priors and domain adaptation.
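To make the safety-critical side concrete, one recurring pattern in constraint-aware control is a safety layer that filters a learned policy's commanded motion before execution. The following is a minimal sketch, not any specific surgical system: the speed cap, keep-out sphere, and all numeric limits are illustrative assumptions.

```python
import numpy as np

# Hypothetical limits for a tool tip (units: mm, mm/s) -- illustrative only.
MAX_SPEED = 5.0                                   # hard cap on commanded speed
KEEP_OUT_CENTER = np.array([0.0, 0.0, 10.0])      # forbidden region, e.g. a vessel
KEEP_OUT_RADIUS = 3.0

def safety_filter(pos, vel_cmd, dt=0.01):
    """Project a policy's commanded velocity onto a safe set:
    clip speed, then remove any component moving into the keep-out sphere."""
    vel = np.asarray(vel_cmd, dtype=float)
    speed = np.linalg.norm(vel)
    if speed > MAX_SPEED:
        vel = vel * (MAX_SPEED / speed)           # speed clipping
    to_center = KEEP_OUT_CENTER - pos
    dist = np.linalg.norm(to_center)
    if dist < KEEP_OUT_RADIUS + np.linalg.norm(vel) * dt:
        direction = to_center / dist
        inward = np.dot(vel, direction)
        if inward > 0:                            # moving toward forbidden region
            vel = vel - inward * direction        # cancel the inward component
    return vel
```

Such a layer is deliberately simple and verifiable, which is the point of verification-oriented designs: whatever the upstream policy proposes, the executed command provably respects the constraints.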

Dr. Dongyang Kuang is an Associate Professor at the School of Mathematics (Zhuhai), Sun Yat-sen University. His recent research focuses mainly on mathematical data analysis, modeling, and interpretable algorithms and applications in interdisciplinary sciences. Dr. Kuang is currently funded by the National Natural Science Foundation of China and has participated in research projects funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the U.S. Department of Energy (DOE). He has published research papers in academic journals such as Pattern Recognition, SIAM Journal on Imaging Sciences, Thermochimica Acta, Applied Intelligence, and Physics of Plasmas, as well as in international academic conferences and workshops organized by MICCAI and IEEE. Dr. Kuang also serves as a reviewer for multiple journals, including IEEE TMI, IEEE JBHI, CMIG, MSLT, and JNE.
Emotion recognition from electroencephalogram (EEG) signals has emerged as a critical challenge in affective computing and brain-computer interfaces. Traditional approaches often overlook the complex causal relationships and information flow patterns between brain regions. In this presentation, we introduce a framework that integrates Liang-Kleeman information flow theory with graph neural networks (GNNs) for enhanced emotion recognition from EEG signals. The Liang-Kleeman information flow quantifies the rigorous causality between different EEG channels, capturing the directional information transfer across brain regions. By constructing dynamic brain connectivity graphs based on these causal relationships, our approach leverages GNNs to model spatial-temporal dependencies inherent in emotional states. We demonstrate that incorporating physically meaningful causal structures significantly improves classification accuracy compared to conventional correlation-based methods. Experimental results on benchmark EEG emotion datasets validate the effectiveness of the approach, showing superior performance in distinguishing different emotional states. This work bridges theoretical causality analysis and deep learning, offering new insights into emotion-related brain dynamics and advancing practical emotion recognition systems.
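The core ingredient — a directed causality matrix over EEG channels — can be sketched with the bivariate sample-covariance estimator of the Liang information flow. This is an illustrative implementation of that published estimator, not the speakers' code; the thresholding and the GNN that would consume the resulting adjacency are omitted.

```python
import numpy as np

def liang_info_flow(x1, x2, dt=1.0):
    """Estimated rate of information flow from series x2 to x1, following the
    bivariate maximum-likelihood estimator of Liang (2014). A magnitude near
    zero indicates no detectable causal influence of x2 on x1."""
    dx1 = (x1[1:] - x1[:-1]) / dt                 # Euler-forward derivative of x1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(x1, x2)
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.cov(x1, dx1)[0, 1]                  # cov(x1, dx1/dt)
    c2d1 = np.cov(x2, dx1)[0, 1]                  # cov(x2, dx1/dt)
    det = c11**2 * c22 - c11 * c12**2
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / det

def causality_adjacency(eeg, thresh=0.0):
    """Directed adjacency over channels: A[i, j] = |flow from channel j to i|.
    This weighted digraph is what a GNN would take as its brain-connectivity graph."""
    n = eeg.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = abs(liang_info_flow(eeg[i], eeg[j]))
    A[A < thresh] = 0.0
    return A
```

Because the flow is asymmetric, the resulting graph is directed — which is precisely the structure that correlation-based connectivity cannot provide.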

Dr. Xinguang Cui is an associate professor at Huazhong University of Science and Technology, China, where he leads the Laboratory of Respiratory Multiphase Flows. He received his bachelor's degree from Beihang University, China, in 2005 and his master's degree from Tsinghua University, China, in 2008. In 2012, he received his Ph.D. in fluid mechanics from the University of Heidelberg, Germany. He subsequently conducted research on numerical modeling in fluid mechanics, atmospheric science, and bioenergy at the University of Heidelberg, Nanyang Technological University, and Lawrence Berkeley National Laboratory until 2019. Presently, he is interested in applying numerical methods to industrial and scientific problems in the medical industry, indoor environments, and aerospace, especially those related to respiratory multiphase flows. To date, he has published more than 50 peer-reviewed journal papers and more than 20 conference papers.
A deep understanding of the physical mechanisms of inhaled particle dynamics in respiratory airflow is crucial for preventing viral transmission and reducing the hazards of air pollutants. In this study, multiphase large-eddy simulations were employed to quantify how tidal breathing and compliant airway walls collaboratively regulate the dynamics of micron-sized particles. Wall kinematics and transient respiration flow waveforms were derived from clinical data. The Euler-Lagrangian framework was enhanced through regularized variational subgrid closure and stochastic reconstruction of unresolved velocity fluctuations. To further elucidate these complex interactions, data-driven approaches such as machine learning (ML) can be utilized to predict particle deposition patterns based on multiphysics simulation data, providing a powerful tool for forecasting individual exposure risks. The results reveal that the characteristics of respiratory turbulence are significantly influenced by elastic airway wall motion, while their topological structure exhibits remarkable frequency robustness. Additionally, airflow variations induced by airway wall motion lead to unexpected deposition hotspots for small micron particles (1 and 5 µm), with regional deposition efficiencies in the distal airways increasing by up to 85 times. Importantly, particles inhaled at different moments experience varied final fates, and elastic airway wall motion delays the peak deposition timing for these small micron particles. Although respiratory frequency influences deposition intensity, the fundamental deposition patterns remain robust across different frequencies. Furthermore, developing AI surrogate models trained on simulation results holds promise for rapidly and accurately predicting particle fate across diverse physiological scenarios.
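At the heart of any Euler-Lagrangian treatment of micron-sized particles is the point-particle equation of motion under drag from the local fluid velocity. The sketch below integrates the Stokes-drag form for a 5 µm particle; the material properties are illustrative textbook values, not parameters from the study, and the full simulations add the subgrid and stochastic terms described above.

```python
import numpy as np

# Illustrative properties for a 5 micron water droplet in air (SI units).
d_p   = 5e-6        # particle diameter (m)
rho_p = 1000.0      # particle density (kg/m^3)
mu    = 1.8e-5      # dynamic viscosity of air (Pa*s)
g     = np.array([0.0, 0.0, -9.81])

# Particle relaxation time: how fast the particle adjusts to the local flow.
tau_p = rho_p * d_p**2 / (18.0 * mu)

def step_particle(x, v, u_fluid, dt):
    """One explicit Euler step of the point-particle equation
    dv/dt = (u - v)/tau_p + g  (Stokes drag plus gravity)."""
    a = (u_fluid - v) / tau_p + g
    v_new = v + dt * a
    x_new = x + dt * v_new
    return x_new, v_new
```

Because tau_p is tens of microseconds for these sizes, the particle velocity relaxes almost instantly to the airflow, which is why wall-motion-induced changes in the flow translate so directly into the deposition hotspots reported above.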

Yali Yuan (Member, IEEE) received the M.Sc. degree from Lanzhou University, Lanzhou, China, in 2015, and the Ph.D. degree from the University of Göttingen, Germany, in 2018, where she is currently working as a Postdoctoral Fellow. Apart from her individual research work, she is also responsible for preparing project applications, mentoring Ph.D. candidates and bachelor's/master's students, teaching, conducting seminars, and handling industrial partners. Her research interests include various topics related to wireless networks and security.
As large language models (LLMs) enter real-world workflows, their value is gradually shifting from one-shot generation toward sustained support for end-to-end processes. This shift has driven the rapid development of LLM-based agents and multi-agent systems. Overall, existing research has made substantial progress in improving the automation and reusability of collaboration, and has increasingly elevated the question of how models are organized and invoked to a level comparable to that of model capability itself.
However, as multi-agent systems are applied to larger-scale, more complex, and more security-sensitive tasks, a new class of systemic bottlenecks has begun to emerge. This talk reviews and analyzes the development trajectory of LLM-based multi-agent research from a system-level perspective, and argues that current limitations in performance and robustness do not primarily stem from the reasoning ability of individual agents or from topology design techniques, but rather from the long-standing assumption of sequential communication and execution. To overcome these scalability bottlenecks, it is necessary to move beyond “optimizing a better DAG” and instead rethink the communication and execution paradigm itself.
Motivated by this perspective, we introduce our work, MPAS, which draws inspiration from message passing in graph representation learning and reformulates multi-agent collaboration from a sequential dialogue process into a node-wise parallel procedure of message generation, aggregation, and synchronous state updates. By removing the reliance on topological ordering, MPAS opens up a broader design space for general communication topologies and stronger scalability and robustness.
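The round structure of such a scheme can be sketched in a few lines. This is a schematic of synchronous, node-wise message passing in the GNN spirit described above, not the MPAS implementation: `generate`, `aggregate`, and `update` are hypothetical stand-ins that a real system would back with LLM calls.

```python
def run_rounds(states, neighbors, generate, aggregate, update, rounds=2):
    """states: dict agent_id -> state; neighbors: dict agent_id -> list of ids.
    Each round, every agent emits a message from its current state; then every
    agent aggregates its neighbors' messages and updates its state. There is
    no topological ordering -- all agents proceed in parallel per round."""
    for _ in range(rounds):
        messages = {a: generate(s) for a, s in states.items()}      # node-wise
        states = {
            a: update(states[a], aggregate([messages[n] for n in neighbors[a]]))
            for a in states
        }
    return states
```

For example, with numeric states, identity messages, sum aggregation, and additive updates on a fully connected triangle, two rounds drive all three agents to the same state — the synchronous update means no agent ever waits on another, which is what removes the sequential bottleneck.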

Peng Jiang is an Associate Professor in the Department of Industrial Engineering and Management, Business School, Sichuan University. He obtained his PhD from Shanghai Jiao Tong University (SJTU) and participated in a joint PhD program at the National University of Singapore (NUS). Prior to joining his current affiliation, he successively held positions as a Postdoctoral Fellow at SJTU, Research Fellow at the NUS Environmental Research Institute (NERI), and Research Scientist at A*STAR's Institute of High Performance Computing, Singapore. His research focuses on systems modeling and intelligent algorithms. He has published in journals including Engineering, Environmental Science & Technology, Water Research, European Journal of Operational Research, and Decision Support Systems. His work has been cited in 27 international policy documents, including reports by the United Nations (IPCC Climate Change Reports), the World Bank, the WHO, and the United Nations Environment Programme. He has served as co-chair and session chair for several international conferences, and was ranked among the world's top 2% of scientists by Stanford University (2024-2025).
Artificial intelligence generates spatiotemporal data across numerous domains, including the emerging field of waste segregation. Reliable spatiotemporal data are crucial for analyzing household waste segregation behaviors. However, collected and assessed data may exhibit deviations due to multidimensional uncertainties, such as policy changes, environmental variations, influences from surrounding areas, and miscalculations by monitoring systems. Traditional data correction methods often rely on correlated variable data or apply only to small-scale, dense datasets. To address these limitations, we developed a correction approach supported by spatiotemporal Bayesian machine learning modeling. The approach encompasses the assessment of household participation in waste segregation, the correction of its deviations, and the precise identification of deviated areas and time periods. We applied the approach to spatiotemporal data from a megacity, validating its effectiveness. This study provides valuable decision support and methodological references for waste segregation management in megacities.
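The underlying correction idea — pull noisy area-level observations toward a Bayesian prior, and flag areas whose raw values are implausible under it — can be illustrated with a toy beta-binomial shrinkage. This is a didactic sketch, not the speakers' model: the Beta(2, 2) prior and the z-score flagging rule are assumptions for illustration.

```python
import numpy as np

def correct_rates(participating, households, alpha=2.0, beta=2.0, z=2.0):
    """Beta-binomial shrinkage for area-level participation rates.
    - Corrected rate: posterior mean, pulling each raw rate k/n toward the
      Beta(alpha, beta) prior; small areas are shrunk more.
    - Deviation flag: areas whose observed count falls more than z standard
      deviations from the count expected under the prior mean rate."""
    k = np.asarray(participating, dtype=float)
    n = np.asarray(households, dtype=float)
    p0 = alpha / (alpha + beta)                   # prior mean rate
    post_mean = (k + alpha) / (n + alpha + beta)  # corrected (shrunken) rate
    expected = n * p0
    flag = np.abs(k - expected) > z * np.sqrt(n * p0 * (1 - p0))
    return post_mean, flag
```

The real spatiotemporal model additionally borrows strength across neighboring areas and time periods, but the shrink-then-flag logic is the same.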

Dr. Yang Li is an Associate Professor at the School of Electronic Science and Engineering, Nanjing University. He received his Ph.D. from Nanjing University and conducted postdoctoral research at Northwestern University, USA. His research focuses on computer vision for 3D environmental perception and understanding, with applications in robotics and autonomous driving. His key contributions include efficient stereo matching algorithms for video, real-time 3D reconstruction on embedded platforms, LiDAR point cloud processing, and visual SLAM.
Accurate 360° depth perception is crucial for real-world applications such as autonomous vehicles and robotics, which rely on surround-view fisheye cameras. This lecture presents a novel framework that addresses the key challenge of missing semantic reference in omnidirectional depth estimation. Our method innovatively fuses variance-based geometric constraints with mean-based semantic features to build a robust matching volume. We further introduce a multi-task network that jointly predicts the depth map and the central panorama, enabling explicit semantic guidance that compensates for the missing semantic reference.
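The variance/mean fusion can be sketched as follows, assuming per-view features already warped to a set of depth hypotheses; the tensor shapes and concatenation-based fusion are illustrative assumptions, not the network's actual design.

```python
import numpy as np

def build_matching_volume(warped_feats):
    """warped_feats: (V, D, C, H, W) -- features from V fisheye views,
    pre-warped to D depth hypotheses. The variance across views encodes
    geometric (photometric-consistency) cost, while the mean preserves
    semantic content; fusing both yields a (2C, D, H, W) matching volume."""
    mean = warped_feats.mean(axis=0)              # (D, C, H, W) semantic part
    var = warped_feats.var(axis=0)                # (D, C, H, W) geometric part
    vol = np.concatenate([var, mean], axis=1)     # (D, 2C, H, W)
    return np.transpose(vol, (1, 0, 2, 3))        # (2C, D, H, W)
```

At the correct depth hypothesis the warped views agree, so the variance channels approach zero while the mean channels still carry appearance information for the semantic branch.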

Guanghui Zhu is a Tenure-track Assistant Professor in the School of Computer Science at Nanjing University. He received his Ph.D. from Nanjing University in 2020. He has been named an Outstanding Doctor by the Jiangsu Computer Society, selected for the Jiangsu Province "Innovation and Entrepreneurship Doctor" program, and awarded the Huawei "Spark Award". He also serves as Secretary-General of the Jiangsu Computer Society Big Data Committee, Chair of CCF YOCSEF Nanjing, and a member of the Youth Editorial Board of BDMA. His research interests include big data and intelligent computing, automated machine learning, and LLM fine-tuning. He has published more than 30 papers in leading conferences and journals, such as ICML, NeurIPS, ICLR, ACM SIGKDD, and IEEE TKDE. He has won 9 international awards in AutoML competitions organized by top international AI conferences such as NeurIPS and KDD, and won the gold award at the 5th China International College Students' "Internet+" Innovation and Entrepreneurship Competition.
With the rapid development of big data, large-scale computing, and foundation models, the AI learning paradigm has shifted from domain-specific model training to pre-trained foundation models with domain task fine-tuning. Building and deploying domain-specific large models has become a key trend for real-world AI applications, yet it remains challenging due to high technical barriers and heavy reliance on expert knowledge.
This talk explores AutoLM (Automated Large Models) as an extension of AutoML (Automated Machine Learning), aiming to automate the construction and deployment of domain-specific large models. We focus on automatic prompt optimization and parameter-efficient fine-tuning, enabling more efficient and intelligent large-model deployment. Our goal is to reduce development costs and accelerate the practical adoption of large models in industrial domains.
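Parameter-efficient fine-tuning is easiest to see through the low-rank adapter (LoRA) idea: freeze the pretrained weight W and learn only a rank-r update BA. The numpy sketch below shows the mechanics; the initialization scheme and hyperparameters are standard LoRA conventions, not details of any system discussed in the talk.

```python
import numpy as np

class LoRALinear:
    """y = x @ W + (alpha/r) * x @ A @ B.
    W (d_in x d_out) is frozen; only A (d_in x r) and B (r x d_out) train,
    cutting trainable parameters from d_in*d_out down to r*(d_in + d_out)."""
    def __init__(self, W, r=4, alpha=8.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                   # frozen base weight
        self.A = rng.normal(0.0, 0.01, (W.shape[0], r))
        self.B = np.zeros((r, W.shape[1]))           # zero init: adapter starts
        self.scale = alpha / r                       # as an exact no-op

    def __call__(self, x):
        return x @ self.W + self.scale * (x @ self.A @ self.B)
```

Because B starts at zero, the adapted layer initially reproduces the base model exactly, and fine-tuning only ever moves the small factors — for a 16x8 layer with r=4, that is 96 trainable parameters instead of 128, and the savings grow quadratically with layer width.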