MSCS 2024 Technical Program (in Japanese)
A floor plan is available here.
Information for speakers
Each presentation is assigned a 20-minute time slot, which as a standard consists of a 15-minute talk and a 5-minute discussion.
No PC is provided in the session room. The projector in the session room has VGA (15-pin D-sub) and HDMI ports. Please bring your own PC and an appropriate adaptor. It is recommended to test the connection between your PC and the projector before the session.
Program at a Glance

| Date | Session | Room | Time |
|---|---|---|---|
| Monday, March 18, 2024 | 1M1 Intelligent Control | Room 1, 218 | 9:30-11:30 |
| | ISCS Plenary Lecture: Coordination of heterogeneous multi-agent systems via blended dynamics theorem | Lecture Room 257, School of Economics | 12:40-13:30 |
| | 1A1 Control Application | Room 1, 218 | 14:00-16:00 |
| Tuesday, March 19, 2024 | 2M1 Optimization and Optimal Control | Room 1, 218 | 9:30-11:30 |
| | 2A1 Control Theory | Room 1, 218 | 12:40-15:00 |
| Wednesday, March 20, 2024 | 3M1 Modeling, System Identification and Estimation | Room 1, 218 | 10:40-12:40 |
| | ISCS Plenary Lecture: Learning-based decision and control systems for high-level self-driving intelligence | Lecture Room 257, School of Economics | 13:50-14:40 |
ISCS Plenary Lecture (Monday): Coordination of heterogeneous multi-agent systems via blended dynamics theorem — Hyungbo Shim

Heterogeneous multi-agent systems can achieve approximate consensus through various methods, such as a large coupling gain, the signum function, frequent updates, and a nonlinear funnel gain. When diverse dynamics are compelled to synchronize under a coupling condition, they exhibit emergent behavior characterized by blended dynamics. This phenomenon is crucial in designing heterogeneous multi-agent systems, where each agent collaborates towards a unified goal. Such multi-agent systems or algorithms benefit from stability exchange among agents, an initialization-free nature allowing seamless integration or disengagement of agents, and resilience to production flaws and disturbances in node dynamics. Additionally, for each agent to undertake distinct tasks while aligning with others for a collective objective, certain internal variables of each agent must reach consensus, and the agreed-upon value should reflect some information about the individual agents. We substantiate this by demonstrating that the internal variable acts as the Lyapunov function for decentralized control and as the Lagrange multiplier in the distributed optimization of resource allocation problems. We also discuss adaptation of node dynamics to achieve zero-error consensus.
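As a rough, standalone illustration of the emergent behavior described in the abstract (not taken from the lecture), the sketch below simulates three heterogeneous scalar agents under strong all-to-all diffusive coupling and compares their final states with the averaged "blended" vector field. The node dynamics, coupling gain, and step sizes are assumptions chosen only for this example.

```python
# Minimal numerical sketch, assuming three heterogeneous scalar agents
#   x_i' = f_i(x_i) + k * sum_j (x_j - x_i)
# on a complete graph. For a large coupling gain k, all trajectories are
# expected to stay close to the blended dynamics s' = (1/N) * sum_i f_i(s).
import numpy as np

fs = [lambda x: -x + 2.0,          # agent 1 alone would settle at  2.0
      lambda x: -2.0 * x + 1.0,    # agent 2 alone would settle at  0.5
      lambda x: -0.5 * x - 1.0]    # agent 3 alone would settle at -2.0
N, k, dt, steps = len(fs), 100.0, 1e-3, 20_000

x = np.array([2.0, -1.0, 0.5])     # heterogeneous initial states
s = x.mean()                       # state of the blended-dynamics reference

for _ in range(steps):
    coupling = x.sum() - N * x                  # sum_j (x_j - x_i) for each i
    x = x + dt * (np.array([f(xi) for f, xi in zip(fs, x)]) + k * coupling)
    s = s + dt * np.mean([f(s) for f in fs])    # averaged (blended) vector field

print("agent states :", np.round(x, 3))   # nearly synchronized despite different f_i
print("blended state:", round(float(s), 3))
```

With these illustrative parameters the agents agree on a value near the equilibrium of the blended dynamics (about 0.57), which none of the individual node dynamics possesses on its own.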
Hyungbo Shim received his B.S., M.S., and Ph.D. degrees from Seoul National University, Korea, and held a post-doctoral position at the University of California, Santa Barbara until 2001. He joined Hanyang University in Seoul in 2002. Since 2003, he has been with Seoul National University, Korea. He has served as an associate editor for Automatica, IEEE Transactions on Automatic Control, International Journal of Robust and Nonlinear Control, and European Journal of Control, as well as an editor for International Journal of Control, Automation, and Systems. He is serving as the general chair for the IFAC World Congress 2026. His research interests include stability analysis of nonlinear systems, observer design, disturbance observers, secure control systems, and synchronization in multi-agent systems. He is a senior member of IEEE and a member of the Korean Academy of Science and Technology.
ISCS Plenary Lecture (Wednesday): Learning-based decision and control systems for high-level self-driving intelligence — Keqiang Li

Today's autonomous driving systems face severe challenges in highly dynamic, random, and dense traffic scenarios. Existing hierarchical design methods, for example those combining rule-based decision-making with linear motion controllers, lack sufficient adaptability and flexibility. As a biologically inspired form of artificial intelligence, reinforcement learning (RL) is promising for providing self-evolving ability to automated cars and has the potential to generalize to unknown driving scenarios. This talk will discuss recent advances in learning-based autonomous driving systems for high-level self-driving intelligence. An interpretable and computationally efficient framework, called integrated decision and control (IDC), is proposed to fulfill more flexible functionality, in which the standard actor-critic architecture can be subtly utilized to train its decision and control neural networks. Some technical breakthroughs in safe reinforcement learning and high-fidelity simulation are also discussed for the purpose of training more accurate neural network controllers.
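For readers unfamiliar with the actor-critic architecture mentioned in the abstract, the following is a minimal one-step actor-critic sketch on a toy scalar regulation problem. It is not the IDC framework or its neural-network controllers; the toy plant, features, and learning rates are assumptions made purely for illustration.

```python
# Minimal one-step actor-critic sketch, assuming a Gaussian policy with a
# linear mean and a linear-in-features critic, on a toy scalar plant.
import numpy as np

rng = np.random.default_rng(0)
phi_pi = lambda s: np.array([s, 1.0])         # actor features (policy mean)
phi_v  = lambda s: np.array([s * s, s, 1.0])  # critic features (value function)
theta = np.zeros(2)                           # actor parameters
w = np.zeros(3)                               # critic parameters
sigma, gamma = 0.3, 0.95                      # exploration noise, discount
a_lr, c_lr = 1e-3, 1e-2                       # actor / critic step sizes

for episode in range(500):
    s = rng.uniform(-2.0, 2.0)                # random initial state
    for t in range(50):
        mu = theta @ phi_pi(s)
        a = mu + sigma * rng.standard_normal()          # sample Gaussian action
        s_next = 0.9 * s + 0.5 * a                      # toy linear plant
        r = -(s ** 2) - 0.01 * a ** 2                   # quadratic regulation cost
        delta = r + gamma * (w @ phi_v(s_next)) - w @ phi_v(s)   # TD error
        w += c_lr * delta * phi_v(s)                    # critic: TD(0) update
        theta += a_lr * delta * (a - mu) / sigma ** 2 * phi_pi(s)  # actor: policy gradient
        s = s_next

# A negative first component indicates the policy has learned stabilizing
# state feedback a ≈ theta[0] * s for this toy problem.
print("learned state-feedback gain:", round(float(theta[0]), 3))
```

The critic's temporal-difference error simultaneously drives the value estimate and the policy update, which is the core mechanism the abstract refers to when training decision and control networks with an actor-critic architecture.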
Dr. Keqiang Li is currently a professor at the School of Vehicle and Mobility, Tsinghua University. He is an Academician of the Chinese Academy of Engineering. He also serves as the director of the State Key Laboratory of Intelligent Green Vehicle and Mobility and as chief scientist of the National Innovation Center of Intelligent and Connected Vehicles. Dr. Li is an expert in the field of automotive intelligence. His main research areas include the dynamic design and intelligent control of driver assistance systems and autonomous driving systems. He has authored about 250 journal/conference papers and holds over 80 patents in and outside of China. He has worked in Japanese and German automotive companies and academic institutions for many years, including Tokyo University of Agriculture and Technology, The University of Tokyo, Aachen University of Technology, the National Traffic Safety & Environment Lab in Japan, and Isuzu Automobile Corp. Dr. Li is also a recipient of the Changjiang Scholar Program Professorship, the China National Technological Invention Award, and the China National Scientific and Technological Progress Award.