Document Type
Report
Publication Date
January 2024
Keywords
Reinforcement learning, multi-agent collaboration, emergency, airport evacuation
Abstract
This study adopted an Asynchronous Advantage Actor-Critic (A3C) algorithm to simulate the evacuation process under different situations (e.g., multiple agents and different environmental conditions), and the results were compared with Deep Q-Networks (DQN) to demonstrate the efficiency and effectiveness of the A3C algorithm in evacuation models. Results indicated that under static environments, A3C demonstrated superior adaptability and quicker response times. Furthermore, as the number of agents increased, A3C showed better scalability and robustness in managing complex interactions and produced quicker evacuations. These outcomes highlight A3C's advantage over traditional RL models under varying and challenging conditions. The report concludes with a discussion of the practical implications and benefits of these models, emphasizing their potential to enhance real-world evacuation planning and safety protocols.
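For readers unfamiliar with the method named in the abstract, the sketch below illustrates the advantage actor-critic update at the core of A3C. It is a minimal, single-worker simplification (true A3C runs several asynchronous workers that share one network), and the names `ActorCritic`, `a3c_update`, `obs_dim`, and the zero bootstrap value are illustrative assumptions, not details taken from the report or its datasets.

```python
# Minimal advantage actor-critic update (the core of A3C), sketched in PyTorch.
# Assumes a discrete-action environment such as a grid-based evacuation map;
# all class/function names here are hypothetical, not from the report.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    """Shared trunk with a policy head (actor) and a value head (critic)."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)  # action logits
        self.value = nn.Linear(hidden, 1)           # state value V(s)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.policy(h), self.value(h)

def a3c_update(model, optimizer, trajectory, gamma=0.99, beta=0.01):
    """One n-step update from a rollout of (obs, action, reward) tuples.

    For simplicity the rollout is assumed to end in a terminal state,
    so the bootstrap value is zero.
    """
    obs = torch.stack([t[0] for t in trajectory])
    actions = torch.tensor([t[1] for t in trajectory])
    rewards = [t[2] for t in trajectory]

    # Discounted returns, accumulated backwards through the rollout.
    R, returns = 0.0, []
    for r in reversed(rewards):
        R = r + gamma * R
        returns.insert(0, R)
    returns = torch.tensor(returns)

    logits, values = model(obs)
    values = values.squeeze(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs * log_probs.exp()).sum(-1).mean()

    # Advantage = return - baseline; the baseline gradient is stopped.
    advantages = returns - values.detach()
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantages).mean()
    value_loss = F.mse_loss(values, returns)

    loss = policy_loss + 0.5 * value_loss - beta * entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In full A3C, each worker collects its own rollout, computes this update against its local copy of the network, and applies the resulting gradients to a shared global model; DQN, by contrast, learns a value function from a replay buffer rather than updating a policy directly, which is the comparison the abstract draws.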
Recommended Citation
Zhou, Yujing, "Real-time Deep Reinforcement Learning for Evacuation Under Emergencies" (2024). Center for Advanced Transportation Mobility. 29.
https://digital.library.ncat.edu/catm/29
Dataset 1
NN accelerated GA (1).zip (40254 kB)
Dataset 2
Flight Rescheduling (1).zip (1244979 kB)
Dataset 3