Document Type

Report

Publication Date

1-2024

Keywords

Reinforcement learning, multi-agent collaboration, emergency, airport evacuation

Abstract

This study adopted the Asynchronous Advantage Actor-Critic (A3C) algorithm to simulate the evacuation process under different situations (e.g., multiple agents and different environmental conditions), and the results were compared with Deep Q-Networks (DQN) to demonstrate the efficiency and effectiveness of the A3C algorithm in evacuation models. Results indicated that in static environments, A3C demonstrated superior adaptability and quicker response times. Furthermore, as the number of agents increased, A3C showed better scalability and robustness in managing complex interactions and produced quicker evacuations. These outcomes highlight A3C's advantage over traditional RL models under varying and challenging conditions. The report concludes with a discussion of the practical implications and benefits of these models, emphasizing their potential to enhance real-world evacuation planning and safety protocols.
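For readers unfamiliar with A3C, the sketch below illustrates the kind of advantage actor-critic update that a single A3C worker performs on one rollout. It is a minimal, generic example and not the report's implementation: the network sizes, rollout length, observation dimensions, and reward values are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal advantage actor-critic loss of the kind
# used by A3C, computed for a single worker's rollout on dummy data.
# All sizes and values here are hypothetical, not taken from the report.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)  # actor head: action logits
        self.value = nn.Linear(hidden, 1)           # critic head: state value

    def forward(self, obs):
        h = self.body(obs)
        return self.policy(h), self.value(h).squeeze(-1)

def a3c_loss(model, obs, actions, rewards, gamma=0.99, beta=0.01):
    """Actor-critic loss for one rollout that ends in a terminal state."""
    logits, values = model(obs)
    # Discounted returns, computed backwards from the end of the rollout.
    returns = torch.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    advantages = returns - values.detach()
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantages).mean()                    # actor term
    value_loss = F.mse_loss(values, returns)                       # critic term
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()    # exploration bonus
    return policy_loss + 0.5 * value_loss - beta * entropy

# Dummy rollout: 5 steps, 8-dimensional observations, 4 discrete actions.
model = ActorCritic(obs_dim=8, n_actions=4)
obs = torch.randn(5, 8)
actions = torch.randint(0, 4, (5,))
rewards = torch.randn(5)
loss = a3c_loss(model, obs, actions, rewards)
loss.backward()  # in A3C, each worker's gradients are applied asynchronously to shared parameters
```

In the full algorithm, several such workers interact with their own copies of the environment in parallel and push gradients like these to a shared model, which is what gives A3C the scalability behavior discussed in the abstract.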
