Adaptive Federated Reinforcement Learning Framework for Secure and Efficient Traffic Management in VANET-Enabled Smart Cities
Abstract
The growing complexity of traffic management in Vehicular Ad Hoc Networks (VANETs) within smart cities demands systems that are flexible and secure while preserving accessibility and efficiency. Existing centralized approaches struggle with limited flexibility, communication delays, and data security. To address these issues, this study proposes an Adaptive Federated Reinforcement Learning (AFRL) framework that leverages deep reinforcement learning (DRL) for dynamic traffic optimization and federated training for privacy-preserving data processing across distributed VANET nodes. By training local models on vehicle data and extracting insights without compromising privacy, the framework adapts dynamically to real-time traffic conditions. An enhanced secure aggregation technique and an adaptive incentive strategy optimize traffic signal control and congestion management while addressing key challenges such as communication overhead, model convergence, and adversarial data attacks. In line with the primary objectives of VANETs in intelligent urban environments, the framework targets improved traffic flow, reduced delays, and lower collision risk. Evaluation results show that the AFRL framework is an adaptable and reliable solution for smart transportation planning in modern cities, achieving a 20% reduction in average vehicle waiting time, a 15% improvement in traffic efficiency, and greater resilience to adversarial attacks.
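The abstract describes the mechanism only at a high level. As a rough illustration of a federated DRL round of the kind it refers to (local learning on vehicle data, privacy-preserving aggregation of model updates), the sketch below assumes a linear Q-function per node, FedAvg-style sample-weighted averaging, and additive pairwise masks as a simplified stand-in for the secure aggregation step; all node counts, dimensions, and the toy reward are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' implementation): one federated round in
# which VANET nodes run local Q-learning on synthetic traffic transitions and
# a server combines their sample-weighted updates, FedAvg-style, with additive
# pairwise masks standing in for secure aggregation. All names, dimensions,
# and the toy reward below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_NODES = 4        # participating VANET nodes (e.g., RSUs/intersections), assumed
STATE_DIM = 6      # local traffic features (queue lengths, waiting times), assumed
N_ACTIONS = 2      # e.g., keep vs. switch the current signal phase, assumed
ALPHA, GAMMA = 0.05, 0.9


def local_q_update(weights, n_steps=50):
    """One local training round: linear Q-learning on synthetic transitions."""
    w = weights.copy()
    for _ in range(n_steps):
        s = rng.normal(size=STATE_DIM)                    # observed traffic state
        a = int(np.argmax(w @ s))                         # greedy action
        r = -np.abs(s).sum() + (1.0 if a == 1 else 0.0)   # toy congestion penalty
        s_next = 0.9 * s + rng.normal(scale=0.1, size=STATE_DIM)
        td = r + GAMMA * np.max(w @ s_next) - (w @ s)[a]
        w[a] += ALPHA * td * s                            # TD update for the taken action
    return w


# --- one federated round ---
global_w = np.zeros((N_ACTIONS, STATE_DIM))
samples = rng.integers(50, 150, size=N_NODES)             # local data volumes (FedAvg weights)

# Pairwise masks: node i adds +m_ij, node j adds -m_ij, so the masks cancel in
# the sum and the server never sees an individual node's update in the clear.
pair_masks = {(i, j): rng.normal(size=global_w.shape)
              for i in range(N_NODES) for j in range(i + 1, N_NODES)}

uploads = []
for i in range(N_NODES):
    local_w = local_q_update(global_w)
    masks = [pair_masks[(i, j)] for j in range(i + 1, N_NODES)]
    masks += [-pair_masks[(j, i)] for j in range(i)]
    uploads.append(samples[i] * local_w + sum(masks))     # mask the weighted update

# Server aggregation: masks cancel, leaving the sample-weighted average (FedAvg).
global_w = sum(uploads) / samples.sum()
print("aggregated Q-weights:", global_w.shape)
```

In a full system, the local learner would be a DQN or actor-critic policy for signal phasing and the additive masking would be replaced by a complete secure aggregation protocol; the round structure shown is only the generic federated pattern the abstract alludes to.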
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.