In many engineered systems, control is not carried out by a single authority but instead through a hierarchy. A central player may set the rules or policies, while lower-level actors such as households, vehicles or subsystems respond in turn. Yet much of today’s artificial intelligence research assumes that all agents are equal, operating at the same time with the same access to information. That assumption can work for theoretical models but does not reflect the realities of power grids, transportation systems or autonomous fleets, where decision-making is uneven and often limited by bandwidth or partial information.
Zhen Ni, Ph.D., senior author, IEEE senior member, and an associate professor in the Department of Electrical Engineering and Computer Science at Florida Atlantic University, has proposed a new AI framework that takes this imbalance into account. The model builds on reinforcement learning but incorporates hierarchical decision-making through a game-theoretic structure known as the Stackelberg-Nash game. In this setup, a leader makes the first move, while followers adapt their actions in response. This mirrors real-world systems such as utilities and consumers, or traffic lights and vehicles.
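The leader-follower logic can be sketched in a few lines of code. In the minimal example below, the follower best-responds to whatever the leader commits to, and the leader optimizes while anticipating that response. The payoff functions and the action grid are illustrative assumptions for this sketch, not the formulation used in the paper.

```python
# Hypothetical sketch of a Stackelberg (leader-follower) interaction:
# the leader commits first, the follower observes and best-responds,
# and the leader anticipates that response when choosing its action.

GRID = [i / 100 for i in range(201)]  # candidate actions in [0, 2]

def follower_payoff(a_leader, a_follower):
    # Follower tracks the leader's action while penalizing its own effort.
    return -(a_follower - a_leader) ** 2 - 0.1 * a_follower ** 2

def leader_payoff(a_leader, a_follower):
    # Leader steers the joint outcome toward a target of 1.0 at low cost.
    return -(a_leader + a_follower - 1.0) ** 2 - 0.05 * a_leader ** 2

def best_response(a_leader):
    """Follower's optimal reply to an observed leader action."""
    return max(GRID, key=lambda f: follower_payoff(a_leader, f))

def stackelberg_solution():
    """Leader optimizes while anticipating the follower's best response."""
    a_l = max(GRID, key=lambda l: leader_payoff(l, best_response(l)))
    return a_l, best_response(a_l)

a_l, a_f = stackelberg_solution()
print(f"leader action: {a_l:.2f}, follower response: {a_f:.2f}")
```

The asymmetry is visible in the structure: `best_response` only sees the leader's committed action, while `stackelberg_solution` reasons about the follower's reaction before moving, exactly the uneven footing the article describes.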
Zhong, X., & Ni, Z. (2025). Intelligent Control in Asymmetric Decision-Making: An Event-Triggered RL Approach for Mismatched Uncertainties. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 55(10), 7288–7301. https://doi.org/10.1109/TSMC.2025.3583066
The team’s approach adds an event-triggered mechanism that updates decisions only when necessary, rather than at every time step. This reduces computational load and conserves resources while maintaining stability and performance. Simulation studies have shown that the method can guide decision-makers toward equilibrium strategies while avoiding unnecessary updates, offering both robustness and efficiency.
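The core idea of event triggering can be illustrated with a toy control loop. In the sketch below, the controller holds its last computed action and recomputes only when the state has drifted beyond a threshold from the sample used at the last update. The scalar plant, gain, and threshold are illustrative assumptions, not the paper's design.

```python
# Simplified, hypothetical illustration of an event-triggered update rule:
# the control is recomputed only when the state deviates enough from the
# value sampled at the last update event, not at every time step.

def simulate(steps=200, threshold=0.05, gain=2.0, dt=0.1):
    x = 1.0            # state, to be driven toward zero
    x_at_update = x    # state sample held from the last update event
    u = -gain * x_at_update
    updates = 0
    for _ in range(steps):
        # Event condition: the held sample has become stale.
        if abs(x - x_at_update) > threshold:
            x_at_update = x
            u = -gain * x_at_update
            updates += 1
        x += dt * (x + u)  # unstable plant x' = x + u under held control
    return x, updates

final_state, n_updates = simulate()
print(f"final state: {final_state:.3f}, updates: {n_updates} of 200 steps")
```

Shrinking `threshold` pushes the scheme back toward one update per step; enlarging it trades tracking accuracy for fewer updates, which is the efficiency-versus-performance balance the article highlights.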
Xiangnan Zhong, Ph.D., first author and a professor in the Department of Electrical Engineering and Computer Science at Florida Atlantic University, said:
“Instead of constantly updating decisions at every time step, which is typical of many AI systems, our method updates decisions only when necessary, saving energy and processing power while maintaining performance and stability.”
The significance of this research lies in its ability to handle mismatched uncertainties. Real-world systems rarely operate on perfect or shared information; one player may have access to detailed data while another must act with limited knowledge. By designing an AI method that can accommodate these disparities, the researchers move closer to practical tools for infrastructure where reliability and efficiency are critical.
The framework also represents an attempt to merge advances in reinforcement learning with principles from control theory. Past studies have examined multi-agent reinforcement learning and event-triggered control separately, but this work combines the two to address asymmetric and uncertain environments. It marks a step toward making AI methods more suitable for engineered systems that cannot be simplified into clean, symmetric models.
Challenges remain before such an approach can be deployed in operational settings. Scaling up to city-wide power grids or large transportation networks may reveal computational or communication hurdles. Real systems also introduce noise, delays, and shifting dynamics that can test the limits of theoretical models. Nevertheless, the researchers are already looking ahead to larger trials and long-term integration with energy management, traffic control, and autonomous systems.
The work highlights a growing trend in artificial intelligence: moving beyond idealized scenarios and toward frameworks that reflect the messy, resource-constrained, and hierarchical nature of real-world infrastructure. By embedding structural reasoning and adaptive updating into reinforcement learning, the Florida Atlantic University team has provided a path forward for intelligent control that may prove valuable across sectors where stability, efficiency, and resilience are essential.

Adrian graduated with a Master’s degree (1st Class Honours) in Chemical Engineering from Chester University along with Harris. His master’s research aimed to develop a standardised clean-water oxygenation transfer procedure for testing bubble diffusers currently used in the commercial wastewater industry. He has also undergone placements in both the US and China, primarily focused within the R&D department, and is an associate member of the Institution of Chemical Engineers (IChemE).