
LLMs for Swarm Robotics

CMU Safe AI Lab, Prof. Ding Zhao

Pittsburgh, PA · August 2024 - May 2025

Enhanced multi-agent coordination by integrating LLMs into Hierarchical RL policies

LLMs · Multi-Agent Systems · Reinforcement Learning · Swarm Robotics · AI Safety

Research Overview

This project investigates the integration of Large Language Models (LLMs) into multi-agent robotic systems to enhance coordination, planning, and adaptability in swarm robotics scenarios.

Research Focus

Multi-Agent Coordination

Enhanced multi-agent coordination by integrating LLMs into existing Hierarchical Reinforcement Learning (HRL) policies, enabling:

  • Natural language-based task specification
  • Improved inter-agent communication
  • Adaptive behavior in dynamic environments
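As a minimal sketch of what natural language-based task specification can look like in practice: an LLM turns a free-form task into a structured per-agent plan, which the coordination layer validates before dispatching subgoals to each agent's low-level policy. Everything here is illustrative — the plan schema, agent names, and the stubbed LLM call are assumptions, not the lab's actual interface.

```python
import json

def plan_from_llm(task: str) -> str:
    """Stand-in for an LLM call. A real system would prompt a language
    model with the task description plus a JSON schema for the plan;
    here we return a canned response for illustration."""
    canned = {
        "survey the field": [
            {"agent": "drone_0", "subgoal": "scan_north_half"},
            {"agent": "drone_1", "subgoal": "scan_south_half"},
        ]
    }
    return json.dumps(canned[task])

def dispatch(plan_json: str) -> dict:
    """Validate the LLM's plan and map each agent to its subgoal.
    Rejecting malformed items before execution is one simple guard
    against unconstrained LLM output reaching the robots."""
    assignments = {}
    for item in json.loads(plan_json):
        if "agent" not in item or "subgoal" not in item:
            raise ValueError(f"malformed plan item: {item!r}")
        assignments[item["agent"]] = item["subgoal"]
    return assignments

assignments = dispatch(plan_from_llm("survey the field"))
```

The key design point is that the LLM only produces structured intent; execution stays with the learned low-level controllers, so a parsing or validation failure stops the plan before any agent acts on it.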

Safe AI Lab

Conducted research under Prof. Ding Zhao at Carnegie Mellon’s Safe AI Lab, focusing on:

  • AI safety in multi-agent systems
  • Verification and validation of learned behaviors
  • Robust coordination under uncertainty

Technical Approach

  • Hierarchical Reinforcement Learning (HRL) for task decomposition
  • LLM-based planning and coordination
  • Multi-agent simulation environments
  • Safety-critical decision making
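One common way to combine the pieces above — HRL option selection with safety-critical decision making — is to gate the high-level policy's choice behind per-option safety preconditions, so an unsafe option is skipped even if it scores highest. A minimal sketch under assumed names (the options, state fields, and scores are hypothetical, not the project's actual design):

```python
# Hypothetical option library: each option names a low-level policy and
# carries a precondition that must hold in the current state to run it.
OPTIONS = {
    "goto_waypoint": lambda s: True,
    "dock":          lambda s: s["battery"] > 0.2,
    "fast_transit":  lambda s: s["min_separation"] > 1.0,
}

def select_option(state: dict, scores: dict) -> str:
    """High-level step: rank options by score (in the real system the
    scores would come from a learned policy or LLM planner) and return
    the best-scoring option whose safety precondition holds."""
    for name in sorted(scores, key=scores.get, reverse=True):
        if OPTIONS[name](state):
            return name
    raise RuntimeError("no safe option available in this state")

# Low battery and close neighbors rule out "dock" and "fast_transit",
# so the safe fallback "goto_waypoint" is selected despite its score.
state = {"battery": 0.1, "min_separation": 0.5}
scores = {"fast_transit": 0.9, "dock": 0.8, "goto_waypoint": 0.3}
chosen = select_option(state, scores)  # → "goto_waypoint"
```

Keeping the safety check outside the learned components means the guarantee does not depend on the policy or the LLM behaving well, which is the spirit of the verification work described above.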

Impact

Exploring how language models can improve the interpretability and flexibility of multi-agent systems while maintaining safety guarantees—a critical challenge for deploying autonomous systems in real-world scenarios.