GuardAgent is introduced as the first LLM agent that serves as a guardrail for other LLM agents, enhancing their safety and trustworthiness. It uses knowledge-enabled reasoning to understand guard requests and to generate reliable guardrails. On the proposed benchmarks, GuardAgent moderates invalid inputs and outputs of different types of target agents with high accuracy, demonstrating its effectiveness.
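
For intuition, the sketch below illustrates the general wrap-and-moderate pattern such a guardrail agent embodies: the target agent's inputs and outputs are checked against a guard request before they are passed through. All class and function names here (GuardRequest, GuardrailAgent, moderate, the toy target agent) are illustrative assumptions, not GuardAgent's actual interface; GuardAgent itself derives its checks through knowledge-enabled LLM reasoning rather than hand-written rules.

```python
# Minimal, illustrative sketch of the guardrail-agent pattern (assumed names,
# simplified logic). The real GuardAgent uses LLM-based, knowledge-enabled
# reasoning to turn guard requests into executable guardrails.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GuardRequest:
    """A natural-language guard request plus concrete checks derived from it."""
    description: str
    input_checks: list[Callable[[str], bool]] = field(default_factory=list)
    output_checks: list[Callable[[str], bool]] = field(default_factory=list)

@dataclass
class GuardrailAgent:
    """Wraps a target agent and moderates its inputs and outputs."""
    request: GuardRequest

    def moderate(self, target_agent: Callable[[str], str], user_input: str) -> str:
        # Block the request before it reaches the target agent if any input check fails.
        if not all(check(user_input) for check in self.request.input_checks):
            return f"[BLOCKED INPUT] Violates guard request: {self.request.description}"
        output = target_agent(user_input)
        # Block the response before it reaches the user if any output check fails.
        if not all(check(output) for check in self.request.output_checks):
            return f"[BLOCKED OUTPUT] Violates guard request: {self.request.description}"
        return output

# Toy usage: a hypothetical target agent guarded by a simple access-control rule.
def toy_records_agent(query: str) -> str:
    return f"Record lookup result for: {query}"

guard = GuardrailAgent(
    GuardRequest(
        description="This role may not query billing information.",
        input_checks=[lambda q: "billing" not in q.lower()],
    )
)

print(guard.moderate(toy_records_agent, "vital signs for patient 12"))   # passes through
print(guard.moderate(toy_records_agent, "billing total for patient 12")) # blocked
```

The design point is that the guardrail sits outside the target agent: it can moderate both the inputs it forwards and the outputs it returns, without requiring any changes to the target agent itself.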