# Multi-Agent Mission
For complex multi-step missions, a single agent quickly hits limits — it must handle high-level planning and low-level control simultaneously, filling its context with irrelevant details.
Splitting into two agents solves this:
- Planner keeps a clean, high-level view of the mission. It decides what to do next.
- Executor focuses on physical control — navigation, camera modes, manipulation. It decides how to do it.
Each agent uses a model tuned for its role: the Planner uses a slow, deep-thinking model; the Executor uses a fast, reactive model. This keeps latency low and reasoning quality high.
## How it works

1. The Planner receives the mission goal and breaks it into subtasks.
2. For each subtask, the Planner calls `execute_subtask`, which delegates control to the Executor and blocks until it finishes.
3. The Executor navigates, manipulates objects, and calls `finish_task` when done or stuck. Its report is returned to the Planner.
4. The Planner reads the report and plans the next step.
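The delegation loop above can be sketched in plain Python. This is an illustrative toy, not the actual library API: `SimpleExecutor` and this version of `create_execute_subtask` are stand-ins that only show the control flow (the planner calls a blocking tool, the executor finishes and hands back a report).

```python
def create_execute_subtask(executor):
    """Wrap an executor so a planner can call it as a blocking tool."""
    def execute_subtask(instruction: str) -> str:
        # Delegates control to the executor and blocks until it finishes;
        # the executor's final report is returned to the planner.
        return executor.run(instruction)
    return execute_subtask

class SimpleExecutor:
    """Toy executor: 'performs' a subtask and reports the outcome."""
    def run(self, instruction: str) -> str:
        # A real executor would navigate and manipulate here,
        # then call finish_task with a report.
        return f"done: {instruction}"

executor = SimpleExecutor()
execute_subtask = create_execute_subtask(executor)

# Planner loop: work through subtasks, reading each report before planning the next.
subtasks = ["find tissues", "pick up tissues", "drop tissues in trash bin"]
reports = [execute_subtask(s) for s in subtasks]
print(reports[-1])  # -> done: drop tissues in trash bin
```

The key design point is that `execute_subtask` is synchronous from the planner's perspective: the slow, deliberate model only runs between subtasks, while the fast model handles everything inside one.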
## Example script

See `examples/4_xlerobot_multiagent_cooperation.py` for the full example. Key parts:
### Executor — fast reactive model

```python
executor = XLeRobotAgent(
    model="google_genai:gemini-3-flash-preview",
    thinking_level="minimal",  # fast decisions
    tools=[move_forward, turn_left, turn_right, pick_up_tissue, finish_task, ...],
    system_prompt=controller_prompt,
    ...
)
```

### Planner — strategic reasoning model

```python
execute_subtask = create_execute_subtask(executor)

planner = LLMAgent(
    model="google_genai:gemini-3.1-pro-preview",
    thinking_level="high",  # deep reasoning
    tools=[look_around, execute_subtask, finish_task],
    system_prompt=planner_prompt,
    ...
)

planner.task = "Find tissues on the table, pick them up, find the trash bin and throw them away."
planner.go()
```