Cognition Recipes Overview
Build intelligent agents from the ground up using EmbodiedAgents, the EMOS intelligence framework. These recipes introduce the core Components – the modular building blocks that drive your physical agents.
Every capability – hearing, speaking, seeing, thinking – is a component you wire together in pure Python. No ROS XML, no boilerplate.
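The wiring model described above can be sketched in plain Python. The names below (`Graph`, `Component`, the topic strings) are illustrative stand-ins, not the framework's actual API — the point is only that components subscribe to input topics and publish to output topics:

```python
from collections import defaultdict
from typing import Callable

class Graph:
    """Minimal pub/sub bus standing in for the agent graph (illustrative only)."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, msg: str) -> None:
        for handler in self._subs[topic]:
            handler(msg)

class Component:
    """A toy component: reads one input topic, writes one output topic."""
    def __init__(self, graph: Graph, inputs: str, outputs: str,
                 fn: Callable[[str], str]) -> None:
        self.graph, self.outputs, self.fn = graph, outputs, fn
        graph.subscribe(inputs, self._on_msg)

    def _on_msg(self, msg: str) -> None:
        self.graph.publish(self.outputs, self.fn(msg))

graph = Graph()
# Stand-ins for an STT -> LLM -> TTS chain; real components would call models.
Component(graph, "audio_in", "text_in", lambda m: f"transcript({m})")
Component(graph, "text_in", "text_out", lambda m: f"reply({m})")
replies: list[str] = []
graph.subscribe("text_out", replies.append)

graph.publish("audio_in", "hello")
print(replies[0])  # -> "reply(transcript(hello))"
```

Swapping a component means changing one constructor call; the rest of the graph is untouched, which is the payoff of the component model.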
- Your first agent – wire STT, VLM, and TTS into a multimodal dialogue system.
- Shape agent behavior with dynamic Jinja2 templates at the topic or component level.
- Give your robot spatio-temporal memory backed by a Vector DB.
- Navigate to locations from natural language commands.
- Give the LLM access to executable functions so it can act on the world.
- Route messages to different graph branches based on meaning, not topic names.
- Combine perception, memory, and reasoning into a fully embodied system.
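The templating recipe above builds on standard Jinja2 rendering. As a minimal sketch — the template text and variable names here are hypothetical, not the framework's actual template context:

```python
from jinja2 import Template

# Hypothetical system-prompt template; variables are illustrative only.
prompt = Template(
    "You are {{ robot_name }}, a robot at {{ location }}. "
    "{% if detected_objects %}You can see: {{ detected_objects | join(', ') }}.{% endif %}"
)

rendered = prompt.render(
    robot_name="Rover",
    location="the kitchen",
    detected_objects=["a mug", "a plate"],
)
print(rendered)
# -> You are Rover, a robot at the kitchen. You can see: a mug, a plate.
```

Because the template is re-rendered per message, perception results published on a topic can flow straight into the prompt without touching component code.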
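Routing by meaning rather than topic name can be illustrated with a toy similarity router. A real semantic router would use a text-embedding model; the bag-of-words cosine below is only a stand-in, and the route names and example phrases are invented for this sketch:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real router would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each route is described by example phrases, not by a topic name.
routes = {
    "navigation": embed("go to move to drive to the kitchen door"),
    "chat": embed("hello how are you tell me a joke"),
}

def route(msg: str) -> str:
    # Send the message down the branch whose examples it most resembles.
    return max(routes, key=lambda r: cosine(embed(msg), routes[r]))

print(route("go to the door"))   # -> navigation
print(route("tell me a joke"))   # -> chat
```

The graph branch a message takes is then a function of its content, so new intents can be added by supplying new example phrases rather than rewiring topics.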