EmbodiedAgents
The intelligence layer of EMOS – production-grade orchestration for Physical AI
EmbodiedAgents enables you to create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment. It bridges the gap between foundation AI models and real-world robotic deployment, offering a structured yet flexible programming model for building adaptive intelligence.
Production-Ready Physical Agents – Designed for autonomous systems in dynamic, real-world environments. Components are built around ROS2 Lifecycle Nodes with deterministic startup, shutdown, and error recovery. Health monitoring, fallback behaviors, and graceful degradation are built in from the ground up.
Self-Referential and Event-Driven – Agents can start, stop, or reconfigure their own components based on internal and external events. Switch from cloud to local inference, swap planners based on vision input, or adjust behavior on the fly. In the spirit of Gödel machines, agents become capable of introspecting and modifying their own execution graph at runtime.
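The core idea can be shown in a minimal, framework-free sketch: an agent registers handlers for named events and mutates its own configuration when one fires. All class and event names here are illustrative, not EmbodiedAgents' actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ToyAgent:
    """Toy agent that swaps its inference backend when an event fires.

    Purely illustrative; the real framework does this with ROS2
    lifecycle transitions on its components.
    """
    backend: str = "cloud"
    handlers: Dict[str, Callable[["ToyAgent"], None]] = field(default_factory=dict)

    def on(self, event: str, handler: Callable[["ToyAgent"], None]) -> None:
        self.handlers[event] = handler

    def emit(self, event: str) -> None:
        # Dispatch the event to its registered handler, if any.
        if event in self.handlers:
            self.handlers[event](self)

agent = ToyAgent()
# Fall back to local inference when connectivity is lost.
agent.on("network_lost", lambda a: setattr(a, "backend", "local"))
agent.emit("network_lost")
print(agent.backend)  # → local
```

The same pattern generalizes: handlers can start or stop whole components, not just flip a field, which is what makes the execution graph modifiable at runtime.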
Semantic Memory – Hierarchical spatio-temporal memory and semantic routing for arbitrarily complex agentic information flow. Components like MapEncoding and SemanticRouter let robots maintain structured, queryable representations of their environment over time – no bloated GenAI frameworks required.
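To make "semantic routing" concrete, here is a deliberately simplified sketch that picks a downstream route by matching input text against per-route vocabularies. The real SemanticRouter routes on embedding similarity backed by a vector database; the keyword matching and route names below are stand-ins.

```python
# Illustrative keyword-based router; the real component uses
# embedding similarity, not keyword overlap.
ROUTES = {
    "navigation": {"go", "move", "navigate", "where"},
    "manipulation": {"pick", "grasp", "place", "put"},
    "dialogue": set(),  # fallback route
}

def route(text: str) -> str:
    """Return the route whose vocabulary best matches the input text."""
    words = set(text.lower().split())
    best, best_score = "dialogue", 0
    for name, vocab in ROUTES.items():
        score = len(words & vocab)
        if score > best_score:
            best, best_score = name, score
    return best

print(route("go to the kitchen"))    # → navigation
print(route("pick up the red cup"))  # → manipulation
print(route("tell me a joke"))       # → dialogue
```

Swapping keyword overlap for cosine similarity over embeddings gives the behavior the component describes: messages flow to whichever consumer is semantically closest.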
Pure Python, Native ROS2 – Define complex asynchronous execution graphs in standard Python without touching XML launch files. Underneath, everything is pure ROS2 – fully compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
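The "execution graph" framing is just publish/subscribe wiring: components are connected only by topic names, never by direct references. A framework-free sketch of that shape (class and topic names are illustrative; the real framework uses ROS2 publishers and subscribers underneath):

```python
# Minimal pub/sub bus standing in for the ROS2 graph.
class Bus:
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, fn):
        self.subs.setdefault(topic, []).append(fn)

    def publish(self, topic, msg):
        for fn in self.subs.get(topic, []):
            fn(msg)

bus = Bus()
# Two "components": a planner consuming detections and a printer
# consuming plans, coupled only through topic names.
bus.subscribe("detections", lambda msg: bus.publish("plan", f"approach {msg}"))
bus.subscribe("plan", print)
bus.publish("detections", "person")  # → approach person
```

Because the coupling is purely by name, any stage can be replaced, or re-subscribed at runtime, without touching the others; that is what lets a Python script stand in for an XML launch file.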
What You Can Build
Speech-to-text, LLM reasoning, and text-to-speech pipelines for natural dialogue.
VLMs for high-level planning and VLAs for end-to-end motor control.
Map encoding and spatio-temporal memory for context-aware movement.
Semantic routing of information between perception, reasoning, and action based on message content.
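The first item above, a speech-to-speech dialogue loop, reduces to three chained stages. A stub sketch with placeholder functions (each stand-in would be a real STT model, LLM, and TTS engine in practice; all names are illustrative):

```python
# Stub dialogue pipeline; each stage is a placeholder for a real model.
def speech_to_text(audio: bytes) -> str:
    return audio.decode()          # stand-in for an STT model

def llm_respond(prompt: str) -> str:
    return f"You said: {prompt}"   # stand-in for LLM reasoning

def text_to_speech(text: str) -> bytes:
    return text.encode()           # stand-in for a TTS engine

def dialogue_turn(audio: bytes) -> bytes:
    # Chain the three stages into one conversational turn.
    return text_to_speech(llm_respond(speech_to_text(audio)))

print(dialogue_turn(b"hello robot"))  # → b'You said: hello robot'
```

In the framework, each stage would be its own component exchanging messages over topics, so any of them can be swapped (e.g. cloud LLM for local) without rewriting the pipeline.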
Next Steps
AI Components – The core building blocks: components and topics.
Inference Clients – How inference backends connect to components.
Models – Available model wrappers and vector databases.